Re: [sqlite] How to achieve fastest possible write performance for a strange and limited case

2013-03-29 Thread Dominique Devienne
On Fri, Mar 29, 2013 at 8:17 PM, Jeff Archer <jsarc...@nanotronicsimaging.com> wrote:

> I could write directly to a file myself.  But I would need to provide some
> minimal indexing, some amount of housekeeping to manage variable
> sized BLOBS and some minimal synchronization so that multiple instances of
> the same application could operate simultaneously on a single set of data.
>

Have you looked at HDF5? Aside from synchronization, HDF5 gives you the
indexing and book-keeping. It was designed for large "structured" binary
data. By structured I mean storing n-D arrays of "structs" that can be
"chunked" or "tiled" and compressed as desired, transparently, and
accessed in whole or in parts (hyperslabs) as desired, optimizing the access
for you. An HDF5 file is basically like a file-system, with folders, and
the "files" are "datasets", and HDF5 knows what's inside each dataset (say
a 3D array of {x,y,z} doubles in a Point struct). It's structured storage for
binary scientific data. You can associate "attributes" (key/value pairs,
the value's type, primitive or aggregate, known to HDF5 just as for datasets)
with datasets, which are kept in B-trees, and the whole thing is "paged" with a
B-tree of pages and a page cache, similar to sqlite3. HDF5 is fast. I've
often wished for the ability to "type" my sqlite blobs similar to how
everything is typed in HDF5. You could in fact stuff HDF5 inside a blob,
since HDF5 has a VFL (again, just like sqlite3; the parallels are many), but
you can't do much with that in-blob data using sqlite SQL, and since custom
functions can only return scalars, there's little point. Might as well
store the HDF5 files on disk next to the sqlite db file. But I guess I'm
digressing here :)

We've used a mix of SQLite3 and HDF5, which worked well for us, though we
didn't have to worry about concurrent access. --DD
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to achieve fastest possible write performance for a strange and limited case

2013-03-29 Thread David King
In case you haven't seen this: http://www.sqlite.org/intern-v-extern-blob.html 


On Friday, 29 March, 2013 at 13:52, Jeff Archer wrote:

> On Fri, Mar 29, 2013 at 4:46 PM, Richard Hipp <d...@sqlite.org> wrote:
> > On Fri, Mar 29, 2013 at 4:41 PM, Michael Black <mdblac...@yahoo.com> wrote:
> > 
> > > I think many people would tell you not to store your images in your
> > > database.
> > > Just store a filepath to them.
> > > That will speed things up quite a bit and even possibly prevent having to
> > > use an SSD.
> > > 
> > > With the filepath your processing apps can use file locking too if you 
> > > need
> > > it.
> > > 
> > 
> > 
> > Lots of people think this. But experiments suggest otherwise, depending on
> > the size of your images.
> > 
> > For BLOBs smaller than about 100KB, it is actually faster to read them out
> > of an SQLite database than it is out of a file. Presumably the extra
> > overhead of the open/close is what makes the difference. For BLOBs larger
> > than 100KB it is faster to store them in a file. That's for reading - I've
> > never run the experiment for writing. Note also that the 100KB is highly
> > operating-system and file-system and storage-media dependent and so your
> > mileage may vary.
> > 
> 
> 
> FWIW. My tests are definitely indicating that BLOBs in database are
> significantly faster.




Re: [sqlite] How to achieve fastest possible write performance for a strange and limited case

2013-03-29 Thread Jeff Archer
On Fri, Mar 29, 2013 at 4:46 PM, Richard Hipp  wrote:
> On Fri, Mar 29, 2013 at 4:41 PM, Michael Black  wrote:
>
>> I think many people would tell you not to store your images in your
>> database.
>> Just store a filepath to them.
>> That will speed things up quite a bit and even possibly prevent having to
>> use an SSD.
>>
>> With the filepath your processing apps can use file locking too if you need
>> it.
>>
>
> Lots of people think this.  But experiments suggest otherwise, depending on
> the size of your images.
>
> For BLOBs smaller than about 100KB, it is actually faster to read them out
> of an SQLite database than it is out of a file.  Presumably the extra
> overhead of the open/close is what makes the difference.  For BLOBs larger
> than 100KB it is faster to store them in a file.  That's for reading - I've
> never run the experiment for writing.  Note also that the 100KB is highly
> operating-system and file-system and storage-media dependent and so your
> mileage may vary.
>

FWIW.  My tests are definitely indicating that BLOBs in database are
significantly faster.


Re: [sqlite] Performance with journal_mode = off

2013-03-29 Thread Jeff Archer
>On Fri, Mar 29, 2013 at 4:05 PM, Simon Slavin  wrote:
>
>No.  The two things have nothing to do with one-another.  Transactions are 
>about grouping changes together
>so that either they all happen or none happen.  Journalling is about surviving 
>through crashes and
>power-failures, and being able to undo changes.  Transactions are about 
>programming.  Journalling is about
>hardware failure.  Either one can be useful without the other.

Thank you.  This helps my understanding.  This would be good
information to have on the documentation page for PRAGMA journal_mode.
Or maybe this is common knowledge and I am just illustrating my
ignorance.


Re: [sqlite] How to achieve fastest possible write performance for a strange and limited case

2013-03-29 Thread Richard Hipp
On Fri, Mar 29, 2013 at 4:41 PM, Michael Black  wrote:

> I think many people would tell you not to store your images in your
> database.
> Just store a filepath to them.
> That will speed things up quite a bit and even possibly prevent having to
> use an SSD.
>
> With the filepath your processing apps can use file locking too if you need
> it.
>

Lots of people think this.  But experiments suggest otherwise, depending on
the size of your images.

For BLOBs smaller than about 100KB, it is actually faster to read them out
of an SQLite database than it is out of a file.  Presumably the extra
overhead of the open/close is what makes the difference.  For BLOBs larger
than 100KB it is faster to store them in a file.  That's for reading - I've
never run the experiment for writing.  Note also that the 100KB is highly
operating-system and file-system and storage-media dependent and so your
mileage may vary.

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] How to achieve fastest possible write performance for a strange and limited case

2013-03-29 Thread Michael Black
I think many people would tell you not to store your images in your
database.
Just store a filepath to them.
That will speed things up quite a bit and even possibly prevent having to
use an SSD.

With the filepath your processing apps can use file locking too if you need
it.


-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Jeff Archer
Sent: Friday, March 29, 2013 2:18 PM
To: SQLite-user.org
Subject: [sqlite] How to achieve fastest possible write performance for a
strange and limited case

I have previously made an apparently bad assumption about this so now I
would like to go back to the beginning of the problem and ask the most
basic question first without any preconceived ideas.

This use case is from an image processing application.  I have a large
amount of intermediate data (way exceeds physical memory on my 24GB
machine).  So, I need to store it temporarily on disk until getting to next
phase of processing.  I am planning to use a large SSD dedicated to holding
this temporary data.  I do not need any recoverability in case of hardware,
power or other failure.   Each item to be stored is 9 DWORDs, 4 doubles and
2 variable sized BLOBS which are images.

I could write directly to a file myself.  But I would need to provide some
minimal indexing, some amount of housekeeping to manage variable
sized BLOBS and some minimal synchronization so that multiple instances of
the same application could operate simultaneously on a single set of data.

So, then I thought that SQLite could manage these things nicely for me so
that I don't have to write and debug indexing and housekeeping code that
already exists in SQLite.

So, question is:  What is the way to get the fastest possible performance
from SQLite when I am willing to give up all recoverability guarantees?
Or, is it simply that I should just write directly to the file myself?


Re: [sqlite] How to achieve fastest possible write performance for a strange and limited case

2013-03-29 Thread Dominique Pellé
Pavel Ivanov  wrote:

> On Fri, Mar 29, 2013 at 12:17 PM, Jeff Archer wrote:
>> I have previously made an apparently bad assumption about this so now I
>> would like to go back to the beginning of the problem and ask the most
>> basic question first without any preconceived ideas.
>>
>> This use case is from an image processing application.  I have a large
>> amount of intermediate data (way exceeds physical memory on my 24GB
>> machine).  So, I need to store it temporarily on disk until getting to next
>> phase of processing.  I am planning to use a large SSD dedicated to holding
>> this temporary data.  I do not need any recoverability in case of hardware,
>> power or other failure.   Each item to be stored is 9 DWORDs, 4 doubles and
>> 2 variable sized BLOBS which are images.
>>
>> I could write directly to a file myself.  But I would need to provide some
>> minimal indexing, some amount of housekeeping to manage variable
>> sized BLOBS and some minimal synchronization so that multiple instances of
>> the same application could operate simultaneously on a single set of data.
>>
>> So, then I thought that SQLite could manage these things nicely for me so
>> that I don't have to write and debug indexing and housekeeping code that
>> already exists in SQLite.
>>
>> So, question is:  What is the way to get the fastest possible performance
>> from SQLite when I am willing to give up all recoverability guarantees?
>
> Use
> pragma journal_mode = off;
> pragma synchronous = off;
> pragma locking_mode = exclusive;
>
> In addition to that you may issue BEGIN statement at the beginning of
> the application and never COMMIT.


Yes that should be the fastest.

In addition:

* make sure that you prepare your INSERT query just once
  before the insert loop, and use bind/step/reset in the
  insertion loop.

* avoid duplicating the implicit uid index with another
  primary key by using INTEGER PRIMARY KEY
  and not something like INT PRIMARY KEY

* if you have indexes, make sure that you create them
  after all the inserts, rather than before.

* if you have several tables to populate, you can consider
  storing them in different databases, and populate them
  in parallel in different processes or threads. Then later
  you can ATTACH all of them to see them as a unique
  database.
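The points above can be sketched with Python's built-in sqlite3 module (a sketch only; column names and blob sizes are invented for illustration, and `executemany` is the module's analogue of prepare once + bind/step/reset in the C API):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "scratch.db")
con = sqlite3.connect(path, isolation_level=None)  # autocommit; we issue BEGIN/COMMIT ourselves

# INTEGER PRIMARY KEY aliases the rowid, so no second index is maintained;
# "INT PRIMARY KEY" would create a separate UNIQUE index instead.
con.execute("CREATE TABLE item(id INTEGER PRIMARY KEY, x REAL, img BLOB)")

rows = ((i * 0.5, b"\x00" * 64) for i in range(1000))
con.execute("BEGIN")
# executemany compiles the INSERT once and rebinds parameters per row,
# rather than re-parsing SQL on every iteration.
con.executemany("INSERT INTO item(x, img) VALUES (?, ?)", rows)
con.execute("COMMIT")

# Build any secondary index only after the bulk load, per the advice above.
con.execute("CREATE INDEX item_x ON item(x)")
```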

Regards
Dominique


Re: [sqlite] How to achieve fastest possible write performance for a strange and limited case

2013-03-29 Thread Pavel Ivanov
On Fri, Mar 29, 2013 at 12:17 PM, Jeff Archer wrote:
> I have previously made an apparently bad assumption about this so now I
> would like to go back to the beginning of the problem and ask the most
> basic question first without any preconceived ideas.
>
> This use case is from an image processing application.  I have a large
> amount of intermediate data (way exceeds physical memory on my 24GB
> machine).  So, I need to store it temporarily on disk until getting to next
> phase of processing.  I am planning to use a large SSD dedicated to holding
> this temporary data.  I do not need any recoverability in case of hardware,
> power or other failure.   Each item to be stored is 9 DWORDs, 4 doubles and
> 2 variable sized BLOBS which are images.
>
> I could write directly to a file myself.  But I would need to provide some
> minimal indexing, some amount of housekeeping to manage variable
> sized BLOBS and some minimal synchronization so that multiple instances of
> the same application could operate simultaneously on a single set of data.
>
> So, then I thought that SQLite could manage these things nicely for me so
> that I don't have to write and debug indexing and housekeeping code that
> already exists in SQLite.
>
> So, question is:  What is the way to get the fastest possible performance
> from SQLite when I am willing to give up all recoverability guarantees?

Use
pragma journal_mode = off;
pragma synchronous = off;
pragma locking_mode = exclusive;

In addition to that you may issue BEGIN statement at the beginning of
the application and never COMMIT.
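A minimal sketch of the above using Python's built-in sqlite3 module (illustration only; the pragmas behave the same from any language, and `isolation_level=None` puts the module in autocommit so the explicit BEGIN is ours):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "scratch.db")
con = sqlite3.connect(path, isolation_level=None)  # autocommit; transactions managed by hand

con.execute("PRAGMA journal_mode = off")       # no rollback journal at all
con.execute("PRAGMA synchronous = off")        # no fsync; data may be lost on crash
con.execute("PRAGMA locking_mode = exclusive") # take the file lock once, keep it

con.execute("BEGIN")  # one long-lived transaction, per the advice above
con.execute("CREATE TABLE t(v)")
for i in range(100):
    con.execute("INSERT INTO t VALUES (?)", (i,))
# ... keep inserting; if the data is disposable, the process can simply
# exit without ever issuing COMMIT.
```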


Pavel


Re: [sqlite] Performance with journal_mode = off

2013-03-29 Thread Simon Slavin

On 29 Mar 2013, at 6:42pm, Jeff Archer  wrote:

> But shouldn't transactions be disabled when journal_mode = off? 

No.  The two things have nothing to do with one-another.  Transactions are 
about grouping changes together so that either they all happen or none happen.  
Journalling is about surviving through crashes and power-failures, and being 
able to undo changes.  Transactions are about programming.  Journalling is 
about hardware failure.  Either one can be useful without the other.

Seeing you ask this question probably explains your earlier confusion.  So 
yes, if you're doing trials, you have four test cases: (transactions on and 
off) times (journalling on and off) = 2 x 2.
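Those four cases can be timed with a small harness along these lines (a Python sketch; synchronous is switched off in every run, an assumption made here so that fsync cost doesn't drown out the journaling/transaction difference):

```python
import itertools
import os
import sqlite3
import tempfile
import time

def bulk_insert(journal_off, use_txn, n=5000):
    """Time n single-row inserts under one journal/transaction combination."""
    path = os.path.join(tempfile.mkdtemp(), "t.db")
    con = sqlite3.connect(path, isolation_level=None)  # autocommit mode
    con.execute("PRAGMA synchronous = off")  # keep fsync out of the comparison
    if journal_off:
        con.execute("PRAGMA journal_mode = off")
    con.execute("CREATE TABLE t(v)")
    t0 = time.perf_counter()
    if use_txn:
        con.execute("BEGIN")
    for i in range(n):
        con.execute("INSERT INTO t VALUES (?)", (i,))  # autocommits when no txn
    if use_txn:
        con.execute("COMMIT")
    elapsed = time.perf_counter() - t0
    con.close()
    return elapsed

results = {}
for j, t in itertools.product((False, True), repeat=2):
    results[(j, t)] = bulk_insert(j, t)
    print(f"journal_off={j} txn={t}: {results[(j, t)]:.3f}s")
```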

Simon.


Re: [sqlite] How to achieve fastest possible write performance for a strange and limited case

2013-03-29 Thread Simon Slavin

On 29 Mar 2013, at 7:17pm, Jeff Archer  wrote:

> This use case is from an image processing application.  I have a large
> amount of intermediate data (way exceeds physical memory on my 24GB
> machine).  So, I need to store it temporarily on disk until getting to next
> phase of processing.  I am planning to use a large SSD dedicated to holding
> this temporary data.  I do not need any recoverability in case of hardware,
> power or other failure.   Each item to be stored is 9 DWORDs, 4 doubles and
> 2 variable sized BLOBS which are images.
> 
> I could write directly to a file myself.

That's what I thought: just blat it straight out to a text file in append 
mode.  Organise it later.  But if you don't want to do that ...

> But I would need to provide some
> minimal indexing, some amount of housekeeping to manage variable
> sized BLOBS and some minimal synchronization so that multiple instances of
> the same application could operate simultaneously on a single set of data.

Locking between different simultaneous apps is going to kill you.  Do you 
really need, at this stage, one app to access data written by different writers 
?  If not, have each app write to a different database.  Then use a different 
app running on a different computer (or at least in a different process) to 
merge the data into one big file, either simultaneously or after 
data-collection is finished.

> So, then I thought that SQLite could manage these things nicely for me so
> that I don't have to write and debug indexing and housekeeping code that
> already exists in SQLite.
> 
> So, question is:  What is the way to get the fastest possible performance
> from SQLite when I am willing to give up all recoverability guarantees?
> Or, is it simply that I should just write directly to the file myself?



Journalling off and Synchronous off pragmas.  And I note you're asking for 
'fastest' whereas you really just want 'fast enough'.

However I note that you seem to have >24Gig of data to write.  At that size, 
the speed of SQLite isn't your limiting factor.  Instead you have to worry 
about the speed of your storage medium.  With 4K sectors, writing 24Gig of data 
means you're writing about 6 million sectors.  If you're writing to rotating hard 
disks that means you'll be waiting for 6 million sectors to be rotated into the 
right place.  Even at 10,000 RPM that's a lot of waiting.  Your biggest speed 
increase isn't going to come from clever programming, it's going to come from 
moving to SSD.

Simon.


Re: [sqlite] Performance with journal_mode = off

2013-03-29 Thread Jeff Archer
>Date: Thu, 28 Mar 2013 14:13:51 +0100
>From: ibrahim 
>
>... You should compare
>
>a) Journal mode=on  w/wo single transaction
>b) Journal mode=off w/wo single transaction

So, this means I can use a transaction when I have journal_mode = off?
I did not understand that this was a possibility.  I have re-read the
PRAGMA journal_mode documentation and still it sounds to me like this
cannot be done:  "The OFF journaling mode disables the atomic commit
and rollback capabilities of SQLite."  Maybe I am not reading
thoroughly...  Maybe all it truly says definitively is "Applications
must avoid using the ROLLBACK command when the journal mode is OFF."

I will rerun my experiment with a single transaction with journal_mode = off.

Thanks.


[sqlite] How to achieve fastest possible write performance for a strange and limited case

2013-03-29 Thread Jeff Archer
I have previously made an apparently bad assumption about this so now I
would like to go back to the beginning of the problem and ask the most
basic question first without any preconceived ideas.

This use case is from an image processing application.  I have a large
amount of intermediate data (way exceeds physical memory on my 24GB
machine).  So, I need to store it temporarily on disk until getting to next
phase of processing.  I am planning to use a large SSD dedicated to holding
this temporary data.  I do not need any recoverability in case of hardware,
power or other failure.   Each item to be stored is 9 DWORDs, 4 doubles and
2 variable sized BLOBS which are images.

I could write directly to a file myself.  But I would need to provide some
minimal indexing, some amount of housekeeping to manage variable
sized BLOBS and some minimal synchronization so that multiple instances of
the same application could operate simultaneously on a single set of data.

So, then I thought that SQLite could manage these things nicely for me so
that I don't have to write and debug indexing and housekeeping code that
already exists in SQLite.

So, question is:  What is the way to get the fastest possible performance
from SQLite when I am willing to give up all recoverability guarantees?
Or, is it simply that I should just write directly to the file myself?
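For what it's worth, a record of 9 DWORDs, 4 doubles and 2 variable-sized blobs maps onto a single table; a minimal sketch (column names are made up, since the post doesn't give any):

```python
import os
import sqlite3
import tempfile

# Build the DDL programmatically: 9 DWORD columns, 4 double columns, 2 blobs.
dwords  = ", ".join(f"d{i} INTEGER" for i in range(1, 10))  # the 9 DWORDs
doubles = ", ".join(f"f{i} REAL" for i in range(1, 5))      # the 4 doubles
ddl = (f"CREATE TABLE item(id INTEGER PRIMARY KEY, {dwords}, {doubles}, "
       "img1 BLOB, img2 BLOB)")                             # the 2 images

con = sqlite3.connect(os.path.join(tempfile.mkdtemp(), "scratch.db"))
con.execute(ddl)

# One sample row: 9 ints + 4 floats + 2 byte strings = 15 bound parameters.
row = tuple(range(9)) + (0.0, 0.0, 0.0, 0.0) + (b"img-a", b"img-b")
con.execute("INSERT INTO item(d1,d2,d3,d4,d5,d6,d7,d8,d9,f1,f2,f3,f4,img1,img2) "
            "VALUES (" + ",".join("?" * 15) + ")", row)
con.commit()
```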


Re: [sqlite] Performance with journal_mode = off

2013-03-29 Thread Jeff Archer
>From: "James K. Lowden" 
>To: sqlite-users@sqlite.org
>
> Your experiment is telling you different: transaction control costs more
> than I/O.

But shouldn't transactions be disabled when journal_mode = off?  Maybe that
is a faulty assumption.  If so, what is the point of journal_mode = off?
For this purpose, I am very happy to give up all the ACID promises.

If I understand your point #2, I think you are saying that all of the
inserts within a single transaction are not written to the disk (database
or journal) until the transaction is committed.  But that can't quite be
the answer because if I kept my transaction open long enough I would simply
run out of memory, and that doesn't seem to happen even when I have 1
million plus inserts.


Re: [sqlite] Install sqlite-netFx40-setup-bundle silently

2013-03-29 Thread Bernd

On 27.03.2013 10:21, Jeff Williams wrote:

> Is it possible and if so how to install sqlite-netFx40-setup-bundle in
> silent mode?
> I would like to do the install as part of a software package but would
> prefer the users are not presented with the install screens.
>
> Also, what would be the easiest way to determine if
> sqlite-netFx40-setup-bundle is already installed?
>
> Regards
> Jeff


Actually I don't have an answer to either of your questions - but 
hopefully you *do* know that installing those bundles is neither 
necessary nor (in my understanding) the recommended way to bring 
SQLite's binaries onto your client's computer.




[sqlite] SQLite Version 3.7.16.1

2013-03-29 Thread D. Richard Hipp
SQLite version 3.7.16.1 is now available on the SQLite website

 http://www.sqlite.org/

Version 3.7.16.1 is a patch release with only minor changes from version 
3.7.16.  The reason for this patch release is to fix a bug in the query 
optimizer enhancements that were part of version 3.7.15.  The optimizer bug 
could cause an ORDER BY clause to be optimized out even though sorting was 
required, resulting in output that is not in the correct order.  A few other 
obscure problems were also fixed with this release.  See the change log at 
https://www.sqlite.org/releaselog/3_7_16_1.html for details.

All users of SQLite versions 3.7.15 and 3.7.16 are encouraged to upgrade.

As always, please let me know if you encounter any problems with this or any 
other release of SQLite.

--
D. Richard Hipp
d...@sqlite.org





Re: [sqlite] TCL Test failures on ARM

2013-03-29 Thread Dan Kennedy

On 03/29/2013 04:35 PM, Bk wrote:

The perror("ftruncate") is printing "ftruncate: : File too large".  I have a
2GB card in the embedded device with 50% mem free.


Seems like EFBIG. What value is actually being passed
as the second argument to the system ftruncate() call?

Does the test pass if you build with -DSQLITE_DISABLE_LFS?

In the debugger, does the expression "sizeof(off_t)" evaluate
to the same value in the ts_ftruncate() frame as it does
in robust_ftruncate()?






Below is the output shown when the test was executed.


sysfault-2.setup... Ok
ftruncate: : File too large
ftruncate: : File too large
ftruncate: : File too large
ftruncate: : File too large
sysfault-2.1-vfsfault-transient.1...
Expected: [0 ok]
  Got: [1 {nfail=1 rc=1 result=disk I/O error}]

Below is the backtrace when the failure is printed (the expected - Got
message)

(gdb) backtrace
#0  robust_ftruncate (h=5, sz=24576) at sqlite3.c:23440
#1  0x00065a1c in fcntlSizeHint (pFile=0x16fcb8, nByte=22528)
 at sqlite3.c:26305
#2  0x00065d04 in unixFileControl (id=0x16fcb8, op=5, pArg=0x7effed80)
 at sqlite3.c:26359
#3  0x00059560 in sqlite3OsFileControlHint (id=0x16fcb8, op=5,
pArg=0x7effed80)
 at sqlite3.c:14920
#4  0x000702b4 in pager_write_pagelist (pPager=0x16fbc0, pList=0x181ffc)
 at sqlite3.c:41001
#5  0x00072878 in sqlite3PagerCommitPhaseOne (pPager=0x16fbc0,
 zMaster=0x1b5a18 "/home/brijesh/SQLite-3.7.14.1/test.db-mjED751E9E4",
 noSync=0) at sqlite3.c:42879
#6  0x0007de8c in sqlite3BtreeCommitPhaseOne (p=0x16f718,
 zMaster=0x1b5a18 "/home/brijesh/SQLite-3.7.14.1/test.db-mjED751E9E4")
 at sqlite3.c:51071
#7  0x0008e90c in vdbeCommit (db=0x16b360, p=0x1be590) at sqlite3.c:60059
#8  0x0008ef04 in sqlite3VdbeHalt (p=0x1be590) at sqlite3.c:60333
#9  0x00098168 in sqlite3VdbeExec (p=0x1be590) at sqlite3.c:66334
#10 0x000918b8 in sqlite3Step (p=0x1be590) at sqlite3.c:61775
#11 0x00091ad4 in sqlite3_step (pStmt=0x1be590) at sqlite3.c:61848
#12 0x0004fdd4 in dbEvalStep (p=0x7efff170) at ./src/tclsqlite.c:1400
#13 0x00051b38 in DbObjCmd (cd=0x14a470, interp=0x122f80, objc=3,
 objv=0x12a764) at ./src/tclsqlite.c:2251









Re: [sqlite] TCL Test failures on ARM

2013-03-29 Thread Bk
The perror("ftruncate") is printing "ftruncate: : File too large".  I have a
2GB card in the embedded device with 50% mem free.

Below is the output shown when the test was executed.


sysfault-2.setup... Ok
ftruncate: : File too large
ftruncate: : File too large
ftruncate: : File too large
ftruncate: : File too large
sysfault-2.1-vfsfault-transient.1...
Expected: [0 ok]
 Got: [1 {nfail=1 rc=1 result=disk I/O error}]

Below is the backtrace when the failure is printed (the expected - Got
message)

(gdb) backtrace
#0  robust_ftruncate (h=5, sz=24576) at sqlite3.c:23440
#1  0x00065a1c in fcntlSizeHint (pFile=0x16fcb8, nByte=22528)
at sqlite3.c:26305
#2  0x00065d04 in unixFileControl (id=0x16fcb8, op=5, pArg=0x7effed80)
at sqlite3.c:26359
#3  0x00059560 in sqlite3OsFileControlHint (id=0x16fcb8, op=5,
pArg=0x7effed80)
at sqlite3.c:14920
#4  0x000702b4 in pager_write_pagelist (pPager=0x16fbc0, pList=0x181ffc)
at sqlite3.c:41001
#5  0x00072878 in sqlite3PagerCommitPhaseOne (pPager=0x16fbc0, 
zMaster=0x1b5a18 "/home/brijesh/SQLite-3.7.14.1/test.db-mjED751E9E4", 
noSync=0) at sqlite3.c:42879
#6  0x0007de8c in sqlite3BtreeCommitPhaseOne (p=0x16f718, 
zMaster=0x1b5a18 "/home/brijesh/SQLite-3.7.14.1/test.db-mjED751E9E4")
at sqlite3.c:51071
#7  0x0008e90c in vdbeCommit (db=0x16b360, p=0x1be590) at sqlite3.c:60059
#8  0x0008ef04 in sqlite3VdbeHalt (p=0x1be590) at sqlite3.c:60333
#9  0x00098168 in sqlite3VdbeExec (p=0x1be590) at sqlite3.c:66334
#10 0x000918b8 in sqlite3Step (p=0x1be590) at sqlite3.c:61775
#11 0x00091ad4 in sqlite3_step (pStmt=0x1be590) at sqlite3.c:61848
#12 0x0004fdd4 in dbEvalStep (p=0x7efff170) at ./src/tclsqlite.c:1400
#13 0x00051b38 in DbObjCmd (cd=0x14a470, interp=0x122f80, objc=3, 
objv=0x12a764) at ./src/tclsqlite.c:2251






Re: [sqlite] Performance with journal_mode = off

2013-03-29 Thread ibrahim

On 28.03.2013 14:03, Jeff Archer wrote:

On Thu, Mar 28, 2013 at 8:24 AM, ibrahim  wrote:

On 28.03.2013 13:09, Jeff Archer wrote:

On Wed, Mar 27, 2013 at 6:35 PM, Simon Slavin wrote:

Reasonable figures.  With 5764 writes to the disk in separates
transactions you have quite a lot of reading of data plus 5764 attempts to
update the database file.  The updates have to be done in the right order,
and each update has to wait for the disk to be rotated into the right
position, though each update changes only a small amount of data (probably
two sectors).

But my most basic question remains.  Why is single transaction faster
than PRAGMA journal_mode = off?

Seems to me that with no journal there should only be single set of
writes to the actual db and that journaling would double the number of
writes because data has to be written to journal file as well.

2.5 sec with journal
5.5 sec without journal   <= seems like this should be the smaller number


You should read the sections 3 forward.

http://www.sqlite.org/atomiccommit.html

A single transaction happens mostly in memory then is flushed to the OS
Cache in one step. The Journal file (the amount of pages that will be
changed) is small while inserting new data into a database and the OS File
Cache is usually large enough ...


Yes, I have read this.  (And now re-read it)

So, since much more work must be done when using journal file, why
does it take longer to do the inserts when there is NO journal file?
___


You compare different approaches. You should compare

a) Journal mode=on  w/wo single transaction
b) Journal mode=off w/wo single transaction

A single transaction outweighs journal mode in terms of performance.  It 
reduces the number of physical read/write operations against the OS 
file cache and the disk.


