[sqlite] Trivial doc issue for functions

2013-03-30 Thread Roger Binns

http://www.sqlite.org/lang_corefunc.html

The hex line is out of alphabetical order and should be two lines earlier.

Roger


Re: [sqlite] How to achieve fastest possible write performance for a strange and limited case

2013-03-30 Thread ibrahim
Those measurements assume that you store each blob in a separate file, which is why raw file access seems slower for the smaller blob sizes.


If you use external blob storage, do it in raw cluster files as I suggested in a previous post (size limit 32/64 MB) and store your blobs on page boundaries (page size 4 KB, 8 KB and so on). This will always be faster, because the image data is stored sequentially instead of in B-tree pages, which are always fragmented.


Don't use file sizes larger than 32/64 MB, because the prefetch cache of modern hard disks can read the whole file even if you only ask for part of it, and fopen() gets slower on large files because the page list has to be read into an internal library buffer.


The given link is only valid if you store each blob in a separate file.

For my own, similar project I use the raw cluster model, and it is many times faster than storing the image data in a B-tree organized database file.
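As a rough sketch of the raw cluster model I mean (Python; the 4 KB page size, the 32 MB limit and the file naming are just the example figures from above, nothing mandatory): each blob is appended at the next page boundary, and a new cluster file is started once the current one is full. The returned position is what you would then index in SQLite.

    import os

    PAGE = 4096                          # align every blob to a 4 KB page boundary
    CLUSTER_LIMIT = 32 * 1024 * 1024     # keep each cluster file at about 32 MB

    def append_blob(cluster_no, data):
        """Append a blob page-aligned; roll over to the next cluster file when full.
        Returns (cluster_no, offset) so the position can be indexed in SQLite."""
        while True:
            path = "cluster-%06d.dat" % cluster_no
            size = os.path.getsize(path) if os.path.exists(path) else 0
            offset = (size + PAGE - 1) // PAGE * PAGE      # next page boundary
            if offset + len(data) <= CLUSTER_LIMIT or offset == 0:
                break
            cluster_no += 1                                # this cluster is full
        with open(path, "r+b" if os.path.exists(path) else "w+b") as f:
            f.truncate(offset)                             # zero-pad up to the boundary
            f.seek(offset)
            f.write(data)
        return cluster_no, offset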



Re: [sqlite] Performance with journal_mode = off

2013-03-30 Thread ibrahim

In reference to your needs, one more suggestion:

>> If you put the blobs outside of an SQLite database and store your housekeeping and indexing data inside SQLite, I would suggest keeping the journal mode on: without the blobs, the journal file and the database file stay small, and if you lose the database due to a crash inside your application you would possibly lose the housekeeping data for your raw data files, which would make the whole, really large amount of stored data useless.
>> Keep your indexes and housekeeping data safe (my advice) if you don't want to gamble!
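A minimal sketch of that split, in Python with the standard sqlite3 module (the file names and the one-table schema are my own illustrative assumptions): the small housekeeping database keeps its rollback journal, while the blob bytes go to plain external files.

    import sqlite3

    # Housekeeping/index database: leave journaling at its default so an
    # application crash cannot take the metadata down with it.
    meta = sqlite3.connect("index.db")
    meta.execute("CREATE TABLE IF NOT EXISTS blobs("
                 "id INTEGER PRIMARY KEY, path TEXT, offset INTEGER, length INTEGER)")

    # The blob bytes themselves live outside the database; only their location
    # is recorded (and journal-protected) here.
    with open("blobs-000.dat", "ab") as f:
        offset = f.seek(0, 2)            # append position = current end of file
        data = b"\x00" * 4096            # placeholder image data
        f.write(data)
    meta.execute("INSERT INTO blobs(path, offset, length) VALUES(?, ?, ?)",
                 ("blobs-000.dat", offset, len(data)))
    meta.commit()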




Re: [sqlite] Performance with journal_mode = off

2013-03-30 Thread ibrahim

From: http://www.sqlite.org/pragma.html#pragma_journal_mode

The OFF journaling mode disables the rollback journal completely. No rollback journal is ever created and hence there is never a rollback journal to delete. The OFF journaling mode disables the atomic commit and rollback capabilities of SQLite. The ROLLBACK command no longer works; it behaves in an undefined way. Applications must avoid using the ROLLBACK command when the journal mode is OFF. If the application crashes in the middle of a transaction when the OFF journaling mode is set, then the database file will very likely go corrupt.


Meaning:

You can still use transactions with journal mode OFF:
>> there will be no journal file
>> the ROLLBACK command no longer works
>> there is no atomic commit (see the description of the commit mechanism in the previously sent link)
>> a transaction can therefore leave you with a corrupt database if your application crashes due to a software or power failure
>> if you are sure you don't need the safety of a journal file because your application is crash proof and you won't get a power or disk failure (that assumption is a gun pointed at your own feet), you can omit the journal file and still use transactions
>> transactions significantly improve performance for bulk data transfer into a database (see the sketch below)
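A minimal sketch of that combination, in Python with the standard sqlite3 module (the file name and table are invented for the example): journaling is switched off, yet the bulk insert still runs inside a single transaction for speed.

    import sqlite3

    con = sqlite3.connect("scratch.db", isolation_level=None)  # manual transactions
    con.execute("PRAGMA journal_mode=OFF")      # no rollback journal from here on
    con.execute("PRAGMA synchronous=OFF")       # also skip fsync; pure scratch data
    con.execute("CREATE TABLE IF NOT EXISTS item(id INTEGER PRIMARY KEY, payload BLOB)")

    con.execute("BEGIN")                        # one big transaction, no per-row commit
    con.executemany("INSERT INTO item(payload) VALUES(?)",
                    ((b"x" * 1024,) for _ in range(100000)))
    con.execute("COMMIT")                       # do NOT rely on ROLLBACK in this mode
    con.close()

If the process dies between BEGIN and COMMIT here, scratch.db may end up corrupt, which is exactly the trade-off described above.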


hope this will help ;)



Re: [sqlite] How to achieve fastest possible write performance for a strange and limited case

2013-03-30 Thread ibrahim

On 29.03.2013 20:17, Jeff Archer wrote:

I have previously made an apparently bad assumption about this so now I
would like to go back to the beginning of the problem and ask the most
basic question first without any preconceived ideas.

This use case is from an image processing application.  I have a large
amount of intermediate data (it far exceeds the physical memory on my 24GB
machine).  So, I need to store it temporarily on disk until getting to the next
phase of processing.  I am planning to use a large SSD dedicated to holding
this temporary data.  I do not need any recoverability in case of hardware,
power or other failure.  Each item to be stored is 9 DWORDs, 4 doubles and
2 variable sized BLOBs which are images.

I could write directly to a file myself.  But I would need to provide some
minimal indexing, some amount of housekeeping to manage variable
sized BLOBS and some minimal synchronization so that multiple instances of
the same application could operate simultaneously on a single set of data.

So, then I thought that SQLite could manage these things nicely for me so
that I don't have to write and debug indexing and housekeeping code that
already exists in SQLite.

So, the question is: what is the way to get the fastest possible performance
from SQLite when I am willing to give up all recoverability guarantees?
Or, is it simply that I should just write directly to a file myself?
Suggestion: put the fixed, small-sized data into an SQLite database. You won't be searching inside the blobs with the database engine, and the amount of data you have to process is large, so to make it fast you should write the image data into files. The other data that you need for processing, ordering, indexing, searching and comparison is best stored in an SQLite database.
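As a rough sketch of that split for the data described above (Python/sqlite3; the table and column names are invented for illustration): the 9 DWORDs, the 4 doubles and the location of each image go into SQLite, while the image bytes themselves go to external files.

    import sqlite3

    con = sqlite3.connect("intermediate.db")
    con.execute("""
        CREATE TABLE IF NOT EXISTS item(
            id INTEGER PRIMARY KEY,
            d1 INTEGER, d2 INTEGER, d3 INTEGER, d4 INTEGER, d5 INTEGER,
            d6 INTEGER, d7 INTEGER, d8 INTEGER, d9 INTEGER,   -- the 9 DWORDs
            x1 REAL, x2 REAL, x3 REAL, x4 REAL,               -- the 4 doubles
            img1_cluster INTEGER, img1_offset INTEGER, img1_len INTEGER,
            img2_cluster INTEGER, img2_offset INTEGER, img2_len INTEGER
        )""")
    con.commit()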


To improve the speed of access for your images, use full pages: pad smaller images up to the next page boundary (for example 4 KB, 8 KB, ...) and split long files into smaller, sequentially numbered cluster files (16 to 64 MB). This makes OS file operations faster, because the block index has to be cached when opening and processing a file; the positions can be indexed in SQLite.
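The read side of that scheme could then look like this (a sketch under the same assumed table and cluster file naming as above): look the position up in SQLite, then read the bytes straight out of the cluster file.

    import sqlite3

    def read_blob(con, item_id):
        """Fetch image 1 of an item: position from SQLite, bytes from the cluster file."""
        row = con.execute(
            "SELECT img1_cluster, img1_offset, img1_len FROM item WHERE id=?",
            (item_id,)).fetchone()
        cluster, offset, length = row
        with open("cluster-%06d.dat" % cluster, "rb") as f:
            f.seek(offset)
            return f.read(length)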


I have a similar application for the vectorized digitization of old handwritten scripts. I use a database for the searchable information while keeping the raster images and vector files in external files (split as described). SQLite would be too slow for blobs like the ones you need; put them outside, but keep the indexes inside. Another advantage of this approach is that you can process many binary files simultaneously, whereas by putting them inside a database like SQLite you have only one writer.


Using transactions makes inserting data faster, especially when you have indexes. Also try to create your indexes only after your data is fully inserted, because that makes the process faster.
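A small sketch of that order of operations (Python/sqlite3, reusing the example item table from above; the column and index names are just illustrations): insert everything inside one transaction first, then build the index once at the end.

    import sqlite3

    con = sqlite3.connect("intermediate.db")
    with con:                                   # one transaction around the bulk insert
        con.executemany("INSERT INTO item(d1) VALUES(?)",
                        ((i,) for i in range(1000000)))
    # Build the index only after the data is in; this is usually much faster.
    con.execute("CREATE INDEX IF NOT EXISTS item_d1 ON item(d1)")
    con.commit()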





Re: [sqlite] Performance with journal_mode = off

2013-03-30 Thread ibrahim

On 29.03.2013 19:42, Jeff Archer wrote:

From: "James K. Lowden" 
To: sqlite-users@sqlite.org

Your experiment is telling you different: transaction control costs more than I/O.
But shouldn't transactions be disabled when journal_mode = off?  Maybe that
is a faulty assumption.  If so, what is the point of journal_mode = off?
For this purpose, I am very happy to give up all the ACID promises.

If I understand your point #2, I think you are saying that all of the
inserts within a single transaction are not written to the disk (database
or journal) until the transaction is committed.  But that can't quite be
the answer, because if I kept my transaction open long enough I would simply
run out of memory, and that doesn't seem to happen even when I have 1
million plus inserts.

If you keep your transaction open and look at the database file size, you'll see that the changes aren't written to the file until you commit to disk.


1 million records are too few for a modern system to hit its out-of-memory limit. Let's say your records are about 1 KB each; that would be roughly 1 GB of memory including overhead, and with virtual memory on top of that, why would you expect to run out of memory?
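A quick way to see this for yourself (a sketch; Python/sqlite3, file name made up): check the database file size while the transaction is still open and again after the commit.

    import os, sqlite3

    con = sqlite3.connect("growth.db", isolation_level=None)   # manual transactions
    con.execute("CREATE TABLE IF NOT EXISTS t(x BLOB)")
    con.execute("BEGIN")
    con.executemany("INSERT INTO t(x) VALUES(?)",
                    ((b"x" * 512,) for _ in range(1000)))
    size_open = os.path.getsize("growth.db")    # still small: pages only cached for
                                                # a transaction of this modest size
    con.execute("COMMIT")
    size_done = os.path.getsize("growth.db")    # now the file has grown
    print(size_open, size_done)
    con.close()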

