Re: [sqlite] Fsync or fdatasync performs slowly

2015-01-19 Thread Wei, Catherine
I've printed a log in the no-disk case, and it shows that the file
descriptor I passed to fsync as an argument is 27, and that the fsync
function is indeed called. But I don't know what fsync will do without a disk.
I appreciate your quick response and great help.

On 01/19/2015 06:30 PM, Stephan Beal wrote:
 On Mon, Jan 19, 2015 at 11:23 AM, Wei, Catherine catherine@arris.com
 wrote:

 Thank you very much for your answer. But I have removed the disk from
 my set-top box, so the data will be saved in memory; the system doesn't
 know there's no disk and will still call fsync. What do you think in
 this case?

 man fsync says:

 SYNOPSIS
#include <unistd.h>

int fsync(int fd);

int fdatasync(int fd);

 DESCRIPTION
fsync()  transfers  (flushes)  all  modified  in-core  data of
 (i.e., modified buffer cache pages for) the file referred to by the file
descriptor fd to the disk device (or other permanent storage device)
 ...


 i.e. if you have no disk (you are using an in-memory VFS), then you have no
 file descriptor, so fsync/fdatasync _cannot_ (legally) be called.


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Whish List for 2015 (David Barrett)

2015-01-19 Thread Klaas V
On my wish-list for this list is the possibility to include attachments such as 
screenshots and databases.
For the program, more handy functions; for the documentation, more examples. I'm pretty 
sure that less-lazy users have written functions that could be included as standard, 
to be used by all of us.  
Kind regards | Cordiali saluti | Vriendelijke groeten | Freundliche Grüsse,
Klaas `Z4us` V  - OrcID -0001-7190-2544


Re: [sqlite] Whish List for 2015 (David Barrett)

2015-01-19 Thread Simon Slavin

On 20 Jan 2015, at 12:23am, Klaas V klaasva...@yahoo.com wrote:

 On my whish-list for this list is the possibility to include attachments like 
 screenshots and databases.

I've been active on several technical helplists over the years and from my 
experience I'll tell you what would happen.  We'd get

A) People posting a description of a problem, attaching big databases, and 
expecting everyone reading to download the attachment, reproduce the problem, 
and solve it for them.  They won't try very hard to solve it themselves before 
posting.

B) People posting screenshots of various SQLite management applications, 
thinking that because they have SQLite in their name, bugs in the app should be 
fixed here.

C) People posting screenshots of error messages which mention SQLite, but which 
are actually caused by bugs in a program which uses SQLite, not SQLite itself.

The list would soon get so drowned in these things it would be useless for 
solving the slightly more technical problems we currently see.

By presenting a small challenge to anyone who wants to get a problem solved 
here, we filter out the "try once, give up, don't read the documentation, just 
ask for help" brigade.

Simon.


Re: [sqlite] Fsync or fdatasync performs slowly

2015-01-19 Thread Wei, Catherine
I've tested it on two kinds of set-top box. On one box it takes about
5 ms, while on the other it takes 30 ms.  Disks have been removed from
both of them. It's strange that the difference is so big.
I appreciate your response, thank you.


On 01/20/2015 04:46 AM, Richard Hipp wrote:
 On 1/19/15, Roger Binns rog...@rogerbinns.com wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 01/18/2015 06:49 PM, Wei, Catherine wrote:
 takes too much time when it executes fsync or fdatasync function.
 Note that although the documentation says only the file handle is
 synced, in practice many filesystems actually sync the whole
 filesystem, writing out all pending data for all files.

 The reason is that the underlying filesystem is written to carefully
 order its writes (especially metadata), to ensure robustness against
 power loss, make syncs easier to manage, and make fsck easier.  Trying
 to sync just the one file handle is too complicated, so they sync
 everything.

 You need to watch out for other write activity on the system, not just
 SQLite.

 All true.  But also note that you can dramatically reduce the number
 of fsync() calls by using WAL mode in SQLite.
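
The effect of switching journal modes can be sketched with Python's built-in sqlite3 module (an illustrative sketch, not from the original thread; the file path is made up):

```python
import sqlite3, tempfile, os

# WAL requires a file-backed database, not :memory:.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)

# Switch to write-ahead logging; commits then append to demo.db-wal,
# which lets SQLite call fsync() far less often than the default
# rollback-journal mode.
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # -> wal

con.execute("CREATE TABLE t(x)")
with con:  # one transaction, one sync point
    con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])
n = con.execute("SELECT count(*) FROM t").fetchone()[0]
print(n)  # -> 100
con.close()
```

The pragma's return value confirms the mode actually took effect, which is worth checking because some VFSes silently refuse WAL.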




[sqlite] Page_size increase from 1k to 4k made my REPLACE INTO slower. why?

2015-01-19 Thread Andy (KU7T)
Hi,

 

I am using System.Data.SQLite, Version=1.0.92.0. I read that increasing the
page_size to 4k on a modern OS is a good thing and should speed things up.
However, I have a particular query that takes substantially longer. I tried
to make sure that nothing else changed, so I am a little puzzled. Can
anyone explain why this is? The query is this, in case it gives a clue:

 

REPLACE INTO PacketSpots 

(Call, TS, Band, Freq, QSXFreq, Comment, Spotter, Bearing, CountryPrefix,
Sect, ZN, Exchange1, GridSquare, MiscText, Name, NR, Continent, Sunrise,
Sunset, Latitude, Longitude, QualityTag) 

VALUES

(@Call, @TS, @Band, @Freq, @QSXFreq, @Comment, @Spotter, @Bearing,
@CountryPrefix, @Sect, @ZN, @Exchange1, @GridSquare, @MiscText, @Name, @NR,
@Continent, @Sunrise, @Sunset, @Latitude, @Longitude, @QualityTag)

 

SQLiteParameters not shown.

 

The query used to be sub 1 ms, now it is 28 ms. Since I get quite a few
calls per second, this is a big change.

 

Anyone any ideas or tips?

 

Thanks

Andy



Re: [sqlite] ORDER BY and LIMIT regression

2015-01-19 Thread Rich Shepard

On Mon, 19 Jan 2015, Richard Hipp wrote:


Thank you for reporting the problem.


  Certainly!


We always fix every problem that we are aware of in SQLite.  But this
problem had not been previously reported to us, and did not occur in any
of the 168 million test cases that we ran prior to releasing SQLite 3.8.8,
so it didn't get fixed.


  And while I've not upgraded that many times over the years, it's never
before been an issue. I've no idea how the timestamps could get so
different, but it's not likely a coding error.

Rich


Re: [sqlite] Compiling error on Cygwin on Windows 8.1: 3.8.7.4 and 3.8.8

2015-01-19 Thread Jan Nijtmans
2015-01-19 11:01 GMT+01:00 Frank Ho frank...@263.net:
 Hi,

 I compiled the SQLite on the Cygwin 1.7.33 running on a Windows 8.1, here's 
 the error:

 .libs/sqlite3.o: In function `sqlite3ThreadProc':
 ../sqlite-autoconf-3080800/sqlite3.c:22471: undefined reference to 
 `_endthreadex'
 .libs/sqlite3.o: In function `sqlite3ThreadCreate':
 ../sqlite-autoconf-3080800/sqlite3.c:22493: undefined reference to 
 `_beginthreadex'
 collect2: error: ld returned 1 exit status

If you are interested, you can get a corrected amalgamation here:
   
http://cyqlite.sourceforge.net/cgi-bin/sqlite/raw/generic/sqlite3.c?name=83a26f4777390dbcfcc58ba146354002ca2de0c8

An official cygwin build of SQLite 3.8.8 will be available soon.

Regards,
   Jan Nijtmans


Re: [sqlite] Fsync or fdatasync performs slowly

2015-01-19 Thread Wei, Catherine
Thank you very much for your answer. But I have removed the disk from
my set-top box, so the data will be saved in memory; the system doesn't
know there's no disk and will still call fsync. What do you think in
this case?
I appreciate your quick response and great help.

On 01/19/2015 05:25 PM, Simon Slavin wrote:
 On 19 Jan 2015, at 5:38am, Wei, Catherine catherine@arris.com wrote:

 The time I'm looking at is the duration for which fsync or fdatasync
 executes; it's about 30 ms. I don't know whether it's related to the Linux
 kernel or something similar. I've tested it on another kind of set-top
 box and the duration of fsync is only about 5 ms.
 The data being written is just one INSERT statement, very little.  Every
 time after executing the INSERT, the database transaction is
 committed, and fsync is executed during the commit.
 The answer is that the computer that says 5 ms is lying to you.

 A normal hard disk in a normal computer rotates 5400 times a minute.  This 
 means it completes one rotation in about 10 ms.  And that means that waiting 
 for the right part of the disk to pass under the read/write head takes an 
 average of 5 ms.

 Committing a transaction involves SQLite writing to the disk in a number of 
 different places: it has to move the transaction from the journal file to the 
 database file which involves at least two read and two write commands, 
 usually more than that.  (I do not know enough about SQLite to know the 
 proper numbers but I'm sure they are more than that.)  With an average 
 latency of 5ms per access, this means that in the best possible case 
 committing a transaction will take 20ms.  This suggests that the timing of 
 30ms you're getting from your second computer is about right.

 So why is the first computer lying to you?  Well, it's to make itself seem 
 faster.  In this case, when a 'write' command is issued to the disk drive, it 
 reports "finished" immediately, before the change has actually been made 
 to disk.  This doesn't matter when a computer is used for normal desktop 
 purposes (web browser, word processor), because you might just lose the last 
 few characters of a word-processing file, but it matters a lot in a database 
 like SQLite, because writing one row to a database involves many changes, and 
 if they're not consistent then the database may be corrupted.  So for the 
 safety of the data in your database, your 30 ms computer is the good computer and 
 your 5 ms computer is a bad computer.

 It's worth noting that this long delay (30ms) is only for writing a 
 transaction, not for each change to the database.  So

 BEGIN
   INSERT ...
   INSERT ...
   INSERT ...
   97 more of these ...
 END

 only involves one long update, not 100 long updates.
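
 The BEGIN ... END pattern above can be sketched in Python's sqlite3 module
 (an illustrative sketch; the table name is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # in-memory for the sketch; on disk,
con.execute("CREATE TABLE log(msg)")  # the payoff is one sync, not 100

rows = [("row %d" % i,) for i in range(100)]

# The context manager wraps all 100 INSERTs in a single transaction,
# so the commit (and its slow disk sync) happens once, not per INSERT.
with con:
    con.executemany("INSERT INTO log VALUES (?)", rows)

n = con.execute("SELECT count(*) FROM log").fetchone()[0]
print(n)  # -> 100
```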

 Simon.




[sqlite] ORDER BY and LIMIT regression

2015-01-19 Thread Angelo Mottola
Hello,

I have a regression to report that seems to have been introduced between 
SQLite 3.8.6 and the newest 3.8.8 (at least, our test case worked in 3.8.6 and 
stopped working somewhere in 3.8.7.x; we were hoping it got fixed in 3.8.8, but 
evidently it wasn't).

In our application we can have SQLite databases with more than 100 tables, and 
we can generate queries automatically; one of these automatically generated 
queries is the following:


SELECT EB_DocumentiFiscali__00.CAP, EB_RigheUnitaMisura__01.Abbreviazione, 
EB_DocumentiFiscali__00.OraInizioTrasp, EB_ClientiFornitori__03.Codice, 
EB_RigheDocFiscali.ValoreUnitarioNetto, EB_DocumentiFiscali__00.id, 
EB_RigheDocFiscali.Sconto, EB_DocumentiFiscali__00.PartitaIVA, 
EB_TipologieDocumenti__08.Codice, EB_DocumentiFiscali__00.Note, 
EB_DocumentiFiscali__00.RagSoc_Dest, EB_TipologieDocumenti__08.id, 
EB_Valute__12.Abbreviazione, EB_DocumentiFiscali__00.ScontoGlobale, 
EB_DocumentiFiscali__00.DataDocumento, EB_Agenti__15.RagioneSociale, 
EB_RigheDocFiscali.DescArticolo, EB_Articoli__17.Codice, 
EB_DocumentiFiscali__00.NumeroColli, EB_DocumentiFiscali__00.CAP_Dest, 
EB_RigheDocFiscali.ValoreUnitario, EB_AliquoteIVA__21.tra_Descrizione, 
EB_DocumentiFiscali__00.Indirizzo_Dest, EB_RigheDocFiscali.NumeroRiga, 
EB_DocumentiFiscali__00.NumeroProgressivo, 
EB_DocumentiFiscali__00.val_NazioneDest, EB_Vettori__26.Descrizione, 
EB_DocumentiFiscali__00.val_ModalitaInvio, EB_DocumentiFiscali__00.AspettoBeni, 
EB_DocumentiFiscali__00.Peso, EB_RigheDocFiscali.id, 
EB_DocumentiFiscali__00.RagioneSociale, EB_Banche__32.Descrizione, 
EB_AliquoteIVA__21.PercentualeIVA, EB_CondizioniConsegna__34.Descrizione, 
EB_TipologieDocumenti__08.tra_Descrizione, 
EB_DocumentiFiscali__00.NumeroInterno, EB_BancheAzienda__37.Descrizione, 
EB_DocumentiFiscali__00.Localita_Dest, EB_CondizioniPagamento__39.Codice, 
EB_DocumentiFiscali__00.Email, EB_DocumentiFiscali__00.Indirizzo, 
EB_Valute__12.Codice, EB_DocumentiFiscali__00.val_Nazione, 
EB_DocumentiFiscali__00.DataInizioTrasp, 
EB_DocumentiFiscali__00.CausaleTrasporto, 
EB_DocumentiFiscali__00.Provincia_Dest, 
EB_CondizioniPagamento__39.tra_Descrizione, EB_RigheDocFiscali.Quantita, 
EB_DocumentiFiscali__00.Provincia, EB_RigheDocFiscali.FlagsStato, 
EB_DocumentiFiscali__00.Localita, EB_DocumentiFiscali__00.CodiceFiscale, 
EB_RigheDocFiscali.ref_RigaDdT, EB_AliquoteIVA__21.Codice, 
EB_DocumentiFiscali__00.Suffisso, EB_Articoli__17.val_GestioneLotti, 
EB_CausaliContabili__57.val_TipoGestIVA, 
EB_DocumentiFiscali__00.val_AddettoTrasporto, EB_BancheAzienda__37.IBAN FROM 
EB_RigheDocFiscali LEFT JOIN EB_DocumentiFiscali AS EB_DocumentiFiscali__00 ON 
(EB_RigheDocFiscali.ref_DocumentoFiscale = EB_DocumentiFiscali__00.id) LEFT 
JOIN EB_RigheUnitaMisura AS EB_RigheUnitaMisura__01 ON 
(EB_RigheDocFiscali.ref_RigheUnitaMisura = EB_RigheUnitaMisura__01.id) LEFT 
JOIN EB_ClientiFornitori AS EB_ClientiFornitori__03 ON 
(EB_DocumentiFiscali__00.ref_Cliente = EB_ClientiFornitori__03.id) LEFT JOIN 
EB_TipologieDocumenti AS EB_TipologieDocumenti__08 ON 
(EB_DocumentiFiscali__00.ref_Tipologia = EB_TipologieDocumenti__08.id) LEFT 
JOIN EB_Valute AS EB_Valute__12 ON (EB_DocumentiFiscali__00.ref_Valuta = 
EB_Valute__12.id) LEFT JOIN EB_Agenti AS EB_Agenti__15 ON 
(EB_DocumentiFiscali__00.ref_Agente = EB_Agenti__15.id) LEFT JOIN EB_Articoli 
AS EB_Articoli__17 ON (EB_RigheDocFiscali.ref_Articolo = EB_Articoli__17.id) 
LEFT JOIN EB_AliquoteIVA AS EB_AliquoteIVA__21 ON 
(EB_RigheDocFiscali.ref_AliquotaIVA = EB_AliquoteIVA__21.id) LEFT JOIN 
EB_Vettori AS EB_Vettori__26 ON (EB_DocumentiFiscali__00.ref_Vettore1 = 
EB_Vettori__26.id) LEFT JOIN EB_Banche AS EB_Banche__32 ON 
(EB_DocumentiFiscali__00.ref_Banca = EB_Banche__32.id) LEFT JOIN 
EB_CondizioniConsegna AS EB_CondizioniConsegna__34 ON 
(EB_DocumentiFiscali__00.ref_CondizioneConsegna = EB_CondizioniConsegna__34.id) 
LEFT JOIN EB_BancheAzienda AS EB_BancheAzienda__37 ON 
(EB_DocumentiFiscali__00.ref_BancaAzienda = EB_BancheAzienda__37.id) LEFT JOIN 
EB_CondizioniPagamento AS EB_CondizioniPagamento__39 ON 
(EB_DocumentiFiscali__00.ref_CondizionePagamento = 
EB_CondizioniPagamento__39.id) LEFT JOIN EB_CausaliContabili AS 
EB_CausaliContabili__57 ON (EB_DocumentiFiscali__00.ref_CausContabile = 
EB_CausaliContabili__57.id) WHERE ((EB_DocumentiFiscali__00.id = 258) )  ORDER 
BY EB_DocumentiFiscali__00.NumeroInterno ASC , EB_RigheDocFiscali.NumeroRiga 
ASC  LIMIT 0, 200

The query worked correctly with SQLite 3.8.6, returning for our test-case 
database 5 records with the same EB_DocumentiFiscali__00.NumeroInterno, ordered 
by EB_RigheDocFiscali.NumeroRiga in ascending order.
With 3.8.7 and 3.8.8 however, the very same query returns the same 5 records 
but in the wrong order, as if it was ordered by NumeroRiga DESC (instead of 
ASC). What’s even more strange is the fact that if you remove the LIMIT clause, 
the records are returned in the correct order even with 3.8.7 and 3.8.8.

For reference, the test-case 

[sqlite] Compiling error on Cygwin on Windows 8.1: 3.8.7.4 and 3.8.8

2015-01-19 Thread Frank Ho
Hi,

I compiled the SQLite on the Cygwin 1.7.33 running on a Windows 8.1, here's the 
error:

.libs/sqlite3.o: In function `sqlite3ThreadProc': 
../sqlite-autoconf-3080800/sqlite3.c:22471: undefined reference to 
`_endthreadex' 
.libs/sqlite3.o: In function `sqlite3ThreadCreate': 
../sqlite-autoconf-3080800/sqlite3.c:22493: undefined reference to 
`_beginthreadex' 
collect2: error: ld returned 1 exit status 

Best Regards



Frank Ho


Re: [sqlite] ORDER BY and LIMIT regression

2015-01-19 Thread Richard Hipp
On 1/19/15, Angelo Mottola a.mott...@converge.it wrote:
 Hello,

 I have a regression to report, that seems to have been introduced between
 SQLite 3.8.6 and the newest 3.8.8 (at least our test case worked in 3.8.6
 and stopped working somewhere in 3.8.7.x; we were hoping it got fixed in
 3.8.8 but eventually it wasn’t).
...

 The query worked correctly with SQLite 3.8.6, returning for our test-case
 database 5 records with the same EB_DocumentiFiscali__00.NumeroInterno,
 ordered by EB_RigheDocFiscali.NumeroRiga in ascending order.
 With 3.8.7 and 3.8.8 however, the very same query returns the same 5 records
 but in the wrong order, as if it was ordered by NumeroRiga DESC (instead of
 ASC). What’s even more strange is the fact that if you remove the LIMIT
 clause, the records are returned in the correct order even with 3.8.7 and
 3.8.8.


I downloaded your test database and ran your query on 3.8.6, 3.8.7.4,
and 3.8.8.  All three give the same answer for me.  Dan did likewise
with the same results, and in addition ran the test under valgrind
with no warnings issued.

Unable to recreate the problem.

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] ORDER BY and LIMIT regression

2015-01-19 Thread Richard Hipp
Ignore my previous email on this subject.  We are able to get
different results from 3.8.6 and 3.8.8.  Unclear yet if the one or the
other is incorrect.

On 1/19/15, Richard Hipp d...@sqlite.org wrote:
 On 1/19/15, Angelo Mottola a.mott...@converge.it wrote:
 Hello,

 I have a regression to report, that seems to have been introduced between
 SQLite 3.8.6 and the newest 3.8.8 (at least our test case worked in 3.8.6
 and stopped working somewhere in 3.8.7.x; we were hoping it got fixed in
 3.8.8 but eventually it wasn’t).
 ...

 The query worked correctly with SQLite 3.8.6, returning for our test-case
 database 5 records with the same EB_DocumentiFiscali__00.NumeroInterno,
 ordered by EB_RigheDocFiscali.NumeroRiga in ascending order.
 With 3.8.7 and 3.8.8 however, the very same query returns the same 5
 records
 but in the wrong order, as if it was ordered by NumeroRiga DESC (instead
 of
 ASC). What’s even more strange is the fact that if you remove the LIMIT
 clause, the records are returned in the correct order even with 3.8.7 and
 3.8.8.


 I downloaded your test database and ran your query on 3.8.6, 3.8.7.4,
 and 3.8.8.  All three give the same answer for me.  Dan did likewise
 with the same results, and in addition ran the test under valgrind
 with no warnings issued.

 Unable to recreate the problem.

 --
 D. Richard Hipp
 d...@sqlite.org



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] ORDER BY and LIMIT regression

2015-01-19 Thread RSmith
Might this not be the reverse_unordered_selects pragma or compile option going wrong, 
or at least the code implementing it somehow getting hooked in the new versions for this query?


I have seen similar things when using that pragma (but of course that was 
intended).
Just a thought...


On 2015/01/19 16:27, Richard Hipp wrote:

Ignore my previous email on this subject.  We are able to get
different results from 3.8.6 and 3.8.8.  Unclear yet if the one or the
other is incorrect.

On 1/19/15, Richard Hipp d...@sqlite.org wrote:

On 1/19/15, Angelo Mottola a.mott...@converge.it wrote:

Hello,

I have a regression to report, that seems to have been introduced between
SQLite 3.8.6 and the newest 3.8.8 (at least our test case worked in 3.8.6
and stopped working somewhere in 3.8.7.x; we were hoping it got fixed in
3.8.8 but eventually it wasn’t).

...

The query worked correctly with SQLite 3.8.6, returning for our test-case
database 5 records with the same EB_DocumentiFiscali__00.NumeroInterno,
ordered by EB_RigheDocFiscali.NumeroRiga in ascending order.
With 3.8.7 and 3.8.8 however, the very same query returns the same 5
records
but in the wrong order, as if it was ordered by NumeroRiga DESC (instead
of
ASC). What’s even more strange is the fact that if you remove the LIMIT
clause, the records are returned in the correct order even with 3.8.7 and
3.8.8.


I downloaded your test database and ran your query on 3.8.6, 3.8.7.4,
and 3.8.8.  All three give the same answer for me.  Dan did likewise
with the same results, and in addition ran the test under valgrind
with no warnings issued.

Unable to recreate the problem.

--
D. Richard Hipp
d...@sqlite.org







Re: [sqlite] ORDER BY and LIMIT regression

2015-01-19 Thread Simon Slavin

On 19 Jan 2015, at 3:10pm, Richard Hipp d...@sqlite.org wrote:

 It is a very
 complex problem.  In particular, the sample query works fine as long
 as the number of columns in the result set is not exactly 60.  Adding
 or removing a single column of result gives the correct answer.

I would love to know the cause of that.

Simon.


Re: [sqlite] ORDER BY and LIMIT regression

2015-01-19 Thread Richard Hipp
On 1/19/15, RSmith rsm...@rsweb.co.za wrote:
 Might this not be a reverse_unordered_selects pragma or compile option
 going wrong, or at least the code making it work getting
 somehow hooked in the new versions for this query?


It looks like a bug.  If you update to the latest trunk check-in and set:

 .testctrl never_corrupt 1

prior to running the test query, you get an assertion fault.

That new dot-command invokes
sqlite3_test_control(SQLITE_TESTCTRL_NEVER_CORRUPT, 1) which tells
SQLite that no database it deals with will ever be corrupt.  And this,
in turn, enables some additional assert() statements which are
normally turned off.  One of those new asserts finds the incorrect
logic.

We are working on a simple test case and a fix now.  It is a very
complex problem.  In particular, the sample query works fine as long
as the number of columns in the result set is not exactly 60.  Adding
or removing a single column of result gives the correct answer.



-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] When to run ANALYZE

2015-01-19 Thread Andy (KU7T)
Hi,

 

I added a call to both:

 

ANALYZE

ANALYZE sqlite_master 

 

To my app at timely intervals. We are also running in WAL mode. Do I need to
do a WAL checkpoint (pragma wal_checkpoint) before running the index recalc
or not?

 

Thanks

Andy



Re: [sqlite] ORDER BY and LIMIT regression

2015-01-19 Thread Richard Hipp
On 1/19/15, Simon Slavin slav...@bigfraud.org wrote:

 On 19 Jan 2015, at 3:10pm, Richard Hipp d...@sqlite.org wrote:

 It is a very
 complex problem.  In particular, the sample query works fine as long
 as the number of columns in the result set is not exactly 60.  Adding
 or removing a single column of result gives the correct answer.

 I would love to know the cause of that.


So would we :-\   We will probably figure it out soon.

The ticket is https://www.sqlite.org/src/tktview/f97c4637102a3


-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Malformatted output by sqlite3

2015-01-19 Thread RSmith


On 2015/01/19 12:52, Stephan Buchert wrote:

(Prompt erased for easier paste and copy):

CREATE TABLE satpos(msec INTEGER, latitude REAL, longitude REAL);
INSERT INTO satpos VALUES (86386217,-0.0318895369716216,-167.689719869132);
INSERT INTO satpos VALUES
(86386716,-2.93238037697483e-06,-167.690497310632);
INSERT INTO satpos VALUES (86387217,0.0319616241531195,-167.69127664905);
SELECT * FROM satpos;
86386217|-0.0318895369716216|-167.689719869132
86386716|-2.93238037697483e-06|-167.690497310632
86387217|0.0319616241531195|-167.69127664905

So far, so good, but:

.mode column
.width -8 -7 -8
SELECT * FROM satpos;
86386217  -0.0318  -167.689
86386716  -2.9323  -167.690
86387217  0.03196  -167.691

In the 2nd row, 2nd column -0. or -2.9e-6 would make me happy. But
-2.9323 definitely messes up my stuff.


The width specifier is simply a cut-off-style formatter. Two solutions to get the correct answer are listed below; pick the one that suits 
you -



CREATE TABLE satpos(msec INTEGER, latitude REAL, longitude REAL);
INSERT INTO satpos VALUES (86386217,-0.0318895369716216,-167.689719869132);
INSERT INTO satpos VALUES (86386716,-2.93238037697483e-06,-167.690497310632);
INSERT INTO satpos VALUES (86387217,0.0319616241531195,-167.69127664905);

SELECT msec, round(latitude,6), round(longitude,6) FROM satpos;

msec        round(latitude,6)    round(longitude,6)
86386217    -0.03189             -167.68972
86386716    -3.0e-06             -167.690497
86387217    0.031962             -167.691277

 SELECT msec, printf('%12.3f',latitude), printf('%12.3f',longitude) FROM satpos;

msec        printf('%12.3f',latitude)    printf('%12.3f',longitude)
86386217          -0.032                       -167.690
86386716          -0.000                       -167.690
86387217           0.032                       -167.691
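
The round()-vs-printf() difference is easy to reproduce with Python's sqlite3 module (an illustrative sketch using one row from the thread's data):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE satpos(msec INTEGER, latitude REAL, longitude REAL)")
con.execute(
    "INSERT INTO satpos VALUES (86386716,-2.93238037697483e-06,-167.690497310632)"
)

# round() keeps scientific notation for tiny values; SQL printf()
# forces fixed-point output, which the shell's .width cut-off cannot do.
row = con.execute(
    "SELECT printf('%12.3f', latitude), printf('%12.3f', longitude) FROM satpos"
).fetchone()
print(row)
```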







Re: [sqlite] Fsync or fdatasync performs slowly

2015-01-19 Thread Stephan Beal
On Mon, Jan 19, 2015 at 11:23 AM, Wei, Catherine catherine@arris.com
wrote:

 Thank you for your answer very much. But I have  removed the disk from
 my set-up box, the data will be saved in memory and the system doesn't
 know there's no disk, it will still call fsync. What do you think in
 this case?


man fsync says:

SYNOPSIS
   #include <unistd.h>

   int fsync(int fd);

   int fdatasync(int fd);

DESCRIPTION
   fsync()  transfers  (flushes)  all  modified  in-core  data of
(i.e., modified buffer cache pages for) the file referred to by the file
   descriptor fd to the disk device (or other permanent storage device)
...


i.e. if you have no disk (you are using an in-memory VFS), then you have no
file descriptor, so fsync/fdatasync _cannot_ (legally) be called.
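
The semantics quoted from the man page can be sketched with Python's os module (an illustrative sketch; the file is a throwaway temp file, not anything from the thread):

```python
import os, tempfile

# fsync() operates on a file descriptor; whether the data actually
# reaches stable storage is up to the kernel and the device driver.
# On a diskless box whose filesystem is RAM-backed (e.g. tmpfs), the
# call typically returns almost immediately: there is no platter to
# wait for, even though the descriptor itself is perfectly valid.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello")
os.fsync(fd)            # flush this descriptor's dirty pages
os.close(fd)

with open(path, "rb") as f:
    data = f.read()
print(data)  # -> b'hello'
os.remove(path)
```

This also matches the original poster's observation: a valid fd (27) with fsync() returning quickly is consistent with a memory-backed filesystem.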

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do. -- Bigby Wolf


Re: [sqlite] Huge WAL log

2015-01-19 Thread Jan Slodicka
Dan Kennedy-4 wrote
 Is it correct that you have a single transaction inserting lots of data 
 into a table with multiple indexes on it? Something like 1GB?

It depends. It is the best option from the application's point of view; other
solutions introduce additional risks. Apparently the answer is different if
we adopt the database's point of view.


Dan Kennedy-4 wrote
 When an SQL write transaction is performed, SQLite begins by modifying 
 database pages within its internal page-cache. If the transaction 
 modifies only a few pages, all dirty pages stay in the cache until the 
 user executes COMMIT, at which point they are appended to the *-wal 
 file. However, the page-cache is of limited size (by default 2000 
 pages), and once it is completely full of dirty pages SQLite begins 
 appending them to the *-wal file mid-transaction in order to free up 
 space. At this point each time SQLite needs to modify a page that is not 
 already in the cache it must select a dirty page to write out to the 
 *-wal file so as to free up space to load the new page into the cache 
 where it can be modified. This means that a transaction with a large 
 working set may append more than one copy of a single page to the *-wal 
 file. Maybe many, many copies.
 
 SQLite indexes are b-trees. Each b-tree node is stored on a database 
 page. So if you're inserting keys in random order into a large index 
 (one too large to fit entirely within the page-cache), then virtually 
 all inserts result in an existing dirty page being flushed from the 
 cache and appended to the *-wal file.

Thanks for clarifying. In fact I was slowly iterating to similar
conclusions.

Don't you think the SQLite documentation should discuss this risk in
greater detail than it currently does? 


Dan Kennedy-4 wrote
 Inserting keys in sorted order avoids the problem because 
 all writes go to the right-most leaf node of the index b-tree, which 
 will almost always be present in the page-cache.
 
 To change the size of the page-cache, see the PRAGMA cache_size and 
 PRAGMA page_size commands.
 
 One way to get keys to be inserted in order is to create all indexes 
 after populating the table. SQLite sorts the data before creating the 
 index b-tree in this case.
 
 The other is to create a temp (or non temp) table with *exactly the 
 same* columns and indexes as the table to be populated and insert the 
 new rows into it. Then running:
 
INSERT INTO target_tbl SELECT * FROM temp_tbl;
 
 In this case, SQLite detects the similar schemas and copies keys in 
 sorted order from the indexes on temp_tbl to the corresponding index 
 on target_tbl.

Good tips, thanks.
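
The temp-table trick quoted above can be sketched with Python's sqlite3 module (an illustrative sketch; table and column names are made up, and the row count is kept tiny so it runs instantly):

```python
import sqlite3, random

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE target_tbl(k TEXT PRIMARY KEY, v INTEGER);
    CREATE TEMP TABLE temp_tbl(k TEXT PRIMARY KEY, v INTEGER);
""")

# Stage randomly ordered keys in the identically shaped temp table...
rows = [("key%06d" % random.randrange(10**6), i) for i in range(1000)]
with con:
    con.executemany("INSERT OR IGNORE INTO temp_tbl VALUES (?,?)", rows)

# ...then copy in one statement.  Per Dan's explanation, SQLite can feed
# the target's index in sorted key order instead of dirtying random
# b-tree pages, which keeps the WAL from ballooning.
with con:
    con.execute("INSERT INTO target_tbl SELECT * FROM temp_tbl")

n = con.execute("SELECT count(*) FROM target_tbl").fetchone()[0]
m = con.execute("SELECT count(*) FROM temp_tbl").fetchone()[0]
print(n == m)  # -> True
```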





--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Huge-WAL-log-tp79991p80146.html
Sent from the SQLite mailing list archive at Nabble.com.


Re: [sqlite] Fsync or fdatasync performs slowly

2015-01-19 Thread Stephan Beal
On Mon, Jan 19, 2015 at 3:49 AM, Wei, Catherine catherine@arris.com
wrote:

 Hi, I'm running a software in a set-up box with linux system and find
 that every time when I commit transaction, sqlite takes too much time
 when it executes fsync or fdatasync function. What could be the possible
 reasons?


FWIW, fsync/fdatasync() are _system_ calls, so the OS or one of its drivers
(not sqlite) is taking too much time to return. See also:

https://www.sqlite.org/c3ref/c_sync_dataonly.html
https://www.sqlite.org/pragma.html#pragma_synchronous
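
How often SQLite issues those system calls is controlled by PRAGMA synchronous, which can be sketched with Python's sqlite3 module (an illustrative sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# PRAGMA synchronous controls how often SQLite calls the VFS sync
# method (fsync/fdatasync on Unix): 0 = OFF, 1 = NORMAL, 2 = FULL.
# NORMAL syncs less often than FULL, trading some durability for speed.
con.execute("PRAGMA synchronous=NORMAL")
after = con.execute("PRAGMA synchronous").fetchone()[0]
print(after)  # -> 1
```

Lowering synchronous is only safe if you accept the durability trade-offs described on the pragma page.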

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do. -- Bigby Wolf


Re: [sqlite] Page_size increase from 1k to 4k made my REPLACE INTO slower. why?

2015-01-19 Thread Hick Gunter
From what little you reveal, I assume this is some kind of data-logging 
application. I also assume there is a primary key (Call?) and also 
suspect that there are a number of secondary indices for data retrieval. Since 
you make no mention of transactions, I must infer that you are using 
autocommit, i.e. the data is committed to disk for each and every row.

For the primary key, only the rightmost page of the b-tree will be affected; 
for other indices, it is quite likely that random pages (different for each 
row) will be hit.

In this scenario, increasing the page size means that 4 times as many sectors 
need to hit the disk surface for each statement.

To increase speed, consider committing the inserts at regular intervals (1 per 
second maybe?), so that disk updates (which is where the time is spent) happen 
only once every quite a few records. Larger pages then have a chance of 
getting filled and may also improve the locality of writes, i.e. fewer seeks 
and more consecutively written sectors.

You might also consider having a logger thread that writes into alternating 
table(s) (e.g. even and odd seconds' data) with INTEGER PRIMARY KEY and no 
other indices; and a transfer thread that copies the entries over into the 
real table in a batch transaction.
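A rough sketch of the batching idea in Python's sqlite3 (the table and values are made up; the point is one explicit transaction, and therefore one disk sync, per batch instead of per row):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
con.execute("CREATE TABLE spots(id INTEGER PRIMARY KEY, freq REAL)")

pending = [(i, 14000.0 + i) for i in range(500)]   # rows buffered by the logger

con.execute("BEGIN")                                # one explicit transaction
con.executemany("INSERT INTO spots VALUES (?, ?)", pending)
con.execute("COMMIT")                               # one sync for all 500 rows

count = con.execute("SELECT COUNT(*) FROM spots").fetchone()[0]
print(count)  # 500
```

With autocommit, each of the 500 inserts would have paid the full disk-sync cost on its own.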

-Ursprüngliche Nachricht-
Von: Andy (KU7T) [mailto:k...@ku7t.org]
Gesendet: Montag, 19. Jänner 2015 09:06
An: sqlite-users@sqlite.org
Betreff: [sqlite] Page_size increase from 1k to 4k made my REPLACE INTO 
slower. why?

Hi,



I am using System.Data.SQLite, Version=1.0.92.0. I read that increasing the 
page_size to 4k on modern OS is a good thing and should speed things up.
However, I have a particular query that takes substantially longer. I tried to 
make sure that nothing else changed, so I am a little puzzled. Can anyone 
explain why this is? The query is below, in case it gives a clue:



REPLACE INTO PacketSpots

(Call, TS, Band, Freq, QSXFreq, Comment, Spotter, Bearing, CountryPrefix, Sect, 
ZN, Exchange1, GridSquare, MiscText, Name, NR, Continent, Sunrise, Sunset, 
Latitude, Longitude, QualityTag)

VALUES

(@Call, @TS, @Band, @Freq, @QSXFreq, @Comment, @Spotter, @Bearing, 
@CountryPrefix, @Sect, @ZN, @Exchange1, @GridSquare, @MiscText, @Name, @NR, 
@Continent, @Sunrise, @Sunset, @Latitude, @Longitude, @QualityTag)



SQLite parameters not shown.



The query used to be sub 1 ms, now it is 28 ms. Since I get quite a few calls 
per second, this is a big change.



Anyone any ideas or tips?



Thanks

Andy



___
 Gunter Hick
Software Engineer
Scientific Games International GmbH
FN 157284 a, HG Wien
Klitschgasse 2-4, A-1130 Vienna, Austria
Tel: +43 1 80100 0
E-Mail: h...@scigames.at

This communication (including any attachments) is intended for the use of the 
intended recipient(s) only and may contain information that is confidential, 
privileged or legally protected. Any unauthorized use or dissemination of this 
communication is strictly prohibited. If you have received this communication 
in error, please immediately notify the sender by return e-mail message and 
delete all copies of the original communication. Thank you for your cooperation.




Re: [sqlite] Fsync or fdatasync performs slowly

2015-01-19 Thread Simon Slavin

On 19 Jan 2015, at 5:38am, Wei, Catherine catherine@arris.com wrote:

 The time I'm looking at is the duration for which the fsync or fdatasync
 function executes; it's about 30ms. I don't know whether it's related to the
 linux kernel or something else. I've tested it in another kind of set-up
 box and the duration of fsync is only about 5ms.
 The data being written is just one INSERT statement, very little.  Every
 time after executing the INSERT, the database transaction is committed,
 and fsync is executed during the commit process.

The answer is that the computer that says 5ms is lying to you.

A normal hard disk in a normal computer rotates at 5400 revolutions per minute.  
This means it completes one rotation in about 11ms.  And that means that waiting 
for the right part of the disk to pass under the read/write head takes an 
average of about 5.5ms.

Committing a transaction involves SQLite writing to the disk in a number of 
different places: it has to move the transaction from the journal file to the 
database file, which involves at least two read and two write commands, usually 
more than that.  (I do not know enough about SQLite internals to know the proper 
numbers, but I'm sure they are more than that.)  With an average latency of over 
5ms per access, this means that even in the best possible case committing a 
transaction will take about 20ms.  This suggests that the timing of 30ms you're 
getting from your second computer is about right.

So why is the first computer lying to you?  Well, it's to make itself seem 
faster.  In this case, when a 'write' command is issued to the disk drive, the 
drive reports 'finished' immediately, before the change has actually been made 
on disk.  This doesn't matter when a computer is used for normal desktop 
purposes (web browser, word processor), because you might just lose the last 
few characters of a word-processing file, but it matters a lot in a database 
like SQLite, because writing one row to a database involves many changes, and 
if they're not consistent then the database may be corrupted.  So for the 
safety of the data in your database, your 30ms computer is the good computer 
and your 5ms computer is the bad computer.

It's worth noting that this long delay (30ms) is only for writing a 
transaction, not for each change to the database.  So

BEGIN
INSERT ...
INSERT ...
INSERT ...
97 more of these ...
END

only involves one long update, not 100 long updates.

Simon.


[sqlite] Malformatted output by sqlite3

2015-01-19 Thread Stephan Buchert
(Prompt removed for easier copy and paste):

CREATE TABLE satpos(msec INTEGER, latitude REAL, longitude REAL);
INSERT INTO satpos VALUES (86386217,-0.0318895369716216,-167.689719869132);
INSERT INTO satpos VALUES
(86386716,-2.93238037697483e-06,-167.690497310632);
INSERT INTO satpos VALUES (86387217,0.0319616241531195,-167.69127664905);
SELECT * FROM satpos;
86386217|-0.0318895369716216|-167.689719869132
86386716|-2.93238037697483e-06|-167.690497310632
86387217|0.0319616241531195|-167.69127664905

So far, so good, but:

.mode column
.width -8 -7 -8
SELECT * FROM satpos;
86386217  -0.0318  -167.689
86386716  -2.9323  -167.690
86387217  0.03196  -167.691

In the 2nd row, 2nd column -0. or -2.9e-6 would make me happy. But
-2.9323 definitely messes up my stuff.


Re: [sqlite] Huge WAL log

2015-01-19 Thread Jan Slodicka
I'll add the results from the latest test that was running for the last 2
hours:

This time I did not use intermediate commits. Instead, I dropped the table
indexes before the bulk insert started and re-created them at the end.

The results after a few tables (most of them small, a few contained
5-10 records, one table contained 3.4 mil records):
WAL size = 599 MB (instead of original 7 GB)
DB size = 662 MB
Index rebuild of the huge table took 114 secs.

So it was definitely the index that caused the huge WAL size.





--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Huge-WAL-log-tp79991p80148.html
Sent from the SQLite mailing list archive at Nabble.com.


Re: [sqlite] Malformatted output by sqlite3

2015-01-19 Thread Hick Gunter
Works as specified.

The .width command sets the output width of a column to a certain number of 
characters; output that is shorter is padded on the right (or on the left, if 
the width is negative); output that is too long is truncated.

Use the printf() function to define the format of real numbers.
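For example (the format string here is just one possible choice; SQLite's printf() follows the usual C conventions):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Format the REAL value in the query itself, instead of relying on
# .width, which truncates rather than rounds.
formatted = con.execute(
    "SELECT printf('%9.2e', -2.93238037697483e-06)").fetchone()[0]
print(formatted)  # '-2.93e-06'
```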

-Ursprüngliche Nachricht-
Von: Stephan Buchert [mailto:stephanb...@gmail.com]
Gesendet: Montag, 19. Jänner 2015 11:53
An: sqlite-users@sqlite.org
Betreff: [sqlite] Malformatted output by sqlite3

(Prompt removed for easier copy and paste):

CREATE TABLE satpos(msec INTEGER, latitude REAL, longitude REAL);
INSERT INTO satpos VALUES (86386217,-0.0318895369716216,-167.689719869132);
INSERT INTO satpos VALUES
(86386716,-2.93238037697483e-06,-167.690497310632);
INSERT INTO satpos VALUES (86387217,0.0319616241531195,-167.69127664905);
SELECT * FROM satpos;
86386217|-0.0318895369716216|-167.689719869132
86386716|-2.93238037697483e-06|-167.690497310632
86387217|0.0319616241531195|-167.69127664905

So far, so good, but:

.mode column
.width -8 -7 -8
SELECT * FROM satpos;
86386217  -0.0318  -167.689
86386716  -2.9323  -167.690
86387217  0.03196  -167.691

In the 2nd row, 2nd column -0. or -2.9e-6 would make me happy. But 
-2.9323 definitely messes up my stuff.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users




[sqlite] Changing Page_size fails sometimes, retry succeeds but is messy. Ideas?

2015-01-19 Thread Andy (KU7T)
Hi,

 

I determined that I would like to upgrade my page_size from 1k to 4k during
start up time if it is not done yet. At the time I would like to do that, no
other db access is occurring yet. Still, I see failures that often the
page_size returned at the end in the verification step is still the old one.
Doing a simple retry succeeds. Does it somehow take a while for a page_size
change to go into effect or what else can be done to make this work
correctly?  The steps I am executing right now are (in WAL mode):

 

Pragma wal_checkpoint

Vacuum

Pragma journal_mode=Delete

Pragma page_size=4096

Vacuum

Pragma journal_mode=WAL

Pragma page_size (verification step that often returns 1024 even though it
should return 4096)

 

Any ideas what I am doing wrong? Do you have to lock exclusively or wait
somewhere?

 

Thanks

Andy

 



[sqlite] Advice needed for optimization options

2015-01-19 Thread Andy Jewell
I have a single threaded application, which at its heart is

while (true) {
open a database // any one of thousands, so I can’t just keep it open
do a select
close the database
}

With that part unchanged, does anyone have any optimization suggestions? I am 
compiling the amalgamation myself, so I’m open to both compile time and run 
time options. I don’t need to worry about other processes updating the 
database. I’m not looking for SQL advice at this time. 

Opening the database with SQLITE_OPEN_READONLY was an enormous help. 
Explicitly turning off threading didn’t seem to help much. Changing my compiler 
optimization level from -O2 to -O3 (gcc 4.4) made the binary larger but no faster.


many thanks,
adj



Re: [sqlite] Changing Page_size fails sometimes, retry succeeds but is messy. Ideas?

2015-01-19 Thread Simon Slavin

On 19 Jan 2015, at 6:04pm, Andy (KU7T) k...@ku7t.org wrote:

 Any ideas what I am doing wrong. Do you have to exclusively lock or wait
 somewhere?

You have to run VACUUM immediately after setting the new page_size.  See

http://www.sqlite.org/pragma.html#pragma_page_size

One normally only sets the page_size when creating a new database.  VACUUM 
simulates writing a new database.
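A sketch of the whole sequence from Python's sqlite3, under the assumption of a single connection with no other readers (the database must leave WAL mode before the page size can change, and VACUUM must follow immediately):

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "test.db")
con = sqlite3.connect(path, isolation_level=None)   # autocommit; VACUUM needs it
con.execute("PRAGMA page_size=1024")
con.execute("CREATE TABLE t(x)")                    # file created with 1 KiB pages
con.execute("PRAGMA journal_mode=WAL")

con.execute("PRAGMA journal_mode=DELETE")           # leave WAL before resizing
con.execute("PRAGMA page_size=4096")
con.execute("VACUUM")                               # rebuild with the new size
con.execute("PRAGMA journal_mode=WAL")

size = con.execute("PRAGMA page_size").fetchone()[0]
print(size)  # 4096
```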

Simon.


Re: [sqlite] When to run ANALYZE

2015-01-19 Thread Simon Slavin

 On 19 Jan 2015, at 4:11pm, Andy (KU7T) k...@ku7t.org wrote:
 
 ANALYZE
 
 ANALYZE sqlite_master 
 
 
 
 To my app at timely intervals. We are also running in WAL mode. Do I need to
 do a WAL checkpoint (pragma wal_checkpoint) before running the index recalc
 or not?

You don't even need to run ANALYZE at timely intervals.  Just do an ANALYZE once, 
when you have realistic data in your tables.  You don't need to do it again 
unless the /type/ of data you're storing changes.  Changes to the data itself 
don't require another ANALYZE.

ANALYZE makes SQLite examine the columns for chunkiness.  In other words, it 
tells the optimizer that your surname column has almost as many different 
values as it has rows, whereas the membershipType column has only two values: 
'normal' and 'premium'.  Unless you suddenly introduce three new types of 
membership, there's no need to run ANALYZE again.
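ANALYZE stores its results in the sqlite_stat1 table, which you can inspect directly. A small sketch using the membershipType example above (the schema is invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE members(surname TEXT, membershipType TEXT)")
con.execute("CREATE INDEX members_type ON members(membershipType)")
con.executemany("INSERT INTO members VALUES (?, ?)",
                [("name%03d" % i, "premium" if i % 2 else "normal")
                 for i in range(100)])
con.execute("ANALYZE")

# stat is "<rows> <avg rows per distinct key>": 100 rows, 2 values -> "100 50"
tbl, idx, stat = con.execute(
    "SELECT tbl, idx, stat FROM sqlite_stat1").fetchone()
print(tbl, idx, stat)
```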

Simon.


Re: [sqlite] ORDER BY and LIMIT regression

2015-01-19 Thread Richard Hipp
On 1/19/15, Angelo Mottola a.mott...@converge.it wrote:
 Hello,

 I have a regression to report, that seems to have been introduced between
 SQLite 3.8.6 and the newest 3.8.8 (at least our test case worked in 3.8.6
 and stopped working somewhere in 3.8.7.x; we were hoping it got fixed in
 3.8.8 but eventually it wasn’t).


Thank you for reporting the problem.

We always fix every problem that we are aware of in SQLite.  But this
problem had not been previously reported to us, and did not occur in
any of the 168 million test cases that we ran prior to releasing
SQLite 3.8.8, so it didn't get fixed.

We are running another release cycle now.  SQLite version 3.8.8.1
should be out sometime tomorrow, assuming everything goes well.

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Fsync or fdatasync performs slowly

2015-01-19 Thread Richard Hipp
On 1/19/15, Roger Binns rog...@rogerbinns.com wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 On 01/18/2015 06:49 PM, Wei, Catherine wrote:
 takes too much time when it executes fsync or fdatasync function.

 Note that although the documentation says only the file handle is
 synced, in practice many filesystems actually sync the whole
 filesystem, writing out all pending data for all files.

 The reason is that the underlying filesystem is written to carefully
 order its writes (especially of metadata), to ensure robustness
 against power losses, make syncs easier to manage, and make fsck
 easier.  Syncing just the one file handle would be too complicated,
 so they sync everything.

 You need to watch out for other write activity on the system, not just
 SQLite.


All true.  But also note that you can dramatically reduce the number
of fsync() calls by using WAL mode in SQLite.
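A short sketch of enabling WAL from Python's sqlite3 (journal_mode is persistent for file databases, so this only needs to be run once per database; in-memory databases cannot use WAL):

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "wal.db")
con = sqlite3.connect(path, isolation_level=None)
# The pragma returns the mode actually in effect.
mode = con.execute("PRAGMA journal_mode=WAL").fetchone()[0]
print(mode)  # wal
```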

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Fsync or fdatasync performs slowly

2015-01-19 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 01/18/2015 06:49 PM, Wei, Catherine wrote:
 takes too much time when it executes fsync or fdatasync function.

Note that although the documentation says only the file handle is
synced, in practice many filesystems actually sync the whole
filesystem, writing out all pending data for all files.

The reason is that the underlying filesystem is written to carefully
order its writes (especially of metadata), to ensure robustness
against power losses, make syncs easier to manage, and make fsck
easier.  Syncing just the one file handle would be too complicated,
so they sync everything.

You need to watch out for other write activity on the system, not just
SQLite.

Roger

-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iEYEARECAAYFAlS9ZfwACgkQmOOfHg372QTsbACgzB3nBOgN3PnHo0C15IPIAQzq
2gYAoLXA3UF6bA5kNSfI31AauNQUlwu5
=uWTV
-END PGP SIGNATURE-


Re: [sqlite] Malformatted output by sqlite3

2015-01-19 Thread Stephan Buchert
Aha, thanks. PRINTF is the function to use here (or ROUND). I had been too
optimistic in thinking that -0. would be the truncated output.