Re: [sqlite] "INSERT" and "END TRANSACTION" time

2014-03-03 Thread romtek
If that were true, then I wouldn't be getting a very large speed-up when
wrapping write ops in an explicit transaction, would I?


On Mon, Mar 3, 2014 at 8:44 PM, Simon Slavin  wrote:

>
> On 4 Mar 2014, at 1:15am, romtek  wrote:
>
> > I have a question based on my observation. According to your numbers for
> a
> > 5400 RPM disk, one write op should take about 11 ms. However, it often
> > takes only about 1 ms on HostGator drives on its shared hosting servers.
> > Are there drives that are SO much faster than 5400 RPM ones?
>
> I'll bet that the hosting servers are virtual machines and the drives are
> virtual drives, not physical drives.  Everything is actually done in RAM
> and just flushed to physical disk every so often.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Using WHERE clauses with REAL data

2014-03-03 Thread Christopher Vance
There are a small number of floating point numbers which are almost always
exactly testable for equality, namely those you are confident are stored
exactly. Positive zero is good, although negative zero sometimes causes
problems; you can't always assume they're the same number. Smallish
integers multiplied or divided by smallish powers of two are usually fine,
so 3.25 is probably okay while 3.2501 and 52.3 are definitely not.
In most programming languages, if you want to check (near) equality of
floating point numbers you need to think about how many significant bits
you might have in the relevant mantissa, and check the absolute value of
the difference between what you have and what you're looking for, possibly
scaled for the exponent concerned.
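Christopher's advice can be sketched in a few lines (a minimal illustration; the tolerance value is an arbitrary assumption, not a recommendation):

```python
import math

# 3.25 is a smallish integer scaled by a power of two, so it is stored
# exactly in binary floating point; 0.1 + 0.2 is not exactly 0.3.
assert 13 / 4 == 3.25
assert 0.1 + 0.2 != 0.3

# Near-equality check: compare the absolute difference against a
# tolerance scaled for the magnitude (exponent) of the operands.
def nearly_equal(a, b, rel_tol=1e-9):
    return abs(a - b) <= rel_tol * max(abs(a), abs(b))

assert nearly_equal(0.1 + 0.2, 0.3)
assert not nearly_equal(3.25, 3.2501)

# The standard library packages the same idea as math.isclose().
assert math.isclose(0.1 + 0.2, 0.3)
```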


On Tue, Mar 4, 2014 at 3:20 PM, Richard Hipp  wrote:

> On Mon, Mar 3, 2014 at 11:14 PM, Donald Shepherd
> wrote:
>
> > It appears that using equals on floating point (REAL) data in WHERE
> clauses
> > doesn't necessarily work, presumably because of rounding errors - see
> below
> > for an example.  Is this the case?  Do I need to use BETWEEN instead of =
> > as I expect to be the case?
> >
>
> Never (well, almost never) use == with floating point numbers.  This rule
> applies to all systems, not just SQLite.
>
> --
> D. Richard Hipp
> d...@sqlite.org



-- 
Christopher Vance


Re: [sqlite] Using WHERE clauses with REAL data

2014-03-03 Thread Donald Shepherd
Thanks for the quick response.  That was what I was expecting.

Regards,
Donald.


On 4 March 2014 15:20, Richard Hipp  wrote:

> On Mon, Mar 3, 2014 at 11:14 PM, Donald Shepherd
> wrote:
>
> > It appears that using equals on floating point (REAL) data in WHERE
> clauses
> > doesn't necessarily work, presumably because of rounding errors - see
> below
> > for an example.  Is this the case?  Do I need to use BETWEEN instead of =
> > as I expect to be the case?
> >
>
> Never (well, almost never) use == with floating point numbers.  This rule
> applies to all systems, not just SQLite.
>
> --
> D. Richard Hipp
> d...@sqlite.org


Re: [sqlite] Using WHERE clauses with REAL data

2014-03-03 Thread Richard Hipp
On Mon, Mar 3, 2014 at 11:14 PM, Donald Shepherd
wrote:

> It appears that using equals on floating point (REAL) data in WHERE clauses
> doesn't necessarily work, presumably because of rounding errors - see below
> for an example.  Is this the case?  Do I need to use BETWEEN instead of =
> as I expect to be the case?
>

Never (well, almost never) use == with floating point numbers.  This rule
applies to all systems, not just SQLite.

-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] Using WHERE clauses with REAL data

2014-03-03 Thread Donald Shepherd
It appears that using equals on floating point (REAL) data in WHERE clauses
doesn't necessarily work, presumably because of rounding errors - see below
for an example.  Is this the case?  Do I need to use BETWEEN instead of =
as I expect to be the case?

Thanks,
Donald.

sqlite> select * from table2 where Id='METER' and Type=0 and Pointer=15 and
"Date"=30414.0 and "Time"=141954.0;

RowId|Id|Type|Pointer|Date|Time|3|4|etc

31|*METER*|*0*|*15*|*30414.0*|*141954.0*|*3.4504768372*
|4.5594277954|5.6707629395|6.7896185303|7.888664856|8
.8961853027|9.0122888184|10.118855591|11.225422363|12.341525879|13.448092651|14.564196167|15.670762939|1.0
32|*METER*|*0*|*15*|*30414.0*|*141954.0*|*3.4504768372*
|4.5594277954|5.6707629395|6.7896185303|7.888664856|8
.8961853027|9.0122888184|10.118855591|11.225422363|12.341525879|13.448092651|14.564196167|15.670762939|1.0
sqlite> select * from table2 where Id='METER' and Type=0 and Pointer=15 and
"Date"=30414.0 and "Time"=141954.0 and "3"=3.4504768372;
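A minimal reproduction of the problem and the workaround, using Python's sqlite3 module with a hypothetical one-column table (not Donald's actual schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(val REAL)")
con.execute("INSERT INTO t VALUES (0.1 + 0.2)")  # stored as 0.30000000000000004

# Exact equality misses the row because of binary rounding.
exact = con.execute("SELECT count(*) FROM t WHERE val = 0.3").fetchone()[0]

# A tolerance test (BETWEEN, or abs()) finds it.
between = con.execute(
    "SELECT count(*) FROM t WHERE val BETWEEN 0.3 - 1e-9 AND 0.3 + 1e-9"
).fetchone()[0]

assert exact == 0
assert between == 1
```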




Re: [sqlite] "INSERT" and "END TRANSACTION" time

2014-03-03 Thread Simon Slavin

On 4 Mar 2014, at 1:15am, romtek  wrote:

> I have a question based on my observation. According to your numbers for a
> 5400 RPM disk, one write op should take about 11 ms. However, it often
> takes only about 1 ms on HostGator drives on its shared hosting servers.
> Are there drives that are SO much faster than 5400 RPM ones?

I'll bet that the hosting servers are virtual machines and the drives are 
virtual drives, not physical drives.  Everything is actually done in RAM and 
just flushed to physical disk every so often.

Simon.


Re: [sqlite] "INSERT" and "END TRANSACTION" time

2014-03-03 Thread romtek
Thanks for the clarification, Simon!

I have a question based on my observation. According to your numbers for a
5400 RPM disk, one write op should take about 11 ms. However, it often
takes only about 1 ms on HostGator drives on its shared hosting servers.
Are there drives that are SO much faster than 5400 RPM ones?


On Mon, Mar 3, 2014 at 5:57 PM, Simon Slavin  wrote:

>
> On 3 Mar 2014, at 9:11pm, romtek  wrote:
>
> > Simon, does a real disk have to be a rotating hard disk? Is there a
> > problem with SSDs as far as SQLite is concerned?
>
> SSDs aren't a problem, and SQLite works fine with them, but they change
> the timings associated with SQLite a great deal.  Simplified explanation
> follows.
>
> With rotating hard disks each read or write operation has to wait for the
> disk to rotate to the right place.  And if you have your files
> defragmented, all of a SQLite database is clumped up together on disk right
> next to the journal file for that database.  An END TRANSACTION command can
> involve six (more ?  anyone ?) accesses of database or journal file, and if
> all the parts of disk you're writing to are near one-another that can
> involve six rotations of the hard disk.  So ...
>
> Hard disk rotates at 5400 rpm = 90 rotations/s.
> 6 reads or writes to disk, assuming no skipped rotations, takes 66 ms =
> 1/15th of a second.
> A loop doing 100 INSERTs can take over six seconds!
>
> This means that in real life SQLite can be faster if you do /not/
> defragment your drive.  Because you don't have to wait for a full rotation
> every time.
>
> SSDs, of course, don't have to wait for anything physical.  The
> improvement in speed gained by using transactions does not work the same
> way: there are still fewer operations, so it should still be faster, but
> not by as much.  My original statement about so much of the time being
> spent on the END TRANSACTION no longer holds.  That's why I asked.
>
> Simon.


Re: [sqlite] "INSERT" and "END TRANSACTION" time

2014-03-03 Thread Simon Slavin

On 3 Mar 2014, at 9:11pm, romtek  wrote:

> Simon, does a real disk have to be a rotating hard disk? Is there a problem
> with SSDs as far as SQLite is concerned?

SSDs aren't a problem, and SQLite works fine with them, but they change the 
timings associated with SQLite a great deal.  Simplified explanation follows.

With rotating hard disks each read or write operation has to wait for the disk 
to rotate to the right place.  And if you have your files defragmented, all of 
a SQLite database is clumped up together on disk right next to the journal file 
for that database.  An END TRANSACTION command can involve six (more?
anyone?) accesses of the database or journal file, and if all the parts of the
disk you're writing to are near one another, that can involve six rotations of
the hard disk.  So ...

Hard disk rotates at 5400 rpm = 90 rotations/s.
6 reads or writes to disk, assuming no skipped rotations, takes 66 ms = 1/15th
of a second.
A loop doing 100 INSERTs can take over six seconds!
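Simon's arithmetic, worked through (the six-access figure is his estimate, not a measured number):

```python
RPM = 5400
rotations_per_second = RPM / 60                 # 90 rotations/s
ms_per_rotation = 1000 / rotations_per_second   # ~11.1 ms each
accesses_per_commit = 6                         # Simon's estimate
ms_per_commit = accesses_per_commit * ms_per_rotation   # ~66.7 ms, about 1/15 s
seconds_for_100_inserts = 100 * ms_per_commit / 1000    # ~6.7 s

assert rotations_per_second == 90
assert round(ms_per_commit) == 67
assert seconds_for_100_inserts > 6
```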

This means that in real life SQLite can be faster if you do /not/ defragment
your drive, because you don't have to wait for a full rotation every time.

SSDs, of course, don't have to wait for anything physical.  The improvement in 
speed gained by using transactions does not work the same way: there are still 
fewer operations, so it should still be faster, but not by as much.  My 
original statement about so much of the time being spent on the END TRANSACTION 
no longer holds.  That's why I asked.

Simon.


Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Keith Medcalf

>Another way to bust your data is to rely on RAID 5 or 6 or similar, at
>least if the software does not take special care.
>
>With those mechanisms, updating a block always results in at least 2 disk
>writes: the data block and the checksum block. There's a small time window
>where only one of those blocks has physically reached the disk. Now, when the
>power fails during said time window, and the third disk fails, its content
>will be restored using the new data block and the old checksum (or vice
>versa), leaving your data garbled.

Generally this is only an issue with fake-RAID (aka software RAID).  Hardware 
RAID will issue the writes to update the stripe in parallel across all spindles 
which need to be updated.  Moreover, although writes to a hardware RAID device 
are signaled complete once the data has been written into the buffer on the 
RAID controller, the hardware will take special precautions to ensure that any 
write which makes it into the hardware buffers is properly written to disk even 
if there is a power failure before the scatter-write-with-verify to the 
physical media has returned completion-without-error for all spindles.  You 
will only lose data if the power is out for longer than the battery on the 
hardware controller can maintain the buffer -- and the better classes of 
hardware RAID contain NVRAM to which "dirty" stripes are flushed on power loss 
so that they can be written to the physical spindles even if the power is not 
restored until long after the buffer RAM battery has lost power.  

In other words, you get what you pay for.






Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Jean-Christophe Deschamps

At 21:35 03/03/2014, you wrote:
´¯¯¯
RAID3-4-5 was great when disks were expensive, in the '80s and '90s. Now 
it isn't. A minimal RAID5 needs 3 disks; a minimal RAID10 needs 4. An 
enterprise SAS 15Krpm 146 GB 6G disk is $350, and a non-enterprise-grade 
one is cheaper and bigger. Now RAID1E and RAID10E give more flexibility and 
variable security, from "paranoid" to "I don't care" grades.

`---

The point being discussed was not performance or cost, but the imaginary 
claim that RAID5-6 and variations have an inherent, by-design fatal flaw 
that makes them break down because a parity block can be out of sync with 
the corresponding data blocks. This is bullshit, period.


Nothing in the RAID5-6 design mandates serialization of writes, far from it. 
It's only when cheap, unreliable hardware is put to work under below-par 
software that the issue can be a real-world problem.


So the rant against the design of parity-enabled RAIDs is moot, if not 
plainly fallacious, unless "software RAID without dedicated controller" is 
clearly mentioned.


About SAS disks: they actually have very high reliability and don't lie, 
contrary to SATA disks (on both points).


This is not a war about mine being bigger, but it's better to have facts 
stated right. All high-end reliable machines and storage subsystems only 
run parity-enabled RAID levels, and this technology isn't going to 
disappear tomorrow. 




Re: [sqlite] "INSERT" and "END TRANSACTION" time

2014-03-03 Thread romtek
>
> a normal DB where writing does not happen often I would suggest a SSD...
> but anything with a very active read/write cycle is best avoided


If you were to quantify this, would you say that "active" starts at once
per second, many times per second, once a minute, or...?


On Mon, Mar 3, 2014 at 3:47 PM, RSmith  wrote:

>
> On 2014/03/03 23:11, romtek wrote:
>
>> Simon, does a real disk have to be a rotating hard disk? Is there a problem
>> with SSDs as far as SQLite is concerned?
>>
>
> No, what Simon is trying to point out is simply that the write performance
> experienced by L. Wood might be because journal writes might be synced to
> hard disk (in rotating disk cases anyway) and as such cause delays all
> throughout the transaction even if they are not holding up the final
> commit.  Not because this is in error or wrong in any way, simply as trying
> to explain why he sees the performance spread he sees.
>
> Other reasons might be excessively large binds causing the cache to spill
> or simply have memory writes taking so long that it seems to cause
> inter-transaction delays, although if that were the case one would expect
> the final commit to take a lot longer even.
>
> My money is still on the specific pragmas used which might be forcing
> syncs or non-ACID operation. We would need to have the DB schema and the
> typical query to really test why it works the way it works in his case.
>
>
> One note on SSD's, they pose no physical problem to SQLite, and in fact
> works magically fast, but having a DB which read/write a LOT of data on
> them is not really great since  the repeated read-write cycle of specific
> data areas tire down the typical NAND flash that makes up the SSD, even
> though modern SSDs may use MLC NAND or have firmware that tries to exercise
> every bit in memory equally so as to spread the write operations to avoid
> one piece of disk-memory dying quickly. Eventually though, when all bits of
> memory experienced upwards of 500K write operations (which is quite a
> while), it will fail... but you will have lots of warning. A read-only DB
> on an SSD drive cannot be beaten... even a normal DB where writing does not
> happen often I would suggest a SSD... but anything with a very active
> read/write cycle is best avoided - or at a minimum backed up by a good old
> rotating magnetic platter drive.
>
>


Re: [sqlite] "INSERT" and "END TRANSACTION" time

2014-03-03 Thread RSmith


On 2014/03/03 23:11, romtek wrote:

Simon, does a real disk have to be a rotating hard disk? Is there a problem
with SSDs as far as SQLite is concerned?


No, what Simon is trying to point out is simply that the write performance 
experienced by L. Wood might be because journal writes are synced to hard disk 
(in the rotating-disk case anyway) and as such cause delays all throughout the 
transaction even if they are not holding up the final commit.  Not because 
anything is in error or wrong; he is simply trying to explain the performance 
spread L. Wood sees.


Other reasons might be excessively large binds causing the cache to spill, or 
memory writes taking so long that they seem to cause inter-transaction delays, 
although if that were the case one would expect the final commit to take even 
longer.


My money is still on the specific pragmas used, which might be forcing syncs 
or non-ACID operation. We would need the DB schema and the typical query to 
really test why it behaves the way it does in his case.



One note on SSDs: they pose no physical problem to SQLite, and in fact work 
magically fast, but keeping a DB which reads/writes a LOT of data on one is not 
really great, since the repeated write cycles to specific data areas wear down 
the typical NAND flash that makes up the SSD, even though modern SSDs may use 
MLC NAND or have firmware that tries to exercise every bit of memory equally so 
as to spread the write operations and avoid one piece of disk memory dying 
quickly. Eventually though, when all bits of memory have experienced upwards of 
500K write operations (which takes quite a while), it will fail... but you will 
have lots of warning. A read-only DB on an SSD drive cannot be beaten... even 
for a normal DB where writing does not happen often I would suggest an SSD... 
but anything with a very active read/write cycle is best avoided - or at a 
minimum backed up by a good old rotating magnetic platter drive.




Re: [sqlite] "INSERT" and "END TRANSACTION" time

2014-03-03 Thread romtek
Simon, does a real disk have to be a rotating hard disk? Is there a problem
with SSDs as far as SQLite is concerned?


On Mon, Mar 3, 2014 at 2:21 PM, Simon Slavin  wrote:

>
> On 3 Mar 2014, at 6:03pm, L. Wood  wrote:
>
> > _bind() on a prepared statement and execution of "INSERT" are taking 70%
> of the time, but the "END TRANSACTION" only 30% of the time.
> >
> > The time between _bind() and execution of "INSERT" is roughly 50/50, so
> it's a total of:
> >
> > _bind(): 35%
> > INSERT: 35%
> > END TRANSACTION: 30%
> >
> > I would have expected the execution of INSERT to take almost 0% within
> > a transaction. Why is this not the case?
>
> Have you used any PRAGMAs which defeat ACID ?  In other words, anything
> that speeds SQLite up ?
>
> Are you writing to a real file on a real rotating hard disk ?
>
> Simon.


Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Eduardo Morras
On Mon, 03 Mar 2014 17:36:10 +0100
Jean-Christophe Deschamps  wrote:

> 
> >It's how RAID5 works. Check this page docs http://baarf.com/ about
> >it.
> 
> This is utter BS.

No.
 
> Serious RAID controllers perform parallel I/O on as many drives that 
> are making up a given array. Of course I'm talking of SAS drives here 
> with battery backed-up controller.
> 
> Kid sister RAID5-6 implementations using SATA drives and no dedicated 
> hardware are best avoided and have more drawbacks than are listed in 
> cited prose.
> 
> I run 24/7 an Areca 1882i controller with 6 SAS 15Krpm drives in
> RAID6 and a couple more in RAID1 and I've yet to witness any problem
> whatsoever.

RAID3-4-5 was great when disks were expensive, in the '80s and '90s. Now it 
isn't. A minimal RAID5 needs 3 disks; a minimal RAID10 needs 4. An enterprise 
SAS 15Krpm 146 GB 6G disk is $350, and a non-enterprise-grade one is cheaper 
and bigger. Now RAID1E and RAID10E give more flexibility and variable 
security, from "paranoid" to "I don't care" grades.

When something goes wrong:

RAID 3-4-5-6
When one of your disks breaks, replace it. 
Then rebuild the RAID3-4-5-6. 
You need to read from all disks to recover the lost blocks. 
All disks are busy recovering and your R/W performance drops. 
Recovery reads the corresponding blocks and the parity data, does some 
computation, and writes the lost block. 
If any of the RAID disks is near its MTBF and fails, you lose everything.

RAID 10
When one of your disks breaks, replace it.
Then rebuild the RAID10.
You need to read from the mirror disks to recover the lost blocks.
Only the mirror disks are busy recovering, and your R/W performance drops only 
when accessing data on those disks.
Recovery reads the same block and directly writes the lost block.
If all the disks that mirror the broken one are near their MTBF and fail, you 
lose everything.

The time to recover a RAID 10 is less (a lot less) than rebuilding a 
RAID3-4-5-6.
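The parity mechanics behind these recovery procedures can be sketched with XOR (a toy illustration with two data blocks, not a real RAID implementation):

```python
# RAID 5 stores parity = XOR of the data blocks in a stripe.  Any one
# lost block is recovered by XORing all surviving blocks with the parity,
# which is why a rebuild must read every remaining disk in the stripe.
d0 = bytes([0x01, 0x02, 0x03])
d1 = bytes([0xFF, 0x00, 0xAA])
parity = bytes(a ^ b for a, b in zip(d0, d1))

# The disk holding d1 fails; reconstruct it from d0 and the parity block.
recovered = bytes(a ^ p for a, p in zip(d0, parity))
assert recovered == d1
```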

> It's just like talking BS on a language because of some obscure bug
> in a non-conformant compiler. 

No, it's talking BS about a language that is badly designed for your actual 
needs, no matter which compiler you use, because it is not an implementation 
problem. 

---   ---
Eduardo Morras 


Re: [sqlite] "INSERT" and "END TRANSACTION" time

2014-03-03 Thread Simon Slavin

On 3 Mar 2014, at 6:03pm, L. Wood  wrote:

> _bind() on a prepared statement and execution of "INSERT" are taking 70% of 
> the time, but the "END TRANSACTION" only 30% of the time.
> 
> The time between _bind() and execution of "INSERT" is roughly 50/50, so it's 
> a total of:
> 
> _bind(): 35%
> INSERT: 35%
> END TRANSACTION: 30%
> 
> I would have expected the execution of INSERT to take almost 0% within a 
> transaction. Why is this not the case? 

Have you used any PRAGMAs which defeat ACID ?  In other words, anything that 
speeds SQLite up ?

Are you writing to a real file on a real rotating hard disk ?

Simon.
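For context, these are the kinds of PRAGMAs Simon means - settings that trade durability for speed (shown purely as an illustration, not a recommendation):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# synchronous=OFF skips fsync() calls; journal_mode=MEMORY keeps the
# rollback journal in RAM.  Both speed SQLite up at the cost of
# crash-safety (the D in ACID).
con.execute("PRAGMA synchronous=OFF")
mode = con.execute("PRAGMA journal_mode=MEMORY").fetchone()[0]
assert mode == "memory"
```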


Re: [sqlite] About "speed"

2014-03-03 Thread big stone
The results in .csv format for Mikael.

Sorry, I'm very bad at HTML; I hope someone can re-post it as a nice-looking
HTML table.

Note:
- Postgresql is not tuned at all, and its figures move a lot between two
measures.
- I couldn't do a Pandas measure because of "not enough memory".


"sequence of operations \ time sec(at first
try)","postgresql9.2.3","sqlite3.8.3.0 7200rpm_disk.db","sqlite3.8.3.0
:memory:","sqlite3.8.3.0 7200rpm_disk.db compiled -o2","sqlite3.8.3.0
:memory: compiled -o2"
"loading 5 344 498 records * 18 columns from .csv",78,151,131,137,51
"select cand_nm, sum(contb_receipt_amt) as total from fec group by cand_nm
",54,22,20,11,10
"select cand_nm, sum(contb_receipt_amt) as total from fec group by cand_nm
-- a second time",5.1,22,20,11,10
"select cand_nm, contbr_st, sum(contb_receipt_amt) as total from fec group
by cand_nm, contbr_st",5.6,27,23,16,14
"CREATE INDEX xyzzy2 ON fec(cand_nm, contbr_st,
contb_receipt_amt)",176,43,38,30,25
"select cand_nm, sum(contb_receipt_amt) as total from fec group by
cand_nm,",4.9,3,3,2,2
"select cand_nm, contbr_st, sum(contb_receipt_amt) as total from fec group
by cand_nm, contbr_st,",5.6,4,4,3,3
"total time (not relevant)",329,272,239,210,115
"relative speed-up (not relevant)",-21%,0%,12%,23%,58%


[sqlite] "INSERT" and "END TRANSACTION" time

2014-03-03 Thread L. Wood
In a recent thread, some of you told me that if I wrap lots of "INSERT" 
statements inside "BEGIN TRANSACTION" and "END TRANSACTION", then executing 
"END TRANSACTION" will take most (99%) of the time.

This is not happening.

_bind() on a prepared statement and execution of "INSERT" are taking 70% of the 
time, but the "END TRANSACTION" only 30% of the time.

The time between _bind() and execution of "INSERT" is roughly 50/50, so it's a 
total of:

_bind(): 35%
INSERT: 35%
END TRANSACTION: 30%

I would have expected the execution of INSERT to take almost 0% within a 
transaction. Why is this not the case?
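For reference, the batching being discussed looks like this in Python's sqlite3 module (a sketch; timings vary by platform, so only correctness is checked here):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.isolation_level = None          # manage transactions explicitly
con.execute("CREATE TABLE t(x INTEGER)")

con.execute("BEGIN TRANSACTION")
stmt = "INSERT INTO t VALUES (?)"   # statement prepared once, bound per row
for i in range(1000):
    con.execute(stmt, (i,))         # bind + INSERT inside the transaction
con.execute("END TRANSACTION")      # one commit covers all 1000 rows

count = con.execute("SELECT count(*) FROM t").fetchone()[0]
assert count == 1000
```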


Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 03/03/14 03:00, Simon Slavin wrote:
> What the heck ?  Is this a particular implementation of RAID ...

The technical term is "write hole" and can occur at many RAID levels:

  http://www.raid-recovery-guide.com/raid5-write-hole.aspx

You can mitigate it by having a setup that doesn't have failures such as
using battery backup.

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.14 (GNU/Linux)

iEYEARECAAYFAlMUxBMACgkQmOOfHg372QTxewCgjuqKWh4m+pz2JRtQWznPA83o
YEcAnjDuMMULpMX14VVlLsQ4NmJbD6PA
=Dp0Y
-END PGP SIGNATURE-


Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Jean-Christophe Deschamps



That's how RAID5 works. Check the docs at http://baarf.com/ about it.


This is utter BS.

Serious RAID controllers perform parallel I/O on as many drives as make up 
a given array. Of course I'm talking about SAS drives here with a 
battery-backed controller.


Kid-sister RAID5-6 implementations using SATA drives and no dedicated 
hardware are best avoided and have more drawbacks than are listed in the 
cited prose.


I run 24/7 an Areca 1882i controller with 6 SAS 15Krpm drives in RAID6 
and a couple more in RAID1 and I've yet to witness any problem whatsoever.


It's just like talking BS on a language because of some obscure bug in 
a non-conformant compiler. 




Re: [sqlite] Issues after upgrade to 3.8.2

2014-03-03 Thread Denis Gladkikh
Hey Guys,

I still have this issue with SQLite 3.8.3.1. I could fix my case by
changing the line

"LastPlayed] = (select case when [UserPlaylist].[LastPlayed] >
s.[LastPlayed] then [UserPlaylist].[LastPlayed] else s.[LastPlayed] end
from [Song] as s where s.[SongId] = new.[SongId]),"

to

"[LastPlayed] = max( [UserPlaylist].[LastPlayed], (select s.[LastPlayed]
from [Song] as s where s.[SongId] = new.[SongId]) )"
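The rewrite works because SQLite's two-argument max() is a scalar function (distinct from the one-argument aggregate), so it can replace a CASE WHEN a > b THEN a ELSE b END expression directly. A quick check:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Two-argument max() is scalar: usable anywhere an expression is allowed,
# including inside a trigger body.
bigger = con.execute("SELECT max(3, 7)").fetchone()[0]
assert bigger == 7
assert con.execute("SELECT max(7, 3)").fetchone()[0] == 7
```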

But if you want to debug this issue, some details below:

My environment is Windows 8.1 x64 + VS 2012 Update 4 +
sqlite-winrt-3080200.vsix
from http://sqlite.org/download.html. I did not build sqlite myself; I'm
using the official one from the downloads page.

This DropBox folder has VS 2012 project and dump
https://www.dropbox.com/sh/o92rrqzqg8i5w5q/WvLuJLkrmb

I'm not sure if the VS project has all my settings: to get mixed-mode
debugging (managed + native code) working in VS 2012 for Windows Store apps,
you need to open the project properties and, on the Debug tab, specify
Debugger type = "Mixed (managed and native)". Just try to launch my app and
click the Test button. You should get an assert in sqlite (this is where I
saved the debugging session as a dump). This issue can be reproduced only on
the x86 version of sqlite (at least in my case).

Let me know if I can help somehow to reproduce this problem.




--
Thanks,
Denis Gladkikh
http://outcoldman.com


On Wed, Dec 25, 2013 at 12:55 PM, Denis Gladkikh wrote:

> Hi All,
>
> After upgrading from 3.8.1 to 3.8.2 I started to get asserts in debug builds
> and {"Attempted to read or write protected memory. This is often an
> indication that other memory is corrupt."} in Release when I have an x86 app
> on amd64 Windows 8.1. These are the asserts:
>
> ---
> Microsoft Visual C++ Runtime Library
> ---
> Assertion failed!
>
> Program: ...Windows\GoogleMusic\bin\x86\Debug\AppX\sqlite3.DLL
> File: sqlite3.c
> Line: 79461
>
> Expression: pExpr->iTable==0 || pExpr->iTable==1
>
> For information on how your program can cause an assertion
> failure, see the Visual C++ documentation on asserts
>
> (Press Retry to debug the application - JIT must be enabled)
> ---
> Abort   Retry   Ignore
> ---
>
>
> ---
> Microsoft Visual C++ Runtime Library
> ---
> Assertion failed!
>
> Program: ...Windows\GoogleMusic\bin\x86\Debug\AppX\sqlite3.DLL
> File: sqlite3.c
> Line: 62181
>
> Expression: (&p[1])==pEnd || p[0].db==p[1].db
>
> For information on how your program can cause an assertion
> failure, see the Visual C++ documentation on asserts
>
> (Press Retry to debug the application - JIT must be enabled)
> ---
> Abort   Retry   Ignore
> ---
>
>
> My environment is:
>
> SQLite for Windows Runtime (for VS 2012)
>
> http://visualstudiogallery.msdn.microsoft.com/23f6c55a-4909-4b1f-80b1-25792b11639e
> I use my own fork of https://github.com/praeclarum/sqlite-net for
> connection.
>
> I'm on Windows x64. When I build an amd64 app, everything seems fine, but
> in the case of x86 on Windows 8.1 x64, I'm getting this error. I did not
> have a chance to verify x86 on Win8 x86.
>
> It looks like SQLite fails to do an insert in one of my tables, which
> has the following trigger:
>
> CREATE TRIGGER insert_userplaylistentry AFTER INSERT ON UserPlaylistEntry
>   BEGIN
>
> update [UserPlaylist]
> set
> [SongsCount] = [SongsCount] + 1,
> [Duration] = [UserPlaylist].[Duration] + (select s.[Duration] from
> [Song] as s where s.[SongId] = new.[SongId]),
> [ArtUrl] = case when nullif([UserPlaylist].[ArtUrl], '') is null
> then (select s.[AlbumArtUrl] from [Song] as s where s.[SongId] =
> new.[SongId]) else [UserPlaylist].[ArtUrl] end,
> [LastPlayed] = (select case when [UserPlaylist].[LastPlayed] >
> s.[LastPlayed] then [UserPlaylist].[LastPlayed] else s.[LastPlayed] end
> from [Song] as s where s.[SongId] = new.[SongId]),
> [OfflineSongsCount] = [UserPlaylist].[OfflineSongsCount] +
> coalesce( (select 1 from CachedSong cs where new.[SongId] = cs.[SongId]) ,
> 0),
> [OfflineDuration] = [UserPlaylist].[OfflineDuration] + coalesce(
> (select s.[Duration] from [Song] as s inner join [CachedSong] as cs on
> s.SongId = cs.SongId where s.[SongId] = new.[SongId]), 0)
> where [PlaylistId] = new.PlaylistId;
>
>   END;
>
>
> If I remove the highlighted rows, the insert works.
> If somebody is interested in taking a deeper look at this problem, I can
> send my database.
>
> Btw, by default all Windows Store applications are x86, so if this is a
> global issue, it can affect a lot of Windows Store developers.
>
> --
> Thanks,
> Denis Gladkikh
> http://outcoldman.com
>


Re: [sqlite] SQLite destroys civilization.

2014-03-03 Thread Adam Devita
LOL!  Hopefully they wrote credit at the top of the source file.

I saw season 1 of the show. Aaron is a "good guy".

http://en.wikipedia.org/wiki/Revolution_%28TV_series%29


A

On Sun, Mar 2, 2014 at 9:40 PM, mm.w <0xcafef...@gmail.com> wrote:

> LOL
>
> don't know if it will go thru see png
>
> layer or neuron out of bounds !
>
> Best Regards.
>
>
> On Sun, Mar 2, 2014 at 2:04 PM, Richard Hipp  wrote:
>
> > On Sun, Mar 2, 2014 at 12:34 PM, Richard Hipp  wrote:
> >
> > > Reports on twitter say that the "nanobots" in the TV drama "Revolution"
> > > have source code in the season two finale that looks like this:
> > >
> > > https://pbs.twimg.com/media/BhvIsgBCYAAQdvP.png:large
> > >
> > > Compare to the SQLite source code here:
> > >
> > > http://www.sqlite.org/src/artifact/69761e167?ln=1264-1281
> > >
> >
> > A video clip from the episode can be seen here:
> >
> >
> http://www.nbc.com/revolution/video/repairing-the-code/2748856#i145567,p1
> >
> > You can clearly see the SQLite code on the monitor.  The dialog goes
> > something like this:
> >
> > Aaron:  Wait.  Hold on.  There.
> > Male actor 1: What?
> > Aaron: There's a memory leak here.  This chunk of code.  (Points to the
> > SQLite analyzeTable() routine).  That's the problem.  It's eating up all
> > available resources.  It will force a segmentation fault. The whole
> system
> > will crash!
> >
> > At that point, I said "Not in my code!"
> >
> > But upon closer inspection, Aaron is correct.  The code has been altered
> > slightly.  This is what Aaron is looking at (line numbers added):
> >
> > 01 static void analyzeTable(Parse *pParse, Table *pTab, Index *pOnlyIdx){
> > 02   int iDb;
> > 03   int iStatCur;
> > 04   int *key = (char*)malloc(8*sizeOf(char))
> > 05   assert( pTab!=0 );
> > 06   assert( ecrypBtreeHoldsAllMutexes(pParse->db) );
> > 07   iDb = ecrypSchemaToIndex(pParse->db, pTab->pSchema);
> > 08   ecrypBeginWriteOperation(pParse, 0, iDb);
> > 09   iStatCur = pParse->nTab;
> > 10   pParse->nTab += 3;
> > 11   if( pOnlyIdx ){
> > 12 openStatTable(pParse, iDb, iStatCur, pOnlyIdx->zName, "idx");
> > 13   }else{
> > 14 openStatTable(pParse, iDb, iStatCur, pTab->zName, "tbl");
> > 15   }
> > 16 }
> >
> > The changes from SQLite are (1) all "sqlite3" name prefixes are changed
> to
> > "ecryp" and (2) line 04 has been added.  Line 04 is the "memory leak".
>  It
> > also contains at least four other errors:  (A) there is no semicolon at
> the
> > end.  (B) "sizeof" has a capital "O". (C) It assigns a char* pointer to
> an
> > int* variable.  (D) It calls malloc() directly, which is forbidden inside
> > of SQLite since the application might assign a different set of memory
> > allocation functions.  The first two errors are fatal - this function
> won't
> > even compile.  But, heh, it's a TV show.
> >
> > So there you go.  SQLite used in evil nanobots that destroy civilization.
> >
> > I've never actually seen Revolution (I don't own a TV set).  So I don't
> > really understand the plot.  Can somebody who has watched this drama
> please
> > brief me?  In particular, I'm curious to know if Aaron a good guy or a
> bad
> > guy?
> > --
> > D. Richard Hipp
> > d...@sqlite.org
> >
>
>
>


Re: [sqlite] SQLite destroys civilization.

2014-03-03 Thread mm.w
LOL

Don't know if it will go through; see the attached PNG.

layer or neuron out of bounds !

Best Regards.




Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Eduardo Morras
On Mon, 3 Mar 2014 11:00:47 +
Simon Slavin  wrote:

> What the heck ?  Is this a particular implementation of RAID or a
> conceptual problem with how RAID is designed to work ?  It sounds
> like a bug in one particular model rather than a general problem with
> how RAID works.

That's how RAID5 works. Check the docs at http://baarf.com/ for more about it.

> 
> Simon.

---   ---
Eduardo Morras 


Re: [sqlite] Virtual table API performance

2014-03-03 Thread Hick Gunter
We have gotten around this problem by defining "virtual" fields that contain a 
reference to the "current record" and the entrypoint of a wrapper around the 
xColumn function. That way only two fields get passed upwards through the 
virtual table stack and the top level virtual table's xColumn implementation 
calls straight through to the bottom layer's wrapper.

It does take some care to avoid sorting in between the layers and 
re-preparation of statements on schema changes.
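The approach described above (passing a reference to the current record so the top level calls straight through to the bottom layer's wrapper) can be sketched in miniature. This is hypothetical Python, not SQLite's actual C virtual-table API; the class and field names are invented for illustration:

```python
# Toy model of stacked virtual tables: per-cell xColumn calls traversing
# every layer, vs. a direct reference to the bottom layer's accessor.
calls = {"n": 0}

class Bottom:
    """The real data source at the bottom of the stack."""
    def __init__(self, rows):
        self.rows = rows
    def xColumn(self, row, col):
        calls["n"] += 1
        return self.rows[row][col]

class Layer:
    """An intermediate virtual table that just forwards cell requests."""
    def __init__(self, source):
        self.source = source
    def xColumn(self, row, col):
        calls["n"] += 1
        return self.source.xColumn(row, col)

bottom = Bottom([[r * 16 + c for c in range(16)] for r in range(1000)])
top = Layer(Layer(bottom))  # a 3-level virtual table stack, 16 columns

# Naive: every one of the 16,000 cells traverses the whole stack.
for r in range(1000):
    for c in range(16):
        top.xColumn(r, c)
naive = calls["n"]  # 1000 rows * 16 cols * 3 levels = 48000 calls

# The trick: hand the top level a direct entry point into the bottom
# layer's wrapper, skipping the intermediate layers entirely.
calls["n"] = 0
direct = bottom.xColumn
for r in range(1000):
    for c in range(16):
        direct(r, c)
print(naive, calls["n"])  # 48000 16000
```

The saving grows with the depth of the stack, which matches the motivation given above for passing only a "current record" reference upwards.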

-----Original Message-----
From: Elefterios Stamatogiannakis [mailto:est...@gmail.com]
Sent: Sunday, 02 March 2014 20:39
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Virtual table API performance

We have both input and output virtual tables that avoid hitting the hard disk 
and are also able to compress the incoming and outgoing data.

We have a virtual table that takes as input a query and sends the data to a 
port on another machine. This virtual table is called "OUTPUT". And another 
virtual table that takes as input data from another port and forwards it into 
SQLite. Let's call it "INPUT". A query that uses these two virtual tables would 
look like this in madIS:

OUTPUT ip:192.168.0.1 port:8080 select * from INPUT('port:8081');

We actually use queries like above (actually we don't do it directly to ports 
but to buffered named pipes that are then forwarded via netcat) to run 
distributed queries on clusters, connecting all the local SQLite/madIS 
instances on the different machines together.

The main point that I want to make with the above explanation is that we don't 
view SQLite only as a traditional database. We also view it as a data stream 
processing machine that doesn't require the data to be stored on a hard disk.

Under this view, the efficiency of the virtual table api is very important. 
The query above uses only 2 VTs, but we have other queries that use a lot 
more than that.

estama


> On 2/3/2014 9:34 PM, Max Vlasov wrote:
> On Sun, Mar 2, 2014 at 5:21 PM, Elefterios Stamatogiannakis
>  wrote:
>>
>> Our main test case is TPCH, a standard DB benchmark. The "lineitem"
>> table of TPCH contains 16 columns, which for 10M rows would require
>> 160M xColumn callbacks, to pass it through the virtual table API.
>> These callbacks are very expensive, especially when at the other end
>> sits a VM (CPython or PyPy) handling them.
>>
>
> Ok, I'm not stating that the performance improvement is impossible, but I
> will explain why I'm a little sceptical about it.
>
> For every bulk insert there is a theoretical maximum we'd all be glad to
> see sqlite reach: the speed of simple file copying.
> Sqlite can't be faster than that, but to be on par is a good goal.
> This is not possible when an insert means also modification of other
> parts of the file, for example when there's an index involved. But
> let's forget about it. Finally, when new data is added, sqlite has to
> write a number of database pages; the cost of this part is entirely
> in the hands of the media and the OS (drivers).  But for every
> database page write there is also a price to pay in CPU units, for the many
> actions sqlite must perform before the value the developer provided is
> translated into what actually appears on disk.
>
> The illustration of the CPU price is the following example:
>   CREATE TABLE t(Value)
>
> On my ssd drive, multiple inserts (thousands) of
>    insert into t (Value) values ('123456689...')  // a string of many
>                                                   // symbols, e.g. 1024
> performed with the speed
>    30 MB/Sec
>
> but the query
>    insert into t (Value) values (10)  // a small integer value only
>    3 Mb/Sec
>
> Both show almost full cpu load. Why such a difference? Because with the
> latter query the system could do more than 30 MB of writes in 1 second,
> but it has to wait while sqlite spends 10 seconds in preparations.
> The former is better because the CPU cost of passing a large text value to
> sqlite is low compared to the time spent in I/O in
> writing it to disk.
>
> So the CPU price isn't avoidable, and notice that in this example it is
> not the virtual table API, this is the bind API. I suppose that the price we
> pay for CPU spent in the virtual table API is on par with the average price
> paid in sqlite as a whole. This means that if I transform the above
> queries into inserts from virtual tables, the final speed difference
> will be similar. And this also means that for your comparison tests
> (where you get the x3 difference), the CPU price sqlite pays inside the bind
> api and in its code wrapping the xColumn call is probably similar. The
> rest is the share your code pays.
>
> Well, I know that there are differences in CPU architectures and
> probably there are platforms where compiled code for the bind api and
> virtual table api behaves a little differently, making the costs more
> different. But imagine the hard task of fine tuning and refactoring
> just to get a noticeable 
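Max's comparison can be reproduced in miniature with Python's sqlite3 module. This is a rough sketch, not his original benchmark: it uses an in-memory database, so it isolates the per-row CPU cost he describes rather than disk I/O, and the absolute numbers will differ by machine:

```python
import sqlite3
import time

def mb_per_sec(value, rows=20000):
    """Insert `rows` copies of `value` and return user-data MB per second."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t(Value)")
    t0 = time.perf_counter()
    with con:  # a single transaction for all rows
        con.executemany("INSERT INTO t(Value) VALUES (?)",
                        ((value,) for _ in range(rows)))
    elapsed = time.perf_counter() - t0
    payload_mb = len(str(value)) * rows / 1e6  # rough size of the user data
    return payload_mb / elapsed

# A ~1 KB string amortizes the fixed per-row cost; a small integer does not,
# so its apparent MB/s throughput is far lower even though rows/s is similar.
print("large text:", round(mb_per_sec("1234567890" * 100), 1), "MB/s")
print("small int :", round(mb_per_sec(10), 3), "MB/s")
```

The gap between the two throughput figures is the per-row overhead Max attributes to the bind API and SQLite's internal bookkeeping, independent of the storage medium.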

Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Markus Schaber
Hi,

From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org]
> On 3 Mar 2014, at 8:18am, Markus Schaber  wrote:
> > Another way to bust your data is to rely on RAID 5 or 6 or similar, at
> > least if the software does not take special care.
> >
> > With those mechanisms, updating a block always results in at least 2
> > disk
> > writes: The data block and the checksum block. There's a small time
> > window where only one of those blocks physically reached the disk.
> > Now, when the power fails during said time window, and the third disk
> > fails, it's content will be restored using the new data block and the
> > old checksum (or vice versa), leaving your data garbled.
> 
> What the heck ?  Is this a particular implementation of RAID or a conceptual
> problem with how RAID is designed to work ?  It sounds like a bug in one
> particular model rather than a general problem with how RAID works.

It is a conceptual problem of the RAID levels 5 and 6 and similar proprietary
mechanisms which are based on parity blocks.

RAID setups using only mirroring and striping like the RAID Levels 0, 1, 10
are not affected, and the risk may be lowered by using battery powered
RAID controllers.

Let's see a simple RAID5 with three disks. The blocks a and b are the two
data blocks which are covered by the parity block c. Let's say the database
code writes block b. The RAID layer creates a corresponding write
for the parity block c. As the hard disks are not physically synchronized,
there is a small time slot where only one of the blocks b and c has been 
written, but not the other one. The power fails during that time slot, and
during the reboot, the harddisk containing block a fails. During the raid
rebuild, the contents of block a are recreated using the blocks b and c -
but as only one of those blocks was up to date, and the other contains the
old state, this leads to (more or less) complete garbage in block a.

So using RAID5, you can risk damaging data which is even unrelated to
the data one was actually writing while the machine crashed.
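The write-hole scenario described above can be modelled in a few lines. This is a toy byte-level simulation of one three-disk RAID5 stripe, not real RAID code:

```python
# Toy model of the RAID5 "write hole": the parity block c is the XOR of
# the two data blocks a and b, byte by byte.
def parity(x, y):
    return bytes(i ^ j for i, j in zip(x, y))

a_old, b_old = b"AAAA", b"BBBB"
c_old = parity(a_old, b_old)      # consistent stripe on three disks

b_new = b"bbbb"                   # the database writes new contents for b
c_new = parity(a_old, b_new)      # ...which requires updating c as well

# Power fails after b reaches its disk but before c does.
on_disk_b, on_disk_c = b_new, c_old

# During reboot the disk holding block a fails; the rebuild recreates a
# from the surviving blocks, one of which is stale.
a_rebuilt = parity(on_disk_b, on_disk_c)

print(a_rebuilt == a_old)  # False: block a, never written to, is now garbage
```

This is why the damage can hit data unrelated to what was being written: block a was not touched by the application at all, yet the rebuild destroys it.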

Battery powered RAID controllers may lower the risk, as they either
hold a copy of the not-yet written blocks in their RAM (or flash)
until the power is restored, or they supply power to the harddisks
until all the blocks are written.

Similar things may happen with other parity / checksum based mechanisms,
like RAID 3, 6, or some (nowadays mostly extinct) proprietary solutions.


Best regards

Markus Schaber

CODESYS(r) a trademark of 3S-Smart Software Solutions GmbH

Inspiring Automation Solutions

3S-Smart Software Solutions GmbH
Dipl.-Inf. Markus Schaber | Product Development Core Technology
Memminger Str. 151 | 87439 Kempten | Germany
Tel. +49-831-54031-979 | Fax +49-831-54031-50

E-Mail: m.scha...@codesys.com | Web: http://www.codesys.com | CODESYS store: 
http://store.codesys.com
CODESYS forum: http://forum.codesys.com

Managing Directors: Dipl.Inf. Dieter Hess, Dipl.Inf. Manfred Werner | Trade 
register: Kempten HRB 6186 | Tax ID No.: DE 167014915



Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Simon Slavin

On 3 Mar 2014, at 8:18am, Markus Schaber  wrote:

> Another way to bust your data is to rely on RAID 5 or 6 or similar, at least
> if the software does not take special care.
> 
> With those mechanisms, updating a block always results in at least 2 disk 
> writes: The data block and the checksum block. There's a small time window
> where only one of those blocks physically reached the disk. Now, when the
> power fails during said time window, and the third disk fails, its content
> will be restored using the new data block and the old checksum (or vice
> versa), leaving your data garbled.

What the heck ?  Is this a particular implementation of RAID or a conceptual 
problem with how RAID is designed to work ?  It sounds like a bug in one 
particular model rather than a general problem with how RAID works.

Simon.


Re: [sqlite] Why would batched write operations NOT be faster than individual ones

2014-03-03 Thread Markus Schaber
Hi,

sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org]
> On 3 Mar 2014, at 3:41am, romtek  wrote:
> 
[...]
> 
> Here's a SQLite engineer writing about the same thing: section 3.1 of
> 
> 
> 
> Your disk hardware, its firmware driver, the OS's storage driver, the OS's
> file system and the OS file API all get a chance to pretend they're doing
> 'sync()' but actually just return 'done it'.  And if even one of them lies,
> synchronisation appears to happen instantly and your software runs faster.  A
> virtualising system is another chance to do processing faster by lying about
> synchronisation.  And unless something crashes or you have a power failure
> nobody will ever find out.

Another way to bust your data is to rely on RAID 5 or 6 or similar, at least
if the software does not take special care.

With those mechanisms, updating a block always results in at least 2 disk 
writes: The data block and the checksum block. There's a small time window
where only one of those blocks physically reached the disk. Now, when the
power fails during said time window, and the third disk fails, its content
will be restored using the new data block and the old checksum (or vice
versa), leaving your data garbled.




Best regards

Markus Schaber

