On 12 Feb 2011, at 4:11pm, Jim Wilcoxson wrote:

> I don't understand how you can do 360K commits per second if your system is
> actually doing "to the platter" writes on every commit.  Can someone clue me
> in?

My field of expertise, I'm afraid.  The answer is "Hard disks lie."

Almost all hard disks you can buy through ordinary retail channels these days have 
onboard write caching.  They accept a write command and tell the computer it has been 
executed, but queue up the actual writes so that they can be done in an 
efficient manner (e.g. when the disk has rotated to the right position).  They 
may not even be done in the order they were issued!  This technique is used to make the 
disk appear to work faster: in benchmarks, disks that do this 
say "I got it!" more quickly.  The computer has no reliable way to know when 
the data has actually reached the magnetic surface of the disk.
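To make that concrete, here is a minimal sketch (mine, not from the original discussion; 
the file name is arbitrary) of what "please really write this" looks like at the OS level.  
A plain write() only hands data to the operating system; fsync() asks the OS to push it to 
the drive, but with the drive's write cache enabled the drive may acknowledge the flush while 
the data is still only in its volatile cache.  SQLite itself uses fsync() and, on macOS, the 
F_FULLFSYNC fcntl, which additionally asks the drive to flush its own cache to the platter.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("commit.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const char record[] = "commit record";
    if (write(fd, record, sizeof record) != (ssize_t)sizeof record) {
        perror("write");   /* at this point the data may only be in OS buffers */
        return 1;
    }

    /* Flush OS buffers to the drive.  If the drive's write cache is on,
       the drive can acknowledge this before the data is on the platter. */
    if (fsync(fd) != 0) perror("fsync");

#ifdef F_FULLFSYNC
    /* macOS only: also ask the drive to flush its own cache. */
    if (fcntl(fd, F_FULLFSYNC) != 0) perror("F_FULLFSYNC");
#endif

    close(fd);
    return 0;
}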

Compensating for this behaviour is a big part of what SQLite's journaling does.  
The source code looks as if the programmers are paranoid, but really 
they're just taking every possibility into account.
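This is also why a figure like 360K commits per second should raise an eyebrow.  Here is a 
rough benchmark sketch (my own; the file name and row counts are arbitrary) using SQLite's 
PRAGMA synchronous: with synchronous=FULL, SQLite calls fsync() on every commit; with 
synchronous=OFF it never does.  If both runs report roughly the same rate, something between 
SQLite and the platter is caching the writes.

#define _POSIX_C_SOURCE 199309L
#include <sqlite3.h>
#include <stdio.h>
#include <time.h>

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Time n single-row transactions under the given synchronous setting. */
static double commits_per_sec(const char *sync_mode, int n) {
    sqlite3 *db;
    char pragma[64];

    sqlite3_open("bench.db", &db);
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x)", 0, 0, 0);
    snprintf(pragma, sizeof pragma, "PRAGMA synchronous=%s", sync_mode);
    sqlite3_exec(db, pragma, 0, 0, 0);

    double start = now_sec();
    for (int i = 0; i < n; i++) {
        /* Each INSERT outside an explicit transaction is its own commit. */
        sqlite3_exec(db, "INSERT INTO t VALUES(1)", 0, 0, 0);
    }
    double elapsed = now_sec() - start;
    sqlite3_close(db);
    return elapsed > 0 ? n / elapsed : 0;
}

int main(void) {
    printf("synchronous=FULL: %.0f commits/sec\n", commits_per_sec("FULL", 500));
    printf("synchronous=OFF:  %.0f commits/sec\n", commits_per_sec("OFF", 5000));
    return 0;
}

Compile with something like cc bench.c -lsqlite3.  On a spinning disk that honestly flushes 
its cache, the FULL figure is typically only a few dozen commits per second, bounded by the 
platter's rotation; numbers in the hundreds of thousands mean the acknowledgements are coming 
from a cache somewhere.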

Many (though no longer all!) disks can be set not to do this (they use 
write-through caching instead) simply by moving a jumper or two.  But if you 
did this on a production computer -- any computer used for mundane daily life 
like writing Word documents -- you would soon put the jumper back, because the drop in 
speed is really noticeable.  I've done it in a demonstration and you can hear 
the groans.  The systems where this is normally done are mission-critical servers, 
where losing even an instant of data in a power cut would be very costly.  The 
canonical example is a diagnostic logging device in a dangerous 
situation.  Consider, for example, the 'Black Box' device in an airplane.

Simon.
