On 01/19/15 14:10, Currell Berry wrote:
> I infer from your response that soft updates possess:
> 
> 1. increased overhead over default FFS settings.
> 2. increased implementation complexity over default FFS settings.

for a "he stated" definition of "you infer", sure.

> Also, I infer that journaling and soft updates provide equivalent data 
> safety

Um. I think we have a terminology issue here with "data safety"...

> guarantees "in theory." Do they provide equivalent guarantees in 
> practice?

Since there are many journaled file systems on Linux, if you wish to get
down to real life, you will have to specify one.

But ...
Given that FFS+soft updates has been in development and production
longer than just about any currently used Linux file system ("of the
week" -- sorry, I just feel the urge to add those three words after
referencing Linux file systems), and that almost all the BSD file system
work goes into FFS rather than being split up among lots of options, I'd
put my money (and data) on FFS+softupdates.  But that's me.  I tend to
put my money where my mouth is -- I have no UPSs in use, and if it would
take longer to log in and halt a machine than to wait for an fsck, I
just whack the power button or yank the cord.

Keep in mind, what softupdates promises is /file system/ integrity.
Journaling promises much the same.  If the power goes out or the system
crashes mid-Big Data Write, the goal is to get the file system back into
a sane shape so the system can come back up and resume its tasks, NOT
that the 1.7TB of a 1.8TB write will be sitting on disk waiting for you,
or that your database is consistent.
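
To make that concrete: a successful write(2) only means the data
reached the buffer cache; metadata consistency after a crash says
nothing about whether your bytes made it to the platters.  A minimal
sketch (the path and contents are invented for illustration) of asking
for durability explicitly with fsync(2):

    #include <fcntl.h>
    #include <unistd.h>
    #include <err.h>

    int
    main(void)
    {
            int fd = open("/var/db/state",
                O_WRONLY | O_CREAT | O_TRUNC, 0644);

            if (fd == -1)
                    err(1, "open");
            if (write(fd, "important\n", 10) != 10)
                    err(1, "write");
            /*
             * Without this, a crash right now can lose the write even
             * on a file system that comes back "clean" after fsck or a
             * journal replay -- metadata consistency is not data
             * durability.  (And even fsync only asks; a lying drive
             * cache is another story.)
             */
            if (fsync(fd) == -1)
                    err(1, "fsync");
            close(fd);
            return 0;
    }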

It is entirely likely -- probable, in fact -- that you will find your
actively written file truncated to zero bytes.  Depending on your
application, this is probably a GOOD thing -- if you find a zero byte
file, that normally means something went wrong (or hasn't yet gone
right).  A 1.7TB file?  You have no idea if that's complete or not.

If you want true "data safety", you probably want some kind of
application transaction tracking BEYOND the file system.
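
One common application-level pattern -- not something softdep or a
journal gives you for free -- is to write the new contents to a
temporary file, fsync it, then rename(2) it over the old name.  rename
is atomic, so after a crash you see either the complete old file or the
complete new one, never 1.7TB of a 1.8TB write.  A sketch (file names
made up; strictly speaking you would also fsync the containing
directory):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    write_atomically(const char *path, const char *tmp,
        const void *buf, size_t len)
    {
            int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);

            if (fd == -1)
                    return -1;
            /* Get the complete new contents onto disk first... */
            if (write(fd, buf, len) != (ssize_t)len ||
                fsync(fd) == -1) {
                    close(fd);
                    return -1;
            }
            if (close(fd) == -1)
                    return -1;
            /* ...then atomically swap it into place. */
            return rename(tmp, path);
    }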

Nick.

> 
> Thank you,
> Currell
> 
> ------ Original Message ------
> From: "Alexandre Ratchov" <a...@caoua.org>
> To: currellbe...@gmail.com
> Cc: misc@openbsd.org
> Sent: 1/19/2015 4:44:59 AM
> Subject: Re: What are the disadvantages of soft updates?
> 
>>On Mon, Jan 19, 2015 at 03:59:34AM +0000, currellbe...@gmail.com wrote:
>>>  Hello,
>>>
>>> The FAQ[1] states that soft updates result in "a large performance
>>> increase in disk writing performance," and links to a resource[2]
>>> which claims that soft updates, in addition to being a performance
>>> enhancement, "can also maintain better disk consistency."  Resource 2
>>> links to several academic papers[3][4], which, while they are a bit
>>> above my level, contain discussions of how soft updates can increase
>>> performance and speed recovery on crash.
>>>
>>>  My question is: what are the downsides of soft updates?
>>
>>- softdep consumes more CPU in kernel mode, which hurts interactive
>>   programs on very slow machines. It has a reputation for
>>   consuming more memory.
>>
>>- the softdep code is more complex (likely to have more bugs).
>>
>>>  Also, does journaling provide a better data-safety guarantee?
>>
>>They are not the same. On OpenBSD, softdep makes certain operations
>>much faster while ensuring that upon power loss, all
>>inconsistencies can be automatically fixed by fsck on the next boot.
>>
>>Journaling would write data twice (first to the journal, then to
>>the filesystem) and would allow the last operations to be replayed
>>on the next boot, so there is no need to run fsck, which in turn
>>makes the system boot fast after a power loss.
>>
>>In theory, from a data safety point of view, they are equivalent.
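
To make the "write data twice" description above concrete, here is a
toy write-ahead journal in the same spirit.  It is only a sketch: real
file system journals live in the kernel, log metadata blocks, and
batch commits, and every name and the record format here are invented
for illustration.

    #include <sys/types.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <err.h>

    /*
     * First copy goes to the journal; second copy goes to the data
     * file.  After a crash, anything still sitting in the journal is
     * simply replayed on boot, so no full fsck pass is needed.
     */
    void
    journaled_write(int jfd, int datafd, off_t off, const void *buf,
        size_t len)
    {
            /* 1. Log the intent and flush it to disk. */
            if (write(jfd, &off, sizeof(off)) == -1 ||
                write(jfd, &len, sizeof(len)) == -1 ||
                write(jfd, buf, len) == -1 || fsync(jfd) == -1)
                    err(1, "journal");

            /* 2. Apply the change to the real file. */
            if (pwrite(datafd, buf, len, off) == -1 ||
                fsync(datafd) == -1)
                    err(1, "data");

            /* 3. Commit: the logged record is no longer needed. */
            if (ftruncate(jfd, 0) == -1 ||
                lseek(jfd, 0, SEEK_SET) == -1)
                    err(1, "reset journal");
    }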
