On Fri, 3 Aug 2007, Damon Atkins wrote:

[ ... ]
> UFS forcedirectio and VxFS closesync ensure that, whatever happens, your files 
> will always exist if the program completes. Therefore, with (synchronous) Disk 
> Replication, the file exists at the other site at its finished size. When you 
> introduce DR with Disk Replication, it generally means you cannot afford to lose 
> any saved data. UFS forcedirectio has a larger performance hit than VxFS 
> closesync.

Hmm, not quite.

forcedirectio, at least on UFS, only takes effect when the I/O operations meet 
certain criteria. These are explained in directio(3C):

      DIRECTIO_ON     The system behaves as though the application
                      is  not  going to reuse the file data in the
                      near future. In other words, the  file  data
                      is not cached in the system's memory pages.

                      When  possible,  data  is  read  or  written
                      directly  between  the  application's memory
                      and the device when  the  data  is  accessed
                      with  read(2)  and write(2) operations. When
                      such transfers are not possible, the  system
                      switches  back  to the default behavior, but
                      just for that  operation.  In  general,  the
                      transfer  is possible when the application's
                      buffer is  aligned  on  a  two-byte  (short)
                      boundary,  the  offset into the file is on a
                      device sector boundary, and the size of  the
                      operation is a multiple of device sectors.

                      This advisory  is  ignored  while  the  file
                      associated   with   fildes  is  mapped  (see
                      mmap(2)).

So it all depends on what exactly your workload looks like. If you're 
doing writes that aren't a multiple of the sector size, writes to nonaligned 
offsets, and/or mmap access, directio is not done for those operations, the 
advisory AND (!) the mount option notwithstanding.

As far as the hot backup consistency goes:

Do a "lockfs -w", then start the BCV copy, then (once that started) do a 
"lockfs -u".
A writelocked filesystem is "clean"; it does not need to be fsck'ed before 
it can be mounted.

The disadvantage is that write ops to the fs in question will block while 
the lockfs -w is active. But then, you don't need to wait until the BCV copy 
has finished - you only need a consistent state to start with, and can 
unlock as soon as the copy has started.
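
A minimal sketch of that sequence, assuming the filesystem is mounted at 
/export/data and that start_bcv_copy stands in for whatever command kicks 
off your array's BCV split/establish (both names are placeholders):

```sh
# Write-lock the filesystem: in-flight writes complete, new writes block,
# and the on-disk state is marked clean.
lockfs -w /export/data

# Kick off the BCV copy (placeholder for your array's command).
start_bcv_copy

# Unlock as soon as the copy has *started* - no need to wait for it to finish.
lockfs -u /export/data
```
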

Note that fssnap also writelocks temporarily. So if you have used 
UFS snapshots in the past, "lockfs -w";<BCV start>;"lockfs -u" is not 
going to cause you any more impact than that.

"lockfs -f" is only a best-effort flush for when you cannot writelock. It is 
no guarantee of consistency, because by the time the command returns, 
something else may already be writing again.


FrankH.


>
> Cheers
>
_______________________________________________
zfs-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
