Hello,

  I have three servers set up in the following way:
  • Two servers (server1 and server2) are connected using DRBD, which is used as the volume for shared storage. This shared storage is formatted with OCFS2. Server1 has RAID10 and server2 has RAID1 underneath DRBD.
  • All three servers access the shared storage through an iSCSI target on server1. Even when DRBD is in Primary/Primary mode, server1 and server2 have to go through iSCSI rather than access DRBD directly, because otherwise they don't see server3 (which accesses the volume through iSCSI) as a member of the cluster. I assume this is because the iSCSI target doesn't see changes that happen directly on the device below it, which is DRBD.
  • The shared storage holds about 1.7 million files, roughly 250 GB in total. A backup is made to a remote server every day using rsync (essentially the command sketched right after this list). It takes a long time (a few hours) because of the number of files, and it conflicts with the running web application, which reads from and writes to the shared volume while the backup is in progress.
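  For reference, the nightly job is more or less just a recursive rsync over SSH; the host name and paths here are placeholders:

      rsync -a --delete /mnt/ocfs2/ backuphost:/backups/ocfs2/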
  The hard drives on server2 are mostly idle; they only write a block of data from time to time. It would be more efficient if the backup ran from the OCFS2 copy on server2's DRBD device, mounted read-only. One option would be to create a snapshot of the LVM volume below DRBD and mount that read-only, roughly as sketched below.
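  Roughly what I have in mind on server2 (volume group and LV names are placeholders, and I don't know yet whether a plain read-only OCFS2 mount of such a snapshot is actually valid):

      # take a snapshot of the LVM volume that sits below DRBD
      lvcreate --snapshot --size 20G --name ocfs2_snap /dev/vg0/drbd_backing
      # mount the snapshot read-only and run the nightly rsync from it
      mount -t ocfs2 -o ro /dev/vg0/ocfs2_snap /mnt/ocfs2_snap
      # afterwards
      umount /mnt/ocfs2_snap
      lvremove -f /dev/vg0/ocfs2_snap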

  Here, finally, is my question. Do you think it is safe to mount OCFS2 this way, given that changes will keep coming from the other nodes, which will not be in a cluster with the backup process? A read-only mount, if possible, would prevent changes from propagating from the backup process to the live nodes, but I assume the file system might still appear corrupted on the backup side? Is there some kind of flush I can initiate on all three servers, or on the iSCSI target, before taking the snapshot on server2 and mounting OCFS2 read-only and/or in local mode? Something along the lines of the sketch below is what I imagine.
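  For example (I have no idea whether a plain sync on each node is enough, or even meaningful, for OCFS2; it is only meant to illustrate what I mean by a flush):

      # on server1, server2 and server3, right before taking the snapshot on server2:
      sync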

  Does anyone have a suggestion on how to make efficient backups of this many files? DRBD is a nice backup feature in itself, but any file system corruption is replicated to both drives. Having a separate copy of the file system is safer, even if it is not real-time.

  Has anyone tried using lsyncd with OCFS2? Lsyncd monitors changes in the file system using inotify and transfers only the changed directories to the backup server with rsync. I have read that inotify reacts with a delay on OCFS2 when changes happen on a remote node, but a delay of a few seconds is not a problem for me.
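  If that works, I imagine something as simple as this would do (host and paths are placeholders, and I haven't verified the exact invocation):

      lsyncd -rsync /mnt/ocfs2 backuphost:/backups/ocfs2/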

  Thanks,
  Nikola
 
