> > You're probably thinking of this message from Linus in 2001:
> > "Dump was a stupid program in the first place. Leave it behind."
That was a comment Linus blurted out without much thought. He doesn't like dump because it operates on the block device instead of the mounted filesystem, but he never provided any hooks for programs to access the filesystem directly in kernel memory. His comment lives on as legacy; taken at face value, from his perspective the only guaranteed way to back up data is to unmount the filesystem and dump the unmounted block device.

If you ever do Linux From Scratch you'll compile the ext3 code, and you'll see that dump/restore is written by the same people who write the ext3 code. Nobody knows the filesystem better than they do. If you want to know you're getting *everything* (broken symlinks, hard links, character special devices, unusual permissions such as the sticky bit, and so on), then dump is the smartest thing to use. (In some cases something simple like tar or rsync will do, depending on the content you want to back up.)

Ideally you unmount the filesystem before dumping. Realistically, you'll have to accept that files some process never closes can't be backed up this way. If you run a MySQL daemon, for example, its database files are never closed, and you can't just copy them. The same is true of any other daemon that never shuts down; go read the daemon's manual for its recommended backup procedure. If you've got users who never log out and never close Cadence (or the application du jour), their files are at risk no matter what you do.

So if you can accept a certain amount of risk from files that are never closed, or that have not yet been written to disk, then dump is good. If you need 100% risk-free, your only option is to unmount occasionally.

_______________________________________________
bblisa mailing list
[email protected]
http://www.bblisa.org/mailman/listinfo/bblisa
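To make the dump workflow concrete: a full (level 0) dump and a matching restore look roughly like this. This is a sketch, not a drop-in script -- the device name, backup paths, and mount point are examples, and everything here needs root:

```shell
# Level 0 (full) dump of the filesystem on /dev/sda2 to a file.
# -u records the dump in /etc/dumpdates so later incrementals know the baseline.
dump -0uf /backup/home.dump /dev/sda2

# A later incremental at level 1 captures only what changed since the level 0:
dump -1uf /backup/home.1.dump /dev/sda2

# To restore: make a fresh filesystem, mount it, cd into it, then replay
# the full dump (and any incrementals, in level order):
#   cd /mnt/restore && restore -rf /backup/home.dump
```

Because dump reads the block device directly, this is exactly the case the post describes: safest on an unmounted (or quiescent) filesystem, best-effort on a live one.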
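For MySQL specifically, the "recommended backup procedure" the post alludes to is a logical dump taken through the running server rather than a copy of the live data files. A hedged sketch -- the right flags depend on your storage engine and version, so check the MySQL manual:

```shell
# Dump every database through the server. With InnoDB tables,
# --single-transaction takes a consistent snapshot without locking
# writers out for the duration of the dump.
mysqldump --all-databases --single-transaction > /backup/all-databases.sql
```

The resulting SQL file is an ordinary closed file, so it can then be picked up safely by dump, tar, or rsync.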
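For the "something simple like tar" case, here is a minimal, self-contained sketch showing that tar round-trips the awkward items the post lists -- a broken symlink, a hard link, and a sticky-bit directory. All paths are throwaway temp locations, and `-p` on extraction restores the archived permissions exactly:

```shell
# Build a small tree with the awkward cases.
src=$(mktemp -d)
dst=$(mktemp -d)
archive=$(mktemp)

mkdir "$src/spool"
chmod 1777 "$src/spool"                     # sticky bit, as on /tmp
ln -s /nonexistent/target "$src/dangling"   # deliberately broken symlink
echo hello > "$src/file"
ln "$src/file" "$src/hardlink"              # hard link to the same inode

# Archive the tree, then extract it elsewhere with permissions preserved.
tar -C "$src" -cf "$archive" .
tar -C "$dst" -xpf "$archive"

ls -l "$dst"
```

Whether this is "enough" depends on your content -- the point of the post stands: dump works at the filesystem level, while tar walks the mounted tree, so tar shares all the open-file caveats described above.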
