I'm using dump to back up a filesystem of about 1.5 million files and 60GB (14 DVD+RW discs); it will certainly grow over time.

About once every 2 months a full (level 0) dump, once a week a level 1 dump, and every other day a level 2 dump.
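For reference, a schedule like that could be driven from cron roughly as sketched below. This is only an illustration: the filesystem path (/home), the times, and writing straight to /dev/dvd are assumptions, not taken from the original post. The flags are standard dump(8) options: -u updates /etc/dumpdates, -a auto-sizes the output, -L takes the snapshot, -f names the output device.

```shell
# Hypothetical crontab sketch -- paths, days, and times are illustrative.
# The level 0 (full) dump every ~2 months is run by hand; cron covers the rest.

# weekly level 1 dump, Sundays at 01:00
0 1 * * 0      /sbin/dump -1uanL -f /dev/dvd /home

# level 2 dump every other day (Mon/Wed/Fri here) at 01:00
0 1 * * 1,3,5  /sbin/dump -2uanL -f /dev/dvd /home
```

Because -u records each dump in /etc/dumpdates, every level 2 dump automatically captures only what changed since the last dump of a lower level.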

It works fine. I used this under NetBSD, and now under FreeBSD.

But I have 2 questions:

1) I'm using dump with an FFS snapshot (-L), so from dump's point of view NO changes to the filesystem (the snapshot) should occur. But the dump size often turns out to be a few percent above the calculated estimate. Last time, after "99.99% - finishing soon", I had to add a 14th DVD, even though dump calculated it would need 13. Why?


Anyway, the snapshot feature is excellent when making large backups! I can start a dump under screen one day and finish it another day if I run out of time, without the problem of too many changes being only partially dumped.
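The detached-session workflow described above might look like the following sketch. The session name and filesystem path are assumptions for illustration; screen's -dmS starts a named detached session, and -r reattaches to it later.

```shell
# Hedged sketch: run a snapshot-based full dump in a detached screen session.
# "backup" and /home are illustrative names, not from the original post.
screen -dmS backup /sbin/dump -0uanL -f /dev/dvd /home

# Reattach any time later (even the next day) to feed discs or check progress:
screen -r backup
```

Since -L dumps a point-in-time snapshot, the backup stays consistent no matter how long the session runs.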

2) Today, after making a full dump, I tried (just as a check) to do a full restore in another location.
I started

/sbin/restore -rvf /dev/dvd

and after extracting the directory lists it started to do "make node...",
but after some thousands of directories it stops, hogging 100% CPU, then goes forward, then hogs the CPU again, goes forward, and so on.

After about an hour (!) on a PII/400 machine it gets past that phase and starts to restore.

There was exactly the same behaviour in NetBSD with its restore, so it's not FreeBSD-specific.

It does restore the data in the end, but why does it need to work so hard before restoring files?
_______________________________________________
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"