On May 30, 2010, at 7:33 PM, David Magda wrote:

Why not simply have a script that runs and checks for pool usage and then deletes snapshots with that attribute if necessary? Why do you need to have it built into ZFS?

That's certainly possible, and I suspect most people here could knock that out in about 20 minutes. The problem is that you get into all kinds of race conditions and manual bookkeeping. For instance, what happens if a disk-full condition occurs two minutes before the cron job that would have averted it was due to run? At what level do you trigger deletions so that you both 1) leave enough of a safety margin that disk-full conditions are unlikely, and 2) let the snapshots take advantage of as much storage as possible?
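
To make the race concrete, here's roughly the 20-minute version (an untested sketch; the 90% threshold, the pool name, and the "auto-" naming convention are placeholders I picked, not anything agreed on):

#!/usr/bin/env python3
"""Naive cron-driven cleanup: if the pool is too full, destroy the
oldest auto-created snapshots until it isn't."""
import subprocess

THRESHOLD = 90          # percent full before we start deleting
SNAP_PREFIX = "auto-"   # only touch snapshots the snapshot cron job created

def pool_capacity(pool):
    """Return the pool's used capacity as an integer percentage."""
    out = subprocess.check_output(
        ["zpool", "list", "-H", "-o", "capacity", pool], text=True)
    return int(out.strip().rstrip("%"))

def candidate_snapshots(pool):
    """Auto-created snapshots in the pool, oldest first."""
    out = subprocess.check_output(
        ["zfs", "list", "-H", "-t", "snapshot", "-o", "name",
         "-s", "creation", "-r", pool], text=True)
    return [s for s in out.splitlines()
            if s.split("@")[-1].startswith(SNAP_PREFIX)]

def main(pool="tank"):
    # Here's the race: between this run and the next one, nothing stops
    # the pool from filling completely.
    while pool_capacity(pool) >= THRESHOLD:
        snaps = candidate_snapshots(pool)
        if not snaps:
            break
        subprocess.check_call(["zfs", "destroy", snaps[0]])

if __name__ == "__main__":
    main()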

IMHO this shouldn't be built into the file system. You have one script to automatically generate snapshots, and another to monitor usage and delete old ones.

I'm not opposed to that approach at all, with the exception that I'd like the deletion script to be triggerable from the filesystem. And as I said in another post, I'd like that as a generic cross-filesystem feature. Maybe you'd like a UFS-based /tmp/log directory that a certain daemon fills with rotating logfiles, and you'd like a script to automatically delete the oldest one whenever the filesystem fills. Or maybe it'd be nice to get an email when /var is over 90% full? I can think of a lot of uses for that mechanism other than the specific case of destroying ZFS snapshots.
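
The userland version of that monitor is easy enough to sketch, for what it's worth. Everything below - the paths, the thresholds, the hook commands - is a made-up example rather than anything I actually run:

#!/usr/bin/env python3
"""Poor man's version of the generic "run a hook when a filesystem
fills" mechanism: poll a few mount points from cron and run a command
when one crosses its threshold."""
import os
import subprocess

# mount point -> (threshold percent, command to run when it's crossed)
WATCHES = {
    "/var":     (90, ["mail", "-s", "/var is over 90% full", "root"]),
    "/tmp/log": (95, ["sh", "-c", 'cd /tmp/log && rm -f "$(ls -t | tail -1)"']),
}

def percent_used(path):
    """Percentage of the filesystem containing 'path' that is in use."""
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize
    free = st.f_bavail * st.f_frsize
    return 100.0 * (total - free) / total

for path, (limit, command) in WATCHES.items():
    if percent_used(path) >= limit:
        # The stdin text only matters for the mail(1) case, which reads
        # its message body from standard input.
        subprocess.run(command, input=b"filesystem nearly full\n")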

Good summary in this post:

http://mail.opensolaris.org/pipermail/zfs-discuss/2006-May/002313.html

I disagree with the cons in that summary. It's made to sound like ZFS would be responsible for making tough decisions about what to keep and discard, when that could really be simplified to deleting the snapshot with the lowest integer value of a certain attribute and retrying failed writes until either the write succeeds or there are no more snapshots to delete. Then have a regular cron job - probably even the one that creates the snapshots in the first place - that assigns priorities appropriately. It would require a small amount of kernel code, but it could be very simple code with no decision-making responsibilities.
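
To illustrate, the userspace half could look something like this (a sketch only; the "local:keep_priority" property name is invented for the example, and reclaim_space() stands in for the policy I'm suggesting the kernel would invoke on a failed write):

#!/usr/bin/env python3
"""The snapshot cron job tags each snapshot with an integer priority in
a ZFS user property; cleanup just destroys the lowest-valued one."""
import subprocess
import time

PROP = "local:keep_priority"   # invented name; any module:property works
POOL = "tank"

def create_tagged_snapshot(priority):
    """Create a snapshot and record its deletion priority."""
    snap = f"{POOL}@auto-{time.strftime('%Y%m%d-%H%M%S')}"
    subprocess.check_call(["zfs", "snapshot", snap])
    subprocess.check_call(["zfs", "set", f"{PROP}={priority}", snap])
    return snap

def lowest_priority_snapshot():
    """Return the tagged snapshot with the smallest priority, or None."""
    out = subprocess.check_output(
        ["zfs", "get", "-H", "-r", "-t", "snapshot",
         "-o", "name,value", PROP, POOL], text=True)
    tagged = []
    for line in out.splitlines():
        name, value = line.split("\t")
        if value.isdigit():            # unset properties show up as "-"
            tagged.append((int(value), name))
    return min(tagged)[1] if tagged else None

def reclaim_space():
    """No decision-making: destroy the lowest-priority snapshot, if any."""
    victim = lowest_priority_snapshot()
    if victim is not None:
        subprocess.check_call(["zfs", "destroy", victim])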

Generally I don't think this is the "Unix Way". I don't want my kernel doing stuff behind my back.

But we have all sorts of daemons that do stuff behind our backs. I have a nightly Amanda daemon that decides what and how much to back up and when to overwrite old backups. The difference, as I see it, is that in the ZFS case the kernel would have only a very small amount of extra work to do. That kernel code would eliminate the need for a lot of potentially flaky userspace code.

There's already a useful snapshot-creation tool for OpenSolaris:

        http://src.opensolaris.org/source/xref/jds/zfs-snapshot/

That's actually the easy part. Using the scripts I downloaded from the link in the original post, I have that running in production on my system today.

There's also an auto-scrub script:

        http://blogs.sun.com/constantin/entry/new_opensolaris_zfs_auto_scrub



That just scrubs the pools, i.e. it verifies checksums and data consistency.
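
In other words the whole job boils down to something like this (a sketch; it only kicks off the scrubs and leaves checking the results to a later "zpool status"):

#!/usr/bin/env python3
"""Start a scrub on every imported pool, then report any pool that
already shows problems."""
import subprocess

pools = subprocess.check_output(
    ["zpool", "list", "-H", "-o", "name"], text=True).split()

for pool in pools:
    # "zpool scrub" returns immediately and the scrub runs in the
    # background; it errors out harmlessly if a scrub is already running.
    subprocess.call(["zpool", "scrub", pool])

# "zpool status -x" prints only pools with errors or known problems.
print(subprocess.check_output(["zpool", "status", "-x"], text=True))
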
--
Kirk Strauser



