1) The dataset-creating service seems rather awkwardly named.
I'd suggest dropping the "varshare" aspect of the name since it
seems an implementation detail. Perhaps just
svc:/application/pkg/repositories-setup?
Yep. I checked for precedent, and see we have:
svc:/network/npiv_config
svc:/system/boot-config
svc:/network/routing-setup
svc:/network/socket-config
svc:/system/install-setup
svc:/system/zones-install
so svc:/application/pkg/repositories-setup seems fine.
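For reference, the naming change would only touch the name attribute in the service manifest; a minimal fragment might look like this (everything but the name is elided and hypothetical):

```xml
<!-- Hypothetical manifest fragment showing only where the proposed
     FMRI would land; instance, dependency and method details elided. -->
<service_bundle type="manifest" name="pkg-repositories-setup">
  <service name="application/pkg/repositories-setup"
           type="service" version="1">
    <!-- create_default_instance, dependencies, exec methods, etc. -->
  </service>
</service_bundle>
```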
Another option is to view it as a filesystem activity, and use something
like:
svc:/system/filesystem/pkg-repositories
which, despite this usage being fairly application-specific, does suggest
something filesystem-related (note that no pkg(5) repositories actually get
created here; it's just the dataset creation).
I think either works although I'd prefer to see it under
/application/pkg since it is pkg(5) specific.
2) The "crontab_period" property is rather novel but supplying
a crontab(4) fragment seems, again, awkward. I realize breaking
this into five separate properties is probably overkill but
still, I'm not crazy about the crontab fragment approach
(although I'm not sure what sort of syntactic sugar might make
sense especially when we're talking about smf(5) properties).
Yes, I understand. I've been here before with the zfs-auto-snapshot service:
everything feels awkward when writing an SMF service to perform some task
periodically, unless we go all-out and implement our own daemon process or,
much better still, meld SMF with cron, which has been suggested before; but
doing either of those would delay this putback pretty significantly.
In the auto-snapshot implementation, we used:
interval : minutes | hours | days | months
period : how many of those units to wait between snapshots
offset : the offset into the time period at which the snapshot fires
The trouble was dealing with user-specified values that didn't neatly break
down into schedules describable with crontab(4), causing us to write a
crontab entry that could miss snapshots at the edges of snapshot intervals.
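To make that edge problem concrete, here's a small sketch (the 7-hour period is a made-up example): a user asking for a snapshot every 7 hours would get, at best, a crontab hour field of "0,7,14,21", and the wrap-around at midnight silently shrinks one interval to 3 hours:

```shell
#!/bin/sh
# A 7-hour period approximated by the crontab(4) hour field "0,7,14,21".
# The gap between the last firing of one day (21:00) and the first firing
# of the next (00:00) is only 3 hours, not the requested 7.
prev=21   # last firing hour of the previous day
for h in 0 7 14 21; do
    gap=$(( (h - prev + 24) % 24 ))
    echo "fires at ${h}:00, ${gap}h after the previous run"
    prev=$h
done
```

Running it prints a 3-hour gap for the midnight firing and 7-hour gaps for the rest, which is exactly the kind of drift we hit.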
Likewise, when the system was turned off, we'd miss snapshots. Yes, the data
wasn't changing, but having monthly snapshots on a laptop scheduled to fire
at midnight turned out to be bad, unless you're a night-owl. (The
Python-based auto-snapshot rewrite ended up using its own daemon process
for this reason.)
By exposing the underlying implementation ("Hey, we're using cron!") we avoid
some of those problems, admittedly passing them on to the user.
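For concreteness, the property value would just be the five leading crontab(4) fields; something like this (the schedule itself is only an example, and the layout comment is mine):

```
# minute  hour  day-of-month  month  day-of-week
  30      3     *             *      *            # daily at 03:30
```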
An alternate implementation is to use at(1) to schedule a job on
service-start, then reschedule it each time the job completes (or fails)
which allows us more flexibility in the time-spec.
Unfortunately, the interfaces to add and remove at-jobs, and to identify
which at-jobs were created by the service, require us to dig around in
/var/spool/cron/atjobs, which really sucks - I'd rather not have to go that
route.
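For the record, the reschedule-on-completion pattern would look roughly like this; the function name, its output, and the 30-minute interval are all made-up placeholders, not anything in the webrev:

```shell
#!/bin/sh
# Sketch of the at(1)-based alternative: do the work, then re-arm
# the timer once the job completes (or fails).
refresh_pkg_dataset() {
    echo "checking/creating the pkg(5) dataset"   # stand-in for the real work
}

refresh_pkg_dataset

# Re-arm ourselves.  The awkward part is what this sketch omits: there is
# no supported interface to list or cancel only *our* at-jobs short of
# grovelling through /var/spool/cron/atjobs.
if command -v at >/dev/null 2>&1; then
    echo "sh $0" | at now + 30 minutes
fi
```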
I'm still open to suggestions, but think that exposing the crontab to the
user is the least worst solution here :-/
I don't have a good alternate suggestion here, and certainly crontab(1)
is well understood from a sysadmin perspective, so I'll withdraw my
concern. I do appreciate the data from the experiences with
zfs-auto-snapshot - it is a similar sort of service we're talking about
here.
I'll try to finish reviewing the actual webrev as soon as I finish the two
others I'm going through now.
_______________________________________________
pkg-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/pkg-discuss