Hello,

On 3/6/2006 7:12 PM, Jason Balicki wrote:
I've searched a bit, but I came up empty handed on this.

I'm going to simplify this scenario a bit because the
reality is more complex than this, but hopefully this
gets my point across:

I have a client trying to backup a MS SQL database.  We
use RunBefore and RunAfter commands to stop and start the
DB.  Usually this works fine, but if we run out of space
on a tape then Bacula can't get any further until someone
mounts another tape.  However, because the application
is critical, someone usually will manually start the
services before the job can finish.  I can't fix this
behavior.

I figured that I could solve this problem by spooling,
and I can -- sort of.  The job gets spooled but the
services are still down until the job can be written
to tape.  This is causing a bit of frustration on the
client end because they're tired of having to manually
start services, and the client doesn't want to have
to check to see if the job has finished spooling.  They
barely understand the concept.
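For reference, the setup described might look roughly like this in the Director's Job resource; the names and batch-file paths are made up, but ClientRunBeforeJob, ClientRunAfterJob and SpoolData are the standard directives:

```
Job {
  Name = "mssql-backup"
  Client = dbhost-fd
  # Stop/start the database around the backup (paths are examples)
  ClientRunBeforeJob = "c:/bacula/stop-mssql.bat"
  ClientRunAfterJob  = "c:/bacula/start-mssql.bat"
  # Spool to disk first, write to tape afterwards
  SpoolData = yes
}
```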

A solution to this problem would be a "Client Run After Spool"
command where I could restart the services after the
spool had finished.  Theoretically the FD is pretty much
done at that point, correct?

Right, a "Client Run After Spool" directive sounds like an interesting idea.

 We have a dedicated HD
we use for spooling, so all jobs fit there with no
problem and we should never have to despool during a
job.  But even if we did, such a directive wouldn't
run until the job was completely done spooling, yet
before it was written to tape, letting the application
become usable well before the job is done.

Is there a way to make this happen?  Or is this an
uncommon scenario?  If so, does anyone have any
other suggestions on getting around this?

Difficult.

I see several approaches:
- Wait until job migration is working in a stable version of Bacula.
- Convince Kern or anybody else to implement the directive you miss.
- Work around the problem.

The latter could be done like this: create a RunBeforeJob script
which spawns a background process that watches the job status and,
once it finds the job is done spooling, triggers the database start
script on your database servers.

I see several difficulties there, but I'm also sure one could get this working...
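A minimal sketch of such a watcher, assuming a Unix-side helper with access to bconsole; the job name, host, service name and the exact "Despooling" status text are assumptions, since Director status output varies between Bacula versions:

```shell
#!/bin/sh
# Sketch of the background watcher described above -- all names
# (job, host, service) are examples, and the "Despooling" status
# text is an assumption; check what your Director actually prints.

# spooling_done reads "status dir" output on stdin and succeeds
# once the named job is despooling, or no longer listed at all.
spooling_done() {
    job="$1"
    status=$(cat)
    # The SD reports despooling once the FD has delivered everything
    if echo "$status" | grep "$job" | grep -q "Despooling"; then
        return 0
    fi
    # Job gone from the running list: finished or failed -- restart anyway
    if ! echo "$status" | grep -q "$job"; then
        return 0
    fi
    return 1
}

# The RunBeforeJob script would spawn something like this in the
# background, after stopping the database:
#
#   while ! echo "status dir" | bconsole | spooling_done mssql-backup; do
#       sleep 30
#   done
#   ssh dbhost "net start MSSQLSERVER"
```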

Arno

Thanks,

--J(K)

PS:  in case anyone cares:  some of the complexities
are that there are multiple MSDE SQL DB boxes, each
with different people responsible for them (and are
*not* trained admins) and each having their own silly
reasons to want to play power games with the completely
separate person responsible for the backups.  It's idiotic
office politics, but I don't work there so I can't do
anything about it. :(



-------------------------------------------------------
This SF.Net email is sponsored by xPML, a groundbreaking scripting language
that extends applications into web and mobile media. Attend the live webcast
and join the prime developer group breaking into this new coding territory!
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=110944&bid=241720&dat=121642
_______________________________________________
Bacula-users mailing list
Bacula-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/bacula-users


--
IT-Service Lehmann                    [EMAIL PROTECTED]
Arno Lehmann                  http://www.its-lehmann.de


