See my comments in between your reply.
On Thu, 2005-01-27 at 11:32, Raimund Jacob wrote:
Filip Sergeys wrote:
Hello!
> 2) dynamically build a command script that will be executed by dbmcli in
> the very last step. Start with bringing the database into admin mode and
> doing util_connect
This is what I wanted to do "interactively", so that the controlling
script notices errors when they happen. But creating that all-in-one
script file is a start.
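For illustration, a minimal sketch of that all-in-one approach (database name, DBM user/password and paths are placeholders; the recover commands themselves go into the middle, see the sketches further down):

#!/bin/sh
# Sketch only: drive the whole recovery from one generated dbm command file.
DB=MYDB
CMDFILE=/tmp/standby_recover.dbm

{
    echo "db_admin"                  # bring the instance into admin mode
    echo "util_connect"
    # ... recover_start / recover_replace / recover_cancel lines generated here ...
    echo "util_release"
    echo "db_offline"                # put the standby back to sleep
} > "$CMDFILE"

# -i feeds the command file to dbmcli in a single session
dbmcli -d "$DB" -u control,control -i "$CMDFILE" > /var/log/standby_recover.log 2>&1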
> 3) look for the counter file and extract the lognumber of the last
> applied log file
That's what I do by looking at the backup history. Or at least that was
the plan.
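As a sketch of the counter-file variant (the path and file format are invented here for illustration; the same number can of course be taken from the backup history instead):

# Hypothetical counter file holding the number of the last applied log backup.
COUNTER=/var/lib/maxdb/standby/last_applied_log
LAST=$(cat "$COUNTER" 2>/dev/null || echo 0)
echo "last applied log backup: $LAST"
# ... after a successful run the script would write the new number back:
# echo "$NEWLAST" > "$COUNTER"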
> 6) if the lognumber in the .arch file is equal to the last applied log
> number -> apply that log again (this is needed because then you are
> sure to have applied the last log PAGE, as already described in earlier
> mails). The first log you apply should always start with "recover_start
> ARCH LOG $lognr"
> 7) subsequent logfiles are applied with the command "recover_replace
> ARCH </path/to/logfiles/> $lognr"
Those two points I learned the hard way yesterday. Seems my guessing
was correct.
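Spelled out as a sketch, steps 6 and 7 could be generated roughly like this (continuing the command-file idea from the earlier sketch; the file naming is an assumption):

# These echo lines belong where the "... generated here ..." placeholder sits
# in the earlier sketch. $LAST is the last applied log number (counter file /
# backup history); assumes the log backups are named <lognr>.arch with
# fixed-width numbers so the shell glob sorts them in order.
ARCHDIR=/path/to/logfiles
echo "recover_start ARCH LOG $LAST"            # step 6: re-apply the last log
for f in "$ARCHDIR"/*.arch; do                 # step 7: the rest via recover_replace
    nr=$(basename "$f" .arch)
    [ "$nr" -gt "$LAST" ] || continue          # skip logs already applied
    echo "recover_replace ARCH $ARCHDIR/ $nr"
done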
> 9) end with recover_cancel
This is where my kernel crashes. But this probably (?) happens because
my LOG_SEGMENT_SIZE is too small. I added another log volume once and
did not increase it. Do I have to set this parameter on both the
exporting and the importing host?
Since both instances are failovers for each other, I keep them identical,
so I changed the parameter on both. I have not tested whether it is
required on the exporting machine.
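To compare the value on both hosts from the shell, something along these lines should do (the exact parameter command varies between MaxDB versions; param_directget is one variant, and database name/user are placeholders):

dbmcli -d MYDB -u control,control param_directget LOG_SEGMENT_SIZE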
> 10) util_release (don't forget this one)
Oh-kay :) I did not do that yet, but only because of the above.
> 11) bring the database back to sleep
Would it do any harm to leave it in admin mode?
I guess not; it was just a decision I made.
> I have been using this for over a year and it works perfectly. So I hope
> it can help you too.
Thank you very much. I'll look into it. Your messages make me hope that
it is possible :)
> If recover_replace fails at a certain point and you need to restart:
> first do a db_restartinfo, because the last file you
> imported is not the last file in the database (this has something to do
> with savepoints).
> You also have to start with recover_start again for the first log and
> then recover_replace for subsequent logs.
> So if the script fails, it needs manual intervention. I guess I can
> improve the script on that point, but I haven't had the time.
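As a rough sketch, that restart could be driven by a fresh command file along these lines (log numbers 4715/4716 are placeholders for the log to resume with and the one after it):

# Sketch: manual restart after a failed recover_replace.
DB=MYDB
CMDFILE=/tmp/standby_restart.dbm
{
    echo "db_admin"
    echo "db_restartinfo"                                # check restart information first
    echo "util_connect"
    echo "recover_start ARCH LOG 4715"                   # resume with recover_start again
    echo "recover_replace ARCH /path/to/logfiles/ 4716"
    echo "recover_cancel"
    echo "util_release"
} > "$CMDFILE"
dbmcli -d "$DB" -u control,control -i "$CMDFILE"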
I guess that's what Tilo mentioned. I hope it's enough to look at the
last log restore that returned error code 0 and start with that one.
Do you inspect the error codes of the recover_start / recover_replace
calls? How do you know that manual intervention is required? Or can you
send me the output of your script when it runs successfully?
I use a slightly different approach here. Since there are many scripts
running, I don't do error checking in every script separately, because most
of the time when something goes wrong I want to solve it manually. Therefore
all scripts scheduled in cron send their output to log files. At regular
intervals (5 min) I run a watcher script. That script scans all
changed logfiles for certain error messages or for the absence of "approved
OK" messages (I use awk to do that). If it encounters an error message, it
sends me a mail with the content of the logfile. Then I intervene.
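For what it's worth, a rough sketch of such a watcher (the directory, the "approved OK" marker, the error patterns and the mail address are only examples; find -mmin is the GNU way to find recently changed files):

#!/bin/sh
# Sketch only: flag logfiles changed in the last 5 minutes that either contain
# an error message or are missing the "approved OK" marker, and mail them.
LOGDIR=/var/log/maxdb-standby
MAILTO=dba@example.com

for f in $(find "$LOGDIR" -name '*.log' -mmin -5); do
    # awk exit status 0 = "needs attention" (error seen, or no approved OK found)
    if awk 'BEGIN { bad = 0; ok = 0 }
            /[Ee]rror|ERR/ { bad = 1 }
            /approved OK/  { ok  = 1 }
            END { exit !(bad || !ok) }' "$f"
    then
        mail -s "standby recovery problem in $f" "$MAILTO" < "$f"
    fi
done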
thanks again,
Raimund
--
7th RedDot Anwendertagung (user conference) of the RedDot Usergroup e.V. on 31.1.2005
Pinuts presents a new development, http://www.pinuts.de/news
Pinuts media+science GmbH http://www.pinuts.de
Dipl.-Inform. Raimund Jacob [EMAIL PROTECTED]
Krausenstr. 9-10 voice : +49 30 59 00 90 322
10117 Berlin fax : +49 30 59 00 90 390
Germany
--
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
* System Engineer, Verzekeringen NV *
* www.verzekeringen.be *
* Oostkaai 23 B-2170 Merksem *
* 03/6416673 - 0477/340942 *
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*