Hi Raimund,

I have attached a script we use to obtain the same result as you want;
maybe you can use some ideas from it.
I solved the problem of knowing which backup file to load next a little
differently, by keeping a counter in a separate file (the filename
actually contains the counter, e.g. last.121 means 121 was the last
applied logfile number).
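Reading the counter back is plain shell parameter expansion; the
attached script does it like this (the path is from our setup):

    lastarchfile=`ls /var/maxdb/logs/last*`   # e.g. /var/maxdb/logs/last.121
    lastarchnr=${lastarchfile##*.}            # everything after the last dot -> 121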
Here is our situation in brief, so you can better understand the
script:

There is a master database on one server and a slave database on
another. In the beginning we made a full backup of the master db and
applied it on the slave db without bringing the slave db online
afterwards (if you bring it online, you can't apply additional backups
anymore).
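In dbmcli that initial restore looked roughly like this (the medium
name FullBack is just an example here, you define your own with
medium_put; the important part is not to issue db_online at the end):

    db_admin
    util_connect
    recover_start FullBack DATA
    util_release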
Since that full backup I have had autolog activated on the master db,
so the logs are archived at regular intervals. The location those logs
are written to is actually a drbd device. A drbd device is a network
mirroring device, so anything written to it is immediately available on
the slave db.
Every 12 hours a script (the one attached) is run on the slave
database. The script has the following steps:

1) copy the archived logfiles from the drbd device to a local directory
2) dynamically build a command script that will be executed by dbmcli in
the very last step; it starts by bringing the database into admin mode
and doing util_connect
3) look for the counter file and extract the lognumber of the last
applied logfile
4) loop over every .arch file (archived logfile)
5) if the lognumber of the .arch file is lower than the last applied
lognumber -> skip it
6) if the lognumber of the .arch file is equal to the last applied
lognumber -> apply that log again (this is needed because then you are
sure to have applied the last log PAGE, as already described in earlier
mails); the first log you apply must always start with "recover_start
ARCH LOG $lognr"
7) subsequent logfiles are applied with the command "recover_replace
ARCH </path/to/logfiles/> $lognr"
8) update your counter file with the last lognumber you applied
9) end with recover_cancel
10) util_release (don't forget this one)
11) bring the database back to sleep
12) execute the dynamically built script (a sample of what it generates
is shown below)
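To give you an idea, a generated util_script.txt could look like this
(the lognumbers are made up):

    db_admin
    util_connect
    recover_start ARCH LOG 121
    recover_replace ARCH /var/maxdb/logs/arch 122
    recover_replace ARCH /var/maxdb/logs/arch 123
    recover_cancel
    util_release
    db_stop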

I have been using this for over a year and it works perfectly, so I
hope it can help you too.

If recover_replace fails at a certain point and you need to restart,
first do a db_restartinfo, because the last file you imported is not
necessarily the last file in the database (it has something to do with
savepoints).
You also have to start with recover_start again for the first log and
then recover_replace for subsequent logs.
So if the script fails, it needs manual intervention. I guess I could
improve the script on that point, but I haven't had the time.
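The manual fix then boils down to the following (lognumbers made up;
take the "Used LOG Page" value from db_restartinfo and pick the .arch
file whose page range contains it, see Tilo's explanation quoted
below):

    dbmcli -d eccentxd -u dbm,sappy db_restartinfo
    dbmcli -d eccentxd -u dbm,sappy
    db_admin
    util_connect
    recover_start ARCH LOG 122
    recover_replace ARCH /var/maxdb/logs/arch 123
    recover_cancel
    util_release
    db_stop

And don't forget to put the counter file right again afterwards.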

Two other things I found out that are important; getting them wrong
led to an emergency shutdown here:
1) the LOG_SEGMENT_SIZE parameter needs to be at least 1/3 of your
total log area size
2) your data area needs to be large enough while importing logs
(because temporary data is written somewhere)
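You can check the parameter beforehand with dbmcli; if I remember
correctly param_directget works for that (otherwise check the help
output for the param_* commands):

    dbmcli -d eccentxd -u dbm,sappy param_directget LOG_SEGMENT_SIZE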

Regards,

Filip Sergeys


===== script start ============
#!/bin/sh

export PATH=$PATH:/var/maxdb/programs/bin
# refuse to run if drbd is not up, otherwise the logs may not be current
if [ ! -e /proc/drbd ]; then
        echo "Drbd not running !"
        exit 1
fi

# fetch the archived logfiles from the drbd device to a local directory
/bin/rm /var/maxdb/logs/arch*
/usr/bin/sudo mount /dev/nb0 /var/maxdb/logs/syncrologs
/bin/cp /var/maxdb/logs/syncrologs/arch* /var/maxdb/logs/

echo "db_admin" > util_script.txt
echo "util_connect" >> util_script.txt

counter="0"
lastarchfile=`ls /var/maxdb/logs/last*`
lastarchnr=${lastarchfile##*.}
for file in `ls /var/maxdb/logs/arch.* | sort`
do
        lognr=${file##*.}
        if [ "$lognr" -lt "$lastarchnr" ]; then
        continue
        fi
        if [ "$counter" -eq 0 ]; then
        echo "recover_start ARCH LOG $lognr" >> util_script.txt
        else
        echo "recover_replace ARCH /var/maxdb/logs/arch $lognr" >>
util_script.txt
        fi
        counter=$(( $counter + 1 ))
done

# remember the last applied lognumber in the counter file
# (only if we actually applied something, otherwise keep the old one)
if [ "$counter" -gt 0 ]; then
        /bin/rm /var/maxdb/logs/last*
        /usr/bin/touch /var/maxdb/logs/last.$lognr
fi
/usr/bin/sudo umount /var/maxdb/logs/syncrologs

echo "recover_cancel" >> util_script.txt
echo "util_release" >> util_script.txt
echo "db_stop" >> util_script.txt

dbmcli -d eccentxd -u dbm,sappy -i util_script.txt
/bin/rm util_script.txt
==== end script ==========
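For completeness: the crontab entry that runs it every 12 hours looks
something like this (the install path is of course whatever you choose):

    0 */12 * * * /usr/local/bin/apply_archlogs.sh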


On Wed, 2005-01-26 at 18:22, Raimund Jacob wrote:

    hello tilo, all!
    
    >>> now, is it correct that the approach is this: scan the output of
    >>> backup_history_list for the last log id that was successfully imported,
    >>> then identify all logbackup-files with an extension that is
    >>> numerically higher than this and recover_start <medium> <extension of
    >>> file> for each of them.
    >> If you end log restores in between individual log files with
    >> recover_cancel, it might be needed to reapply one or more log backups
    >> more than once. So if you use recover_cancel in your script, you should
    >> use the "Used LOG Page" value of db_restartinfo and the 8th and 9th
    >> columns of backup_history_list and backup_history_listnext to find the
    >> log backup needed for continuing the log restore. The "Used LOG
    >> Page" value must be in the interval given by the 8th and 9th columns of
    >> backup_history_list/next.
    
    this log recovery is giving me a headache... i now have a script that 
    looks at backup_history_list of the importing host to determine which 
    was the last log fragment that was successfully imported. then it looks 
    at all available files and tries to insert them using recover_start. the 
    reason for one-file-at-a-time is that i am doing this with dbmcli calls
    in a simple shell script.
    
    playing around with dbmcli interactively i get this impression:
    look at db_restartinfo to find out what "Used Log Page" the instance
    currently is at. looking at all available files (medium_label xxx), find
    out which fragment has that page (might be at the begin/end of a fragment
    or in between when recover_cancel was used).
    
    even if one fragment ends with this page, recover_start from that
    fragment and treat -8020 as an ok to recover_replace the next fragment,
    and continue like this for all the fragments you want to insert.
    
    this seems to work, but i cannot stop. when i quit dbmcli (or use
    backup_cancel or recover_cancel) the kernel dies with "ERR     8 Admin
    ERROR 'cancelled' CAUSED EMERGENCY SHUTDOWN". db_restartinfo and the
    backup_history_list (after another backup_history_open) show that the
    log was successfully imported. question: how do i "commit" this backup
    session without going offline? i can't find a command for that in the
    'help' output.
    
    question: does that sound like it might be correct? is this actually
    scriptable? it seems i need 'expect' or a cleverer script for the
    complex dbmcli interaction.

    isn't this supposed to be a standard problem? how do people do that?
    manually with the dbmgui?
    
    thanks for your patience,
        Raimund
    
    -- 
    7th RedDot user conference of the RedDot Usergroup e.V. on 31.1.2005
    Pinuts presents a new development, http://www.pinuts.de/news
    
    Pinuts media+science GmbH                 http://www.pinuts.de
    Dipl.-Inform. Raimund Jacob               [EMAIL PROTECTED]
    Krausenstr. 9-10                          voice : +49 30 59 00 90 322
    10117 Berlin                              fax   : +49 30 59 00 90 390
    Germany
    
    
    

-- 
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
* System Engineer, Verzekeringen NV *
* www.verzekeringen.be              *
* Oostkaai 23 B-2170 Merksem        *
* 03/6416673 - 0477/340942          *
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
