>> database replication to support having a hot failover NT database server

>We gave up on using eNp-Ty for database work two years ago.

The NT OS was never part of the problem.  NT ran fine; it was a dedicated
DB2/NT database server.  Linux is nice, if you can't run with the big dogs
on OS/390. :-)  We have used Linux/390.  It works well.

>> 1) Database replication adds a great many I/Os (and a little CPU use) to the
>> production database server.  The number of database disk I/O's roughly
>> triples as captured changes are written to "mirror" database tables and then
>> read from "mirror" database tables and then deleted from "mirror" database
>> tables (also triples the number of writes to the log drive).

> Linux uses memory much more efficiently to cache in the database.

The bufferpool contents are managed by the same code (DB2 code) on both
platforms.  CPU and memory speed are not the problem.  The OS doesn't matter.
A disk I/O is a disk I/O.  Changed data (and index) pages get written to disk
when the update is captured, and changed data (and index) pages get written
to disk when the updates are deleted from the source database.  Log file
writes occur for all database updates.  All the cache in the world can only
eliminate the I/Os to read the changed data from the capture tables; you are
pretty much stuck with the rest.
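The arithmetic above can be sketched as a back-of-the-envelope model.  The
one-page-write-per-stage assumption and the function name are mine, for
illustration only; real costs depend on page clustering, index count, and
bufferpool behavior:

```python
# Rough disk-I/O model for SQL replication with capture ("mirror") tables.
# Assumption (not from the post): each captured update costs one page
# write at each stage.

def replication_io(updates):
    """Estimate disk I/Os for a batch of source-table updates."""
    source_writes = updates    # normal writes to the source tables
    capture_writes = updates   # changes written to the "mirror" tables
    prune_writes = updates     # pages written again when captured rows are deleted
    capture_reads = updates    # reads of the "mirror" tables; cache can absorb these
    # Every one of those writes is also logged, so log writes triple too.
    data_writes = source_writes + capture_writes + prune_writes
    log_writes = data_writes
    return {"data_writes": data_writes,
            "log_writes": log_writes,
            "cacheable_reads": capture_reads}

io = replication_io(1_000_000)
# Data and log writes both triple; only the capture-table reads are cacheable.
assert io["data_writes"] == 3_000_000
```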


>Our production databases aren't quite the size of yours, but our primary
>database runs "write only" with very rare reads from disk, no matter how
>busy it gets.

>I would strongly advise having a battery backed write back SCSI Raid
>controller (128MB+ on the controller) running a RAID 10 for any production
>database.

A database with few updates would probably have no problem being replicated.
As long as you can handle having the number of disk I/Os tripled,
replication can work and work well.  The smaller the database the better,
the fewer updates the better.

We had the battery-backed cached SCSI RAID cards.  It was a nice, powerful
PC.  The battery-backed write cache benefits the application, since control
returns to the application before the disk I/O occurs.  The disk I/O still
has to occur; the heads still need to dance left and right.  This was a
database with a 90% hit ratio that was still doing over 100 million disk
reads a day, most of them from 9-4 in prime time.
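To put those figures in perspective, a quick calculation using only the
numbers from the paragraph above (the 7-hour prime shift is my reading of
"9-4"):

```python
# 90% bufferpool hit ratio with 100 million physical reads/day implies
# roughly a billion logical reads/day: only 1 in 10 logical reads misses.
physical_reads_per_day = 100_000_000
hit_ratio = 0.90
logical_reads_per_day = physical_reads_per_day / (1 - hit_ratio)

# Spread over the 9-to-4 prime shift (about 7 hours):
prime_seconds = 7 * 3600
physical_reads_per_second = physical_reads_per_day / prime_seconds
# Roughly 4,000 physical reads every second -- no amount of controller
# cache makes that a small machine.
```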

>> 2) When you have a complete database on one platform and an empty database
>> on the other, the built-in process to obtain initial synchronization of the
>> two databases did not work.  The database was too big.  We had to play
>> tricks with restoring the production database and "tricking" replication
>> into thinking that it had been replicating the database all along.  Kind of
>> tricky.  We were on V6, this may be better now.

>Use the DJRA tool https://www6.software.ibm.com/dl/datajoiner/djra-p/

We did use the DJRA.  It is not perfect.  The size of the database
matters.  One of the tables was 30+ gig with over 70 indexes (gotta love
purchased apps).  Log-full conditions stopped us very early.  When I was
doing this (things may have changed now) the LOAD command didn't work for us
(either it wasn't supported or we hit a bug, can't recall) and there was no
support for turning off logging with ALTER TABLE NOT LOGGED INITIALLY.

>> 6) Setting up and monitoring database replication is a very manual, time
>> consuming process.  We probably spent 10-20 hours a month monitoring,
>> tweaking and tuning the process AFTER it stabilized.

>DJRA is a must have.  It makes this much easier.

DJRA has its limitations.  It has no mechanism to call the DBA on call, for
instance.  People ran reports off the target database.  If the data on the
target database was not within 15 minutes of being up to date with the
source database, the users called.  I like staring at DJRA screens as much
as the next guy, but it just did not meet our needs.  We had an MVS job that
ran every 15 minutes and called the DBA on call if the target database was
not up to date.
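The currency check that job performed can be sketched like this.  This is a
minimal illustration in Python, not the actual MVS job; the function names
and the `page_dba` hook are mine, and the 15-minute threshold comes from the
users' expectation described above:

```python
from datetime import datetime, timedelta

# Hypothetical replication-lag check: page the on-call DBA when the
# target database's last successful apply is older than the threshold.
THRESHOLD = timedelta(minutes=15)

def target_is_stale(last_apply: datetime, now: datetime,
                    threshold: timedelta = THRESHOLD) -> bool:
    """True when the target has not been refreshed within the threshold."""
    return now - last_apply > threshold

def check_and_page(last_apply: datetime, now: datetime, page_dba) -> None:
    """Run one 15-minute polling cycle; page_dba is a placeholder callback."""
    if target_is_stale(last_apply, now):
        page_dba(f"Replication lag exceeds {THRESHOLD}; "
                 f"last apply at {last_apply:%H:%M}")
```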

>> We eventually ported the DB2/NT database to DB2/390.  We now have no
>> failover (other than 72 hour mainframe DR failover), but the database is
>> sitting on rock solid hardware and rock solid OS.

>Well, everything works better when it's not eNp-Ty inside, but at the
>moment our budget does not allow us to add a mainframe.

If you are good, maybe Santa will bring you a mainframe next year...

-KGK

:::  When replying to the list, please use 'Reply-All' and make sure
:::  a copy goes to the list ([EMAIL PROTECTED]).
***  To unsubscribe, send 'unsubscribe' to [EMAIL PROTECTED]
***  For more information, check http://www.db2eug.uni.cc