2008/11/27 David Jurke <[EMAIL PROTECTED]>

>  Whoa!
>
>
>
> Okay, I need to go talk to the DBAs about this lot, lots of it is too far
> on the DBA side for me to comment intelligently on it. It does sound
> promising, though - if we back up daily only the current month's data (the
> rest will be in "static" partitions), only the weekly(?) full backup will
> have space/time issues.
>
>
>
> But, after some thought...
>
>
>
> Basically, as I understand it, your first group of comments are about not
> backing up empty space, as per your example if there is only 10GB data in a
> 100GB data file. However, our database is growing rapidly, and our DBAs tend
> to allocate smaller tablespace files more frequently (rather than huge files
> seldom),
>

Bad idea(tm)

Oracle writes checkpoints ( something like a 'stamp of time' ) into each
datafile header and into other critical files and memory structures, so that it
can restore everything to a known state if a crash occurs. These marks are
written frequently, and of course they have a cost in CPU/IO time. The more
datafiles you have, the more time is spent coordinating/executing the stamp
( it's an exclusive operation ).

Again, it would be very useful to know which release of the Oracle RDBMS you
are running ( 9iR2, 10gR2...? ) and on what platform/architecture ( Linux x86,
x86_64, Itanium 2...? ), but I think you will have no problem defining bigger
datafiles ( keep in mind that Oracle defines 128GB as the maximum size of a
'smallfile' datafile ). We are using tablespaces of 360GB each, built from
groups of 128GB datafiles. Another reason to use bigger datafiles: less time
spent by the DBA creating datafiles ;)
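
For illustration only, creating a tablespace from a few big datafiles could
look something like this ( the name, paths and sizes here are made up, adjust
them to your own storage ):

-- hypothetical example: one tablespace built from a few large datafiles
CREATE TABLESPACE app_data
  DATAFILE '/u02/oradata/PROD/app_data01.dbf' SIZE 20G,
           '/u02/oradata/PROD/app_data02.dbf' SIZE 20G;

-- grow it by adding another large file instead of many small ones
ALTER TABLESPACE app_data
  ADD DATAFILE '/u02/oradata/PROD/app_data03.dbf' SIZE 20G;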

> so at any time there's probably not more than 20 GB of unused space in the
> database files, which is less than 10% of our database currently, and the %
> will only decrease. So yes there would be a benefit, but not huge.
>
>
Ok, rman comes to the rescue again ;). If you're on 10gR2, you can define a
'block change tracking file' where Oracle stores a little information about
which blocks have changed. You take a first full backup, and after that rman
uses the tracking file to back up only the changed data blocks. The tracking
file can be reset when convenient ( after each backup, for example ). I did not
say it before, but rman can do full/cumulative/differential backups.
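
A minimal sketch of how that looks on 10gR2 ( the tracking file path is just an
example ):

-- in SQL*Plus, enable block change tracking once
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/oradata/PROD/bct.f';

# in RMAN: one level 0 backup, then fast incrementals that read only the
# blocks marked as changed in the tracking file
BACKUP INCREMENTAL LEVEL 0 DATABASE;
BACKUP INCREMENTAL LEVEL 1 DATABASE;
# or cumulative: everything changed since the last level 0
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;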


>
> Your RMAN example still backs up the entire database to disk and then later
> to tape, which leaves us with the problems of disk space and backup
> duration. As mentioned above, these won't be mitigated very much by only
> backing up data and not empty space.
>

rman has a lot of options; of course you can back up only a particular
tablespace or a group of them, a single datafile even if the tablespace has
more than one, only the archive logs, etc...


http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/toc.htm
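
A few examples of that granularity ( the tablespace and datafile names below
are just placeholders ):

# inside RMAN, connected to the target database
BACKUP TABLESPACE users;
BACKUP TABLESPACE users, example;
BACKUP DATAFILE '/u02/oradata/PROD/app_data01.dbf';
BACKUP ARCHIVELOG ALL;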


> What I'd like to do is halve the backup time and remove the requirement for
> intermediate disk storage by backing up the tablespaces (RMAN or otherwise)
> straight to tape.
>

Oracle provides a software API for backup solution developers ( Legato,
Tivoli, etc... ), but it is offered for $$$; I don't know the price or the
conditions.


> For which the only solution anyone's suggested which would actually work
> with Bacula is a variation of Kjetil's suggestion, running multiple backup
> tasks, one per tablespace. A little ugly in that there will be a LOT of
> backup jobs and hence a lot of emails in the morning, but it would work.
>

Do it with rman ( datafile by datafile or tablespace by tablespace ). rman
will NOT lock the datafile header and no extra redo will be generated.
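
A rough sketch of that idea: a tiny wrapper script called once per tablespace
( for example from each Bacula job's RunBeforeJob ), so every job stays small.
The script name, staging path and channel name are all placeholders:

#!/bin/sh
# backup_ts.sh <tablespace> - hypothetical helper, one rman run per tablespace
TS=$1
rman target / <<EOF
RUN {
  ALLOCATE CHANNEL d1 DEVICE TYPE DISK FORMAT '/backup/staging/%d_${TS}_%U';
  BACKUP TABLESPACE ${TS};
}
EOF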


>
>
> The DBAs are already talking about partitioning and making the older
> tablespaces read-only and only backing them up weekly or fortnightly or
> whatever, which solves the problem for the daily backups but still leaves us
> with a weekly/fortnightly backup which won't fit in the backup staging disk
> and won't complete before the next backup is due to kick in. It may be that
> we have to just accept that and not do daily backups over the weekend, say,
> working around the disk space issue somehow.
>

Disk is cheap today, think about it. Do you need the latest and fastest disks
for that? IMHO, no.


>
> For various reasons our hot backup site isn't ready yet. The business have
> agreed that in the interim an outage of several days is acceptable, while we
> restore from tape and start it all up. At this stage (it's a work in
> progress), an outage of this application doesn't affect external customers,
> only internally, and the pain is not great. Long term we will have a hot
> database replica at another site ready to step up to production status in
> the event of problems, but as I said this isn't ready yet. I don't know
> whether it will be Oracle Data Guard, but it'll be equivalent. We already do
> this for other applications/databases, the DBAs are well on top of this
> stuff. I don't know the details.
>
>
The Data Guard option is the best one; it gives you a lot of the functionality
required for the synchronization ( I always propose LogWriter async mode over
Archiver async: less data loss if things go bad at the primary site, and very
little overhead ). And you can back up the database at the standby site,
freeing your primary site from this task. If you are designing your contingency
site, now is the moment to see whether you can put in a SAN with snapshot
capabilities there, and your problems will be nearly solved ;)
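
For reference, on 10gR2 the LogWriter async transport towards the standby is
set with something along these lines ( the TNS service name and DB_UNIQUE_NAME
are placeholders for your own Data Guard setup ):

ALTER SYSTEM SET log_archive_dest_2='SERVICE=standby_tns LGWR ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby_db' SCOPE=BOTH;

With that in place, rman backups can be taken at the standby site instead of
the primary, as mentioned above.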





> Cheers,
>

Regards

D