This is more of a general or best-practice question for online DB backups.
I'll get a page for a DB that is about to fall over because it can't pass
the archive logs to TSM fast enough and the archive log space is filling.
When I look at the size of the logs being passed, it's usually something
small like 4-12K, multiple times per minute.  I suggest that they increase
their log size, as the backup client is spending more time packaging and
connecting to the backup server than it takes to actually send the 4-12K
of log data.
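To put rough numbers on that claim, here is a quick back-of-envelope
sketch.  Every figure in it is made up for illustration (per-object
overhead and link speed will be different in your shop); the point is
only how badly a fixed per-file cost eats into effective throughput when
the files are tiny:

    # Back-of-envelope sketch of why tiny archive logs choke the backup
    # client.  The numbers are hypothetical assumptions, not measurements:
    # a fixed per-file cost to package the object and open a session to
    # the backup server, plus a sustained rate once data is flowing.

    PER_FILE_OVERHEAD_S = 0.5   # assumed seconds of packaging/connect per log
    NETWORK_MB_PER_S = 50.0     # assumed sustained throughput to the server

    def effective_throughput(log_size_mb: float) -> float:
        """MB/s actually achieved once per-file overhead is included."""
        transfer_s = log_size_mb / NETWORK_MB_PER_S
        return log_size_mb / (PER_FILE_OVERHEAD_S + transfer_s)

    for size_mb in (0.008, 0.012, 1, 100, 200, 4096):
        eff = effective_throughput(size_mb)
        print(f"{size_mb:>8.3f} MB logs -> {eff:7.2f} MB/s effective "
              f"({100 * eff / NETWORK_MB_PER_S:5.1f}% of link)")

With those assumed numbers, 4-12K logs move data at well under 1% of the
link rate, while logs in the hundreds of MB get most of the link back.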

I understand that these logs can be dictated by time or size, but one
would think there's a happier medium between sending 4-12K and 4-12GB per
log file.  I've seen 100-200MB log files that seem to be a good middle of
the road.
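For what it's worth, if the database in question happens to be DB2 (the
post above doesn't say which product it is, so treat this purely as an
assumption), the log file size is the LOGFILSIZ parameter and is counted
in 4 KB pages, so landing in that 100-200MB range is just a unit
conversion:

    # Assumes a DB2 database, where LOGFILSIZ is expressed in 4 KB pages.
    # (The database product is not named in the post; adjust for yours.)

    def logfilsiz_pages(target_mb: int, page_kb: int = 4) -> int:
        """Pages needed for a target log file size, counting 4 KB pages."""
        return target_mb * 1024 // page_kb

    for mb in (100, 200):
        print(f"{mb} MB log file -> LOGFILSIZ {logfilsiz_pages(mb)}")

    # 200 MB works out to LOGFILSIZ 51200, which would be applied with
    # something like:  db2 update db cfg for <dbname> using LOGFILSIZ 51200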

Do others experience this with their DBAs, and what do you do?

Regards,

Charles
