I may have missed a large part of this thread; it seems that a normal backup stgpool works just fine (notwithstanding the courier damaging media in transit -- you may need a "closed container with padding" contract, like Paul Seay is doing). Your concern then becomes (1) the recovery plan (DRM solves this) and (2) the time it takes to recover 50 servers; most folks will tell you the business will survive if you can just identify the mission-critical servers and recover them first.
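For (1), a minimal DRM sketch might look like the following -- the pool name and plan-file prefix here are placeholders, not anything from this thread:

```
/* Tell DRM which copy storage pool holds the offsite data, */
/* and where to write the recovery plan file.               */
set drmcopystoragepool OFFSITEPOOL
set drmplanprefix /recplans/

/* Then, after each BACKUP STGPOOL / BACKUP DB cycle, */
/* generate the disaster recovery plan file.          */
prepare
```

Run from a scheduled admin command or macro, this keeps the plan file current with each night's offsite cycle.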
The *real* solution here, as anywhere, depends on how much it's worth (X dollars) to get the data back (in Y hours). Most managers just need to understand the cost associated with faster recovery times -- so calculate the cost of filespace- vs. node-based collocation for a given example server, using your best guess about which server the business depends on most. Or, get the customer to classify the service level for their apps & servers, using just three categories (mission-critical, production, non-production), and calculate the cost of the varying collocation settings for the mission-critical tier. If you can winnow the list down to just one or two file servers that need collocation, you'll be okay (all the other data can be restored; it will just take longer for some than others).

For most offsite DRs, imho, you may get away with no collocation for the offsite tapes. Mission-critical database servers are generally backed up daily (full-online, full-snapshot, or BCVs), so the data is already clumped together -- no need for collocation. It's the file servers that will bite you on a DR: configure them carefully with high-level directories to allow for multi-session restore, and properly identify and isolate the key server or two that need offsite collocation. This also means a separate onsite storage pool, to minimize the amount of data getting collocation treatment. And there are other varying choices to be made about collocation (i.e., onsite vs. offsite, controlling the number of tapes in the pool, etc.).

The question of separating active from inactive data is essentially answered with backupsets and export (filedata=active); implementing this for the new MOVE NODEDATA got a "concerned" response -- to do it requires the aggregates be rebuilt, which becomes very time-consuming. Seems like an offsite reclamation "feature" would be nice...
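As a sketch of that separate-pool approach -- all pool and device-class names below are invented for illustration, and the critical nodes would be pointed at the new pool via their copy group's DESTINATION:

```
/* Separate primary pool, collocated by filespace, for the  */
/* one or two mission-critical file servers only; 3590CLASS */
/* is an assumed device class name.                         */
define stgpool CRIT-TAPE 3590CLASS maxscratch=20 collocate=filespace

/* Matching offsite copy pool, also collocated, so the DR  */
/* restore of those servers isn't scattered across tapes.  */
define stgpool CRIT-COPY 3590CLASS pooltype=copy maxscratch=20 collocate=filespace

/* Nightly, copy just the critical pool offsite. */
backup stgpool CRIT-TAPE CRIT-COPY
```

Keeping the collocated pool small is the point: everything else stays in the ordinary, uncollocated pools, so the tape-count and mount-time cost of collocation is paid only where the recovery-time requirement justifies it.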
Try to articulate a way of getting just the active versions reclaimed, then submit it to development for review (via SHARE it would get a good peer review and visibility with the developers). Hey, I like the way Gerald said it:

   backup stgpool <somepool> <copypool> filetype=active

This has its drawbacks, but would seem to come closer to what's desired than the speed of backupset or export. Alternatively, there IS the point that most customers end up using point-in-time parameters when doing filesystem restores.

Hope this helps.

Don France
Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
(408) 257-3037
[EMAIL PROTECTED]

----- Original Message -----
From: "Rob Schroeder" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, April 30, 2002 12:16 PM
Subject: copy storage pools

> Here is my dilemma. I have 50 Win2k servers. Our auditors demand a
> complete disaster recovery plan, and I only have one data center. I have
> about 2 terabytes of active data. There are a couple of Oracle servers, SQL
> servers, data servers, and a whole bunch of application servers. I cannot
> duplicate 60 3590E tapes every day with a backup storage pool command. I
> also cannot specify 50 generate-backupset commands and expect my operators
> to run them right, much less promptly. Yet I still need to have offsite
> copies of my data. You may say that's the cost of doing infinite
> incrementals, but tell that to the companies using TSM that worked in the
> WTC, or had their building ruined by a tornado last week, or the one that
> will burn to the ground next week from arson. Am I supposed to gamble my
> billion-dollar business on that?
>
> Rob Schroeder
> Famous Footwear
> [EMAIL PROTECTED]