electronic vaulting for archive

2011-09-13 Thread Mehdi Salehi
Hi,
Electronic vaulting for backup data is possible with active-data pools and
equipment like the TS7650G. The question is: how do you vault archive data,
which is not permitted in active-data pools?

Thank you,
Mehdi


TSM 6.2 installer

2011-09-13 Thread Ehresman,David E.
Anyone had the TSM 6.2.1 server (AIX) installer fail saying there is not enough 
space when in fact there is plenty of space?  If so, what did you do?

I'm trying to install on AIX 6.1 TL6.

David Ehresman


Re: TSM 6.2 installer

2011-09-13 Thread Zoltan Forray/AC/VCU
Dumb question. Did you have the correct permissions (root)?  When I had a
similar issue, it was because I wasn't root or the filesystem owner.





Disk storage pool 100% utilized but NOT cached

2011-09-13 Thread Zoltan Forray/AC/VCU
I think I know the answer to this question, but figured it would be a good
question to ask:

This morning I was getting reports that backups were failing because they
could not get space in the disk stgpool.


When I did a Q STG, this is what I get:

9:25:34 AM   MOON : q stg
Storage      Device      Estimated   Pct    Pct    High   Low   Next
Pool Name    Class Name  Capacity    Util   Migr   Mig    Mig   Storage
                                                   Pct    Pct   Pool
-----------  ----------  ---------  -----  -----  -----  ----  -------
ARCHIVEPOOL  DISK            205 G    0.3    0.3     90    70   TS1130
BACKUPPOOL   DISK          6,369 G  100.0   33.9     90    70   TS1130

Why would a disk storage pool be 100% Utilized and only 34% migratable
when I don't have Caching turned on?


Zoltan Forray
TSM Software & Hardware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html


Re: electronic vaulting for archive

2011-09-13 Thread Steven Langdale
My definition of Electronic vaulting is using electronic means to transfer
backup/archive data to your offsite repository.  This is really regardless
of what the target is.

I currently electronically vault to both ProtecTIER and TS3500 libraries.

So, in essence, if your primary-to-copy-stgpool transfer is not done by
physically moving media, you are vaulting electronically.
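In TSM terms, what Steven describes is an ordinary storage-pool backup whose copy pool sits on server-attached hardware (a ProtecTIER VTL, a remote library) rather than on media that is physically trucked offsite. A minimal sketch, with hypothetical device-class and pool names:

```
define stgpool OFFSITECOPY VTLCLASS pooltype=copy maxscratch=200
backup stgpool BACKUPPOOL OFFSITECOPY maxprocess=4
```

Note that copy storage pools, unlike active-data pools, accept archive data as well as backup data, which is why this route works for the archive case.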

Steven



Re: TSM 6.2 installer

2011-09-13 Thread Erwann SIMON
Hi David,

You need plenty of space in /opt mainly, but enough space is also needed in
/usr, /tmp and /.
See the installation guide.
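A quick way to eyeball the relevant filesystems before launching the installer (the list of paths follows Erwann's note; the required minimums are in the installation guide and are not repeated here):

```shell
# Sanity-check free space in the filesystems the TSM 6.x installer touches.
# On AIX, "df -g" reports gigabytes; "df -k" is shown here as it is portable.
for fs in /opt /usr /tmp /; do
  df -k "$fs"
done
```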
--
Best regards / Cordialement / مع تحياتي
Erwann SIMON



Re: Disk storage pool 100% utilized but NOT cached

2011-09-13 Thread Prather, Wanda
...because some huge backup is in progress occupying the other 66%.
It can't migrate yet because it's still in progress.





Re: Disk storage pool 100% utilized but NOT cached

2011-09-13 Thread Richard Sims
On Sep 13, 2011, at 10:22 AM, Zoltan Forray/AC/VCU wrote:

 Why would a disk storage pool be 100% Utilized and only 34% migratable
 when I don't have Caching turned on?

A considerable factor is that Pct Migr represents only committed data, while
Pct Util also counts space held by sessions that are still in progress.

Richard Sims
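In other words, the gap between Pct Util and Pct Migr is the space held by sessions that are still writing. A toy calculation (the numbers are hypothetical, picked only to reproduce the Q STG output in the original post):

```shell
capacity=6369    # GB, "Estimated Capacity" of BACKUPPOOL
committed=2160   # GB of committed, migratable data (~33.9%)
inflight=4209    # GB pre-allocated by the still-running backup
echo "Pct Util: $(( (committed + inflight) * 100 / capacity ))"   # -> 100
echo "Pct Migr: $(( committed * 100 / capacity ))"                # -> 33
```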


Re: TSM 6.2 installer

2011-09-13 Thread Ehresman,David E.
Yes, running as root.



Re: Disk storage pool 100% utilized but NOT cached

2011-09-13 Thread Zoltan Forray/AC/VCU
Yes, that is kinda what I guessed.  I have the 4TB SQL backup still 
running from Saturday.  It is only 50% done.

There has got to be a better way to handle SQL backups of LARGE databases. 
 This is causing numerous headaches/problems with other backups by 
pre-allocating the space or, if going to tape, locking the tape drives for
the duration.

Del,  I know we discussed this and stripes (I have it configured for 3) 
for improving the speed, but this goes back to my question about breaking 
up the SQL backup into multiple chunks, especially with this new issue of 
it locking 4TB of disk space to contain the backup, until it finishes.  I 
can't have one backup detrimentally affecting everyone else, and at the speed
it is running, it will take almost a week to finish.

Suggestions?





Re: Disk storage pool 100% utilized but NOT cached

2011-09-13 Thread Del Hoobler
Hello Zoltan,

Using STRIPES breaks the backup into multiple chunks.
The STRIPES value that you specify will indicate how
many chunks are created at backup time.
If you still cannot get the backup speeds you need,
and you do not get useful suggestions from this list,
please open a PMR with IBM support. They can get the
TSM performance team involved and let you know how best
to tune your environment to get the best speeds possible.
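For reference, the stripe count Del refers to is given on the Data Protection for SQL backup command itself; a hypothetical invocation (the database name and count are examples only, not values from the thread):

```
tdpsqlc backup MYBIGDB full /stripes=4
```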

Thanks,

Del





Re: Disk storage pool 100% utilized but NOT cached

2011-09-13 Thread Billaudeau, Pierre
Hi Zoltan,
Is it possible you had a very large backup (or several) running, using the
remaining 66% (over 4 TB) of your backup pool? A backup in progress freezes
its data space on the disk pool, so that space is not eligible for migration.

Pierre Billaudeau
Analyste en stockage
Livraison des Infrastructures Serveurs
Société des Alcools du Québec
514-254-6000 x 6559




Follow-up: Question about Hybrid method upgrade to TSM V6.2.2 and server-to-server export

2011-09-13 Thread Prather, Wanda
Posting a follow-up to this thread.
Been there, done that, didn't like it when I tried it.

Customer had a TSM 5.5 server with a ~100 GB database.  Moving to 6.2.2 on new
hardware.

Testing showed that the EXTRACT for the upgrade took only 2 hours, but the 
INSERTDB would take at least 10 hours.  Business-critical backups run from 
AS400 systems during the day, with normal Win and Linux filesystem backups at 
night, plus some Oracle TDP's, Exchange, etc.  So as to not miss the AS400, 
Oracle, and Exchange backups, we decided to do a hybrid upgrade.

The short version of the steps:

*Do as much ahead of time as possible (like upgrading the TIP, for 
example, and running fibre to the new server)

*Shut down V5.5

*Run the extract

*Bring V5.5 back up in production

*Let normal backups run overnight to V5.5

*On 6.2.2 run the INSERTDB overnight, taking as long as needed

*Return in the morning, bring up the 6.2.2 server

*Define disk storage pools, attach some tape drives (some are still 
attached to V5.5)

*Verify 6.2.2 is ready for prime time by testing some backups & restores

*Disable sessions on both servers

*Swap IP addresses, clean up DNS alias to point to 6.2.2

*Bring both servers back up

*6.2.2 is now production, but the data is 24 hours behind

*Do a SET SERVERNAME on 5.5 server to OLD

*Define server-to-server connections

*Run EXPORT node * from V5.5 to V6.2.2 with fromdate/fromtime set to 
merge the data from the past 24 hours into V6.2.2
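For reference, that last step corresponds to an administrative command along these lines (the target server name, date, and time are placeholders, not values from the thread):

```
export node * filedata=all toserver=NEWTSM mergefilespaces=yes
       fromdate=09/12/2011 fromtime=18:00:00
```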

Well, that all worked well (we had done two test runs beforehand, don't expect 
to get this right the first try!) up until the EXPORT NODE, where I ran smack 
into IC74464.

I didn't know that, of course, until trying it.  What I expected to see was 
V5.5 mounting the tapes that were written during the previous 24 hours.  What I 
saw instead was old tapes being mounted and just sitting for minutes at a time. 
 I finally cancelled the Export and tried again with just the AS400 data, that 
ran tickety-boo.  Tried the Oracle TDP data, and that ran tickety-boo.  Tried 
again with an ordinary filesystem client, and again the process just wilted and 
I eventually cancelled it.

So research brought me to IC74464, which says:
Directory processing: The FROMDATE / TODATE parameter does not
apply to directories. All directories in a file space are
processed even if the directories were not backed up in the
specified date range.

So, given that all our clients backed up overnight, the V5.5 server would 
eventually have mounted every tape required to export every directory from 
every client, in order to pick up the last 24 hours of filesystem backups.

I don't see any way it is possible or practical to complete this.  Most of the 
changed files would have been backed up anyway during the overnight production 
backups the first night after implementing V6.2.2, long before we picked them 
up with the export.  The only thing we are missing are versions of files that 
changed twice, once during the INSERTDB run and again before the first night of 
V6.2.2 production.  This process might (and it's just a might) work in a site 
that has a DIRMCpool on disk or a VTL for the primary storage pool, but even 
then I think it will take an unreasonably long time.


We got all our TDP backups across, but we're giving up on those missing files;
I don't see any way to get them.  We will just leave the V5.5 server around for
a month, with 1 tape drive.  We can still restore those files from it, if 
somebody needs one.  After a month or so, we'll just declare a 98% victory to 
be sufficient and shut it down.

W


Re: Follow-up: Question about Hybrid method upgrade to TSM V6.2.2 and server-to-server export

2011-09-13 Thread Xav Paice
We hit the exact same issue in test - so we took a rather butcher-ish approach:
- When starting up the 'old' server following the extract, we defined a new, 
temporary stgpool and directed all new data to that pool.  All other volumes 
were marked 'read only'.
- normal night's backup ran, to the temp pools.
- The tapes were moved from the old server to the new one's library.  That's
because there was a new library, but it's not essential; the point is that the
old server must not see the tapes, or it could treat them as scratch and start
overwriting tapes that have useful data on them (!).
- started the 'new' server, and ensured that it had access to the tape data 
(quick test restore was enough for me)
- immediately before running the export node command we DELETED THE OLD VOLUMES
on the old server - I hated doing that, really I did.  The old server was then
left with only the data still available to it in its database, i.e. the stuff
in the temporary disk stgpool.
- With the 'old' server now only containing a tiny amount of new data, the 
export node * ran just fine.

It's a lot of fiddling, especially changing the destination of the copygroups,
but it worked; the import for that one is a 40+ hour job.
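Xav's redirection step maps roughly onto admin commands like these (pool, volume, and policy names are hypothetical; this is a sketch of the idea, not a tested procedure):

```
define stgpool TEMPDISK disk
define volume TEMPDISK /tsm/temp/vol01.dsm formatsize=51200
update volume * access=readonly wherestgpool=BACKUPPOOL
update copygroup STANDARD STANDARD STANDARD type=backup destination=TEMPDISK
activate policyset STANDARD STANDARD
```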



Re: electronic vaulting for archive

2011-09-13 Thread Mehdi Salehi
Thanks Steven. Would you please explain in more detail how you vault archive data?

Mehdi