Re: Re: 6.1 experience so far

2009-12-15 Thread Stefan Holzwarth
Sorry, I do not have details about that problem any longer. I experienced it 
some weeks ago and gave up.
After changing the default queries and my customized ones according to the TSM 
blog, they work again :-)

Thanks a lot
Stefan Holzwarth


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf 
 Of Wanda Prather
 Sent: Monday, December 14, 2009 8:16 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: Re: 6.1 experience so far
 
 1) When you say hang: if you enter Q SESSION, do you see a hung session
 for the admin ID the Reporter uses?
 2) If you look at the actlog, is the last query the Daily Reporter issues
 before it hangs a SELECT against the EVENTS table?
 
 If so, it may be a known bug querying the EVENTS table; I can send you
 more info to get around it...
 
 
 
 On Mon, Dec 14, 2009 at 3:08 AM, Stefan Holzwarth
 stefan.holzwa...@adac.de wrote:
 
  What changes did you make to TSM Operational Reporting?
  In our environment most of the reports hang and give no results.
  Regards
  Stefan Holzwarth
 
 
Re: 6.1 experience so far

2009-12-14 Thread Stefan Holzwarth
What changes did you make to TSM Operational Reporting?
In our environment most of the reports hang and give no results.
Regards
Stefan Holzwarth


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf 
 Of Sam Sheppard
 Sent: Saturday, December 12, 2009 1:44 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: 6.1 experience so far
 
 After viewing the experiences of others on the list (particularly Mr.
 Forray's) and fearing I would jinx myself, I hesitated to 
 post this, but
 decided to go ahead and post our adventures so far.
 
 We had a visit from our Servergraph rep a couple of weeks ago 
 and during
 the conversation discovered that we seemed to be alone, at least among
 their Southern California customers, in implementing TSM Version 6 in
 production.  We began in September and started with Version 6.1.2.  We
 are approaching completion of our project to migrate our existing TSM
 5.5.3 servers, two on z/OS and one on Solaris, to TSM Version 
 6 on a new
 AIX 6.1 P-520 server.
 
 Our total database size for the three existing servers is about 120GB.
 We are sharing a 3494 ATL with 8 TS1120 drives between the Solaris box
 and the Version 6 server, with the Version 6 server acting as the
 library manager. So we may be somewhat on the small end of the customer
 spectrum.
 
 Since we started on a fresh box, it looks like we have avoided many of
 the pitfalls associated with upgrading in place from version 5, but we
 did experience what in hindsight look like fairly minor problems:
 
  IC62978 - active logs fill up due to DB2 table reorg processes. The
  fix was to specify the undocumented ALLOWREORGTABLE NO option.
 
  IC63373 - while running a large image backup (around 600GB) and
  several other clients, received messages ANS1316E and ANR0526W,
  indicating the recovery log is out of space, even though we have 30GB
  and it's not even close to full. The solution is to do the following to
  change a DB2 variable from its standard setting:
 
   1. Use the following db2 command to determine the number of log
   volumes used:
  db2 get db cfg for TSMDB1
   2. Multiply the value for the LOGPRIMARY parameter by 90%.  This
   value should be reflected in NUM_LOG_SPAN.
 
   Update NUM_LOG_SPAN by issuing the following db2 command:
  db2 update db cfg for TSMDB1 using NUM_LOG_SPAN newValue
   You may need to restart the TSM server, which will restart the
   db2 database as well.
 
  IC63637 - We have a large (30-40TB) amount of archived data to move
  from our existing server(s) to version 6. The good news is that the
  large archived image backups exported server-to-server very fast,
  around 60MB/sec. The bad news is that the Version 6 library manager
  function periodically reclaims a tape drive being used by the
  library client, in our case causing the large EXPORT/IMPORT process
  being run to fail and the file being exported at the time to be
  flagged, so that a copy pool tape is requested if the process is
  restarted. The fix for this was to install version 6.1.2.1 and then
  replace the DSMSERV module with a fix version.
 
  Database backups suddenly failed for 5 days in a row, but then
  started working again when support requested various documentation.
  Looks like DB2 communicates with the TSM server via its own OPT
  file, specifying 'localhost' as the TCPSERVERADDRESS, which appeared
  to be failing even though all other functions in the TSM server were
  working fine. Waiting for recurrence.
 
 Export Node function apparently does not copy the 
 MAXNUMMP setting.
 
  A (relatively) long list of quirks in the ISC, which we forced
  ourselves to use while our Servergraph license was updated. Some
  of these were only related to Firefox 3.5.4. The worst was a Java
  problem that 'unchecked' the 3 'enable sessions' boxes in the
  'Sessions' display of the Server Properties window when you left the
  display and then came back, causing all sessions to be disabled and
  necessitating a server restart. Using IE, however, the ISC has
  become almost bearable and performs much better than previous
  versions.
 
 The Operational Reporter is not officially supported in Version 6,
 something we missed, but is easily modified to supply most of the
 info needed.
 
  We have not seen the dreaded huge increase in database size, and after
  setting the ALLOWREORGTABLE option we haven't had any log
  problems either. We are currently running full database backups on
  Monday, Wednesday, and Friday, with incrementals in between. A full DB
  backup of the 45GB database takes about 6 minutes to a TS1120 drive.
  As noted, the current size of our DB is around 45GB with about 2/3 of
  our 350 clients having been moved. However, the largest of them, several
  Windows file/print servers containing in the neighborhood

Upgrade from 6.1.2 to 6.1.2.1

2009-11-09 Thread Stefan Holzwarth
Just doing my first update of a TSM server with 6.1.2 on Windows 2008.
The IBM documentation is not clear about the post-install steps for the
package.

Do I have to run the dsmupgdx.exe utility in my use case or not?

I tried to execute the utility locally, but it cannot connect to the
server (file and print sharing activated for this reason, no firewall).
So I decided to start the TSM instance without it. The instance works, but
does not show an 'upgrade db done' message as in 5.x times.

Kind regards
Stefan Holzwarth


Re: How big is your V6.1 server log file?

2009-10-27 Thread Stefan Holzwarth
Have a look at IC62978 and its workaround ALLOWREORGTABLE NO for dsmserv.opt.
It helped a lot in our environment with uncontrolled growth of active log space.

Regards
Stefan Holzwarth

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf 
 Of Zoltan Forray/AC/VCU
 Sent: Tuesday, October 27, 2009 2:27 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: How big is your V6.1 server log file?
 
 Another failure with my V6.1.0.2 server.
 
 This morning the ONLY user (a total of 3 active nodes) of my V6.1.2.0
 server reported his backups failing with his dsmerror.log showing:
 10/26/2009 23:21:15 ANS1316E The server does not have enough recovery log
 space to continue the current operation
 The activelogsize is set to 6 and I run 3 FULL DB backups daily
 (followed by BACKUP VOLHIST).  This morning's check shows the log at under
 200MB used (a full DB backup had already run at 22:00:00).
 How big do I need to make this?  Should I max it out at 128G?
 What are your experiences/settings for activelogsize, for folks running a
 V6 server in production?
 


Re: TSM 6.1 and the ever expanding DB

2009-10-02 Thread Stefan Holzwarth
We want also to go into production with 6.1.2. All setup is finished.
But with about 20 nodes (all export/import) we continously have trouble with 
full active log and full archivelog. 
Our active log size is 16Gbyte and it seems to be enough for this small setup. 
But sometimes the log usage explodes and goes rapidly up to the limit.
I opened my second PMR

Regards
Stefan Holzwarth


 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf 
 Of Zoltan Forray/AC/VCU
 Sent: Friday, October 2, 2009 3:15 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: TSM 6.1 and the ever expanding DB
 
 Join the club.  I am beginning to wonder if anyone is successfully using
 V6.1, trouble-free.
 
 Monday I decided to put my 6.1.2 server into production and am wondering
 if this was a really bad decision.
 
 I have had to bounce it 5 times due to it simply hanging/going
 non-responsive even though the only activity has been exporting a large
 node from another server.
 
 The primary active log has been expanded 3 times (from 20GB to 60GB)
 even though I run 3 full DB backups daily.
 
 I had to reserve 300GB for the archivelog space.
 
 The DB has grown to 65GB for 4 nodes even though the original server with
 250 nodes is only 80GB used.
 
 The diagnostic information for DB/log errors is fairly useless.  The book
 says to go to DB2 to get it to explain the SQL? errors, even though in
 other places the book says not to mess with DB2 (pay no attention to the
 man behind the curtain..).  I am having to become way more
 knowledgeable in DB2 than I ever wanted to be.  (Damn it, Jim, I am the
 backup/TSM administrator - not a DBA! - apologies to DeForest Kelley)
 
 Just got my 5th SQL error this week (10/2/2009 8:49:46 AM ANR0162W
 Supplemental database diagnostic information:  -1:22003:-413 ([IBM][CLI
 Driver][DB2/LINUXX8664] SQL0413N  Overflow occurred during numeric data
 type conversion.  SQLSTATE=22003)
 
 I have to run 3 full DB backups every day (along with the now added
 3 BACKUP VOLHIST runs) just to try to keep ahead of what I consider normal
 daily activity (never had to do this on V5.x - daily DB incrementals used
 to be more than enough - heaven help me if I get this server up to the
 size of my biggest V5 server, which has a 150GB DB - I could never back up
 the DB fast enough to keep it from crashing).
 
 ---
 
 How about an informal poll.
 
 How many folks are running V6.1.2 servers in production?
 
 How big (occupancy?  DB size?  Number of active nodes?)
 
 What platform?
 
 
 
 From:
 Gill, Geoffrey L. geoffrey.l.g...@saic.com
 To:
 ADSM-L@VM.MARIST.EDU
 Date:
 10/01/2009 08:12 PM
 Subject:
 [ADSM-L] TSM 6.1 and the ever expanding DB
 Sent by:
 ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
 
 
 
 I'm finding that what I know about how the DB works in 5.5 doesn't
 really equal how it works in 6.1. On a Linux box I brought up to migrate
 clients to a 6.1 server I created a 20GB log and 100GB DB. There 'will
 be' about 150 nodes moved to this instance, but currently about 20 are
 backing up. My 5.5 server, on AIX 5.3, has a 125GB DB about 50% used and
 an 11GB log, and it backs up 500+ clients per day with no issues.
 
 
 
 Last night's backup on the new box is telling me there is no more space
 in the database, so backups are failing. After backing up systems for 30
 days? I find that way out of whack from how 5.5 works, and it seems to be
 telling me I need more than 10 times the space to keep 6.1 up. I can't
 believe 20 computers have eaten up 100GB of DB space in such a short
 period of time.
 
 
 
 I have a case open with IBM to discuss, but I'm wondering what others
 who are using 6.1 are finding. Perhaps I'm missing something in my setup
 that is causing the problem (I hope), because if not I don't want to even
 think about how much disk I have to add to the current box so I can
 upgrade it and make it run with the 400+ systems that will stay on it.
 
 
 
 Anyone else seeing this or have an idea what I may have missed?
 
 
 
 Geoff Gill
 TSM/PeopleSoft Administrator
 
 SAIC M/S-B1P
 
 4224 Campus Pt. Ct.
 
 San Diego, CA  92121
 (858)826-4062 (office)
 
 (858)412-9883 (blackberry)
 


Re: Re: TSM 6.1 and the ever expanding DB

2009-10-02 Thread Stefan Holzwarth
Seems to me I'm hit by IC63373, despite it supposedly being fixed in 6.1.2.
I will change the value and report results next week.
Regards
Stefan Holzwarth

ERROR DESCRIPTION:   
 If in the Tivoli Storage Manager server options file dsmserv.opt 
 the size of the active log (ACTIVELOGSIZE) is changed, 
 the DB2 configuration parameter NUM_LOG_SPAN does not get   
 updated correctly.   
  
 The NUM_LOG_SPAN setting is used by Tivoli Storage Manager to   
 manage how much of the active log a transaction can span.   
 This value should represent 90% of the number of primary log 
 files (LOGPRIMARY), which are each by default 512MB in size.   
  
 So for the default Tivoli Storage Manager active log size of 
 2GB, 4 primary log volumes are used and Tivoli Storage Manager   
 sets NUM_LOG_SPAN to 90% of this size (3 volumes).  When the 
 size of the active log (ACTIVELOGSIZE) is increased in the 
 dsmserv.opt file, NUM_LOG_SPAN is not updated to represent   
 90% of the new log size in DB2. In the worst case this can 
 crash the Tivoli Storage Manager server when a long-running   
 transaction spans more volumes than the NUM_LOG_SPAN value   
 allows.   

 LOCAL FIX:   
 The workaround is to manually calculate 90% of the log size and 
 update the DB2 database to correctly set NUM_LOG_SPAN.   
  
 Calculate the correct value of NUM_LOG_SPAN (each log file is fixed 
 at 512MB):   
 1. Take the new value of ACTIVELOGSIZE in dsmserv.opt and 
 divide it by 512MB. This value is the total number of log 
 volumes, reflected in the db2 value LOGPRIMARY.   
 2. Multiply LOGPRIMARY by 90%. This value should be reflected in 
 NUM_LOG_SPAN.   
  
 (alternate method involving slightly less arithmetic)   
 Calculate correct value of NUM_LOG_SPAN: 
 1. Use the following db2 command to determine the number of log 
 volumes used:   
db2 get db cfg for TSMDB1 
 2. Multiply the value for the LOGPRIMARY parameter by 90%.  This 
 value should be reflected in NUM_LOG_SPAN.   
  
 Update NUM_LOG_SPAN by issuing the following db2 command:   
 db2 update db cfg for TSMDB1 using NUM_LOG_SPAN newValue   
 You may need to restart the TSM server, which will restart the   
 db2 database as well.  
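The arithmetic in the local fix can be sketched as a small shell calculation. The 16GB ACTIVELOGSIZE below is an illustrative value, not from the posts; the final db2 command is shown only as a comment:

```shell
# Sketch of the NUM_LOG_SPAN calculation from the local fix above.
# ACTIVELOGSIZE value is illustrative; each DB2 log volume is fixed at 512MB.
ACTIVELOGSIZE_MB=16384                     # e.g. ACTIVELOGSIZE 16384 in dsmserv.opt
LOGPRIMARY=$((ACTIVELOGSIZE_MB / 512))     # number of primary log volumes
NUM_LOG_SPAN=$((LOGPRIMARY * 90 / 100))    # 90% of LOGPRIMARY, rounded down
echo "LOGPRIMARY=$LOGPRIMARY NUM_LOG_SPAN=$NUM_LOG_SPAN"
# prints: LOGPRIMARY=32 NUM_LOG_SPAN=28
# then apply it, e.g.:
#   db2 update db cfg for TSMDB1 using NUM_LOG_SPAN $NUM_LOG_SPAN
```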

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf 
 Of Kelly Lipp
 Sent: Friday, October 2, 2009 5:50 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: Re: TSM 6.1 and the ever expanding DB
 
 That last paragraph made my head hurt!  I had the opportunity to take a
 database class in college.  Didn't want to know it then, don't want to
 know it now.
 
 I recall one of the design centers for the DB2 thing was to ensure that a
 TSM admin didn't need to become a DB2 admin.  I don't even know the lingo!
 
 I'll echo Rick's comments: you pioneers, you go!  Those arrows don't hurt
 that much.  That which doesn't kill you makes you and all of us stronger.
 
 Kelly Lipp
 Chief Technical Officer
 www.storserver.com
 719-266-8777 x7105
 STORServer solves your data backup challenges. 
 Once and for all.
 
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] 
 On Behalf Of Richard Rhodes
 Sent: Friday, October 02, 2009 9:26 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] AW: TSM 6.1 and the ever expanding DB
 
 I've been watching this discussion with great interest, and more than a
 little fear.
 
 We are going to implement the v6 ISC/SC shortly on a standalone Win server,
 but we aren't planning to upgrade the TSM servers until next year.  A BIG
 thanks to all you bleeding edge types out there.
 
 IBM has an interesting/hard problem - TSM is used to back up TSM.  I assume
 the requirement for multiple backups before an archive log is deleted is to
 ensure that multiple backups occur for each archive log.  They are
 effectively throwing disk space at the archive logs to ensure they have
 good overlapping backups of them.
 
 I wonder if IBM isn't eventually going to have to implement some process
 that will periodically back up archive logs, make a second copy of them on
 different media, generate a Vol_Hist

TSM 6.1.2 DB Archivelog handling

2009-09-29 Thread Stefan Holzwarth
We just started our new TSM 6 environment and are having problems
controlling the amount of archived log files.
I could not find any parameter for setting the retention or number of
logfiles in the first and/or second archlog directory.
Also, full DB backups do not remove any of those files.

What am I missing?

Kind regards
Stefan Holzwarth


Re: TSM 6.1.2 DB Archivelog handling

2009-09-29 Thread Stefan Holzwarth
Thanks for that document - it's a missing piece in the log handling puzzle.
What do you think triggers the deletion of the archive logs - backup db or 
backup volhist?
I would guess backup volhist, and will sort the admin schedules accordingly.

Kind regards
Stefan Holzwarth
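If both a successful BACKUP DB and a BACKUP VOLHISTORY turn out to be required before archived logs are pruned, sequencing the admin schedules accordingly might look like the sketch below (schedule names, the device class name, and the times are made up for illustration; the command syntax follows the standard DEFINE SCHEDULE TYPE=ADMINISTRATIVE form):

```
define schedule DB_FULL type=administrative cmd="backup db devclass=TAPECLASS type=full" active=yes starttime=21:00
define schedule VOLHIST type=administrative cmd="backup volhistory" active=yes starttime=22:00
```

The point of the ordering is simply that the volhist backup runs after the DB backup has finished, so both conditions for log deletion are met in one nightly cycle.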



 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf 
 Of Erwann Simon
 Sent: Tuesday, September 29, 2009 11:30 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: TSM 6.1.2 DB Archivelog handling
 
 Hi,
 
 See also this technote: 
 http://www-01.ibm.com/support/docview.wss?uid=swg21399352
 It's been said that TSM also needs to successfully write its volhist 
 output (volhist.dat now!) file in order to allow deletion of archived logs, 
 even if the DB backup was successful.
 The volhist.dat file is now required for restoring the TSM DB.
 
 --
 Best regards / Cordialement / مع تحياتي
 Erwann SIMON
 
 
 Grigori Solonovitch wrote:
  You need to run at least 2 full backups to clean both the log and arc
  
  Grigori G. Solonovitch
  
  Senior Technical Architect
  
  Information Technology  Bank of Kuwait and Middle East  
 http://www.bkme.com
  
  Phone: (+965) 2231-2274  Mobile: (+965) 99798073  E-Mail: 
 g.solonovi...@bkme.com
  
  Please consider the environment before printing this Email
  
  
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] 
 On Behalf Of Stefan Holzwarth
  Sent: Tuesday, September 29, 2009 11:30 AM
  To: ADSM-L@VM.MARIST.EDU
  Subject: [ADSM-L] TSM 6.1.2 DB Archivelog handling
  
  We just started our new TSM 6 environment and are having problems
  controlling the amount of archived log files.
  I could not find any parameter for setting the retention or number of
  logfiles in the first and/or second archlog directory.
  Also, full DB backups do not remove any of those files.
  
  What am I missing?
  
  Kind regards
  Stefan Holzwarth
  
 


Re: Some advice about compression=yes to perform IMAGE backup

2009-09-20 Thread Stefan Holzwarth
I'm a big fan of compression on the client side!

Compression at the client can even give you better performance.
It depends on the data and your environment.

Some pros of client-side compression:

Disk storage pools on the TSM server are more effective because there is more space
It is the only option if you have no tapes with hardware compression
Less I/O at the TSM server (backup copypool, migration, reclamation)
Most CPUs in physical servers are underutilized and very powerful
Less network bandwidth needed (one of the possible bottlenecks)
We have very good experience with SQL TDP compression rates
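Client-side compression is switched on per node in the client options file. A minimal sketch of the relevant lines (standard BA-client option names; whether COMPRESSALWAYS suits your data is a per-environment judgment):

```
* dsm.opt fragment - enable client-side compression
COMPRESSION     YES
* keep objects that grow when compressed from being resent uncompressed
COMPRESSALWAYS  YES
```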


Regards
Stefan Holzwarth

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf 
 Of Skylar Thompson
 Sent: Sunday, September 20, 2009 6:25 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: Some advice about compression=yes to perform IMAGE backup
 
 admbackup wrote:
  Hi.
 
  I need some advice about using compression=yes for image backups.
 
 
  I need to perform image backups of multiple disks on a 
  Windows 2008 server.  Most of them are about 1.45TB in size.
 
  We are running out of tapes and I was thinking of using 
  compression.  I know that it is recommended to set 
  compressalways=yes on the TSM server when using compression, 
  but I am not using compression for all the backups.  Is this 
  parameter transparent for the client servers that don't use 
  compression=yes?
 
  Also, how recommended is using compression for image 
  backups?  I know that it is going to increase the time that 
  the backup takes, but I have a large time window to perform 
  those image backups (all weekend).
 
 
 What kind of tapes do you use? You should probably stick with hardware
 compression if you can. Remember to not only think of the amount of time
 the backup takes, but the amount of time the restore is going to take.
 Hardware compression is going to buy you performance, but software
 compression is going to cost you performance.
 
 --
 -- Skylar Thompson (skyl...@u.washington.edu)
 -- Genome Sciences Department, System Administrator
 -- Foege Building S048, (206)-685-7354
 -- University of Washington School of Medicine
 


Returncode for a TDP CMD Script

2009-09-11 Thread Stefan Holzwarth
Hi,

we are using a Windows CMD script for TDP backups.
On the TSM server the schedule is defined as:

tsm: TSMA> q sched tdp sql23full_SERVER33 f=d
Policy Domain Name: TDP
 Schedule Name: SQL23FULL_SERVER33
   Description: FULL Backup SQL d...@23:00
Action: Command
   Options:
   Objects: c:\adsm32\tdpsql\sqlbackup.cmd
/tdpmode:full
  Priority: 5


The tdpsql.exe return code is stored in a variable %tdperror% within the
script sqlbackup.cmd and used for sending an error email. The CMD job
ends with the line

exit /b %tdperror%   which should return the original error code to the
TSM scheduler.

Sometimes the backup gets an error and the error email is sent
correctly.
But from the TSM server's side the schedule is always successful.

I don't have an idea where to look further.
Instead of using exit /b I used a freeware tool, errorlvl.exe, to set the
return code manually - no change in behavior.

Kind regards
Stefan Holzwarth
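For comparison, the intended pattern in a POSIX-shell analog (the original is Windows CMD using %tdperror% and exit /b; fake_tdpsql below is a made-up stand-in for tdpsql.exe failing, not a real command):

```shell
# Stand-in for tdpsql.exe returning a failure code (hypothetical).
fake_tdpsql() { return 4; }

run_backup() {
    fake_tdpsql
    tdperror=$?                      # CMD: set tdperror=%errorlevel%
    if [ "$tdperror" -ne 0 ]; then
        echo "backup failed rc=$tdperror"   # where the alert mail would be sent
    fi
    return "$tdperror"               # CMD: exit /b %tdperror%
}

run_backup
rc=$?
echo "scheduler sees rc=$rc"
# prints: backup failed rc=4
#         scheduler sees rc=4
```

If this pattern is correct and the scheduler still records success, the suspicion would fall on how the scheduler invokes the script (e.g. an intermediate cmd.exe swallowing the code) rather than on the script itself.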


Extended or basic edition for TDP only setup

2009-02-09 Thread Stefan Holzwarth
Hi, 

since TSM 6.1 brings deduplication only in the Extended Edition, I wonder
whether it would help to split our TSM server (basic license) into a
basic one for standard backup and an extended one for TDP activity.  As I
remember, TDP agents cost the same regardless of the server license.
Is this a good idea?

Regards
Stefan Holzwarth


Re: SQL TDP lan free backup on VMWARE

2009-01-28 Thread Stefan Holzwarth
We back up 3 different Exchange servers (DL380 G5) one after another - all with 
identical results over a 1Gb Ethernet NIC - see the example below.
The TSM server is also Windows x86, using only disks.

01/18/2009 18:05:06 
=
01/18/2009 18:05:06 Request   : Backup
01/18/2009 18:05:06 SG List   : *  
01/18/2009 18:05:06 Backup Type   : FULL  
01/18/2009 18:05:06 Database Name :   
01/18/2009 18:05:06 Buffers   : 3  
01/18/2009 18:05:06 Buffersize: 1024  
01/18/2009 18:05:06 Exchange Server   : VEX01003  
01/18/2009 18:05:06 TSM Node Name :   
01/18/2009 18:05:06 TSM Options File  : 
c:\adsm32\TDPExchange\dsm_VEX01003EXCH.opt  
01/18/2009 18:05:06 Mount Wait: Yes  
01/18/2009 18:05:06 Quiet : No  
01/18/2009 18:05:06 
-
01/18/2009 19:19:37 Total storage groups requested for backup:  2
01/18/2009 19:19:37 Total storage groups backed up: 2
01/18/2009 19:19:37 Total storage groups expired:   52
01/18/2009 19:19:37 Total storage groups excluded:  0
01/18/2009 19:19:37 Throughput rate:52,493.79 Kb/Sec
01/18/2009 19:19:37 Total bytes transferred:240,195,943,048
01/18/2009 19:19:37 Elapsed processing time:4,468.46 Secs
01/18/2009 19:19:37 --- SCHEDULEREC OBJECT END EXCHBACKUP 01/18/2009 19:19:37


Optfile used:
NODename  VEX01003EXCH
CLUSTERnode   YES
COMMMethodTCPip
TCPPort   1502
TCPServeraddress  SDE14001-GB4
TCPWindowsize 256
TCPBuffSize   512
COMPRESSION   OFF
RESOURCEUTILIZATION   3
...

Regards,
Stefan Holzwarth

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf 
 Of Schaub, Steve
 Sent: Tuesday, January 27, 2009 6:02 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: SQL TDP lan free backup on VMWARE
 
 Howard,
 
 Can you share the Exchange server physical config (cpu/mem/disk) as well
 as the .cfg you use for these Exchange servers?  Are they x86 Ex2003 or
 x64?  Did it run this well out of the box or did you need to tweak it to
 reach this point?  We have several Ex2003 servers, one of which has a
 total of 909GB in 4 SGs and is taking 16.5hrs to do a full backup (.857
 GB/min).  It is gig-attached, and the TSM server is using a multi-gig
 trunked line.  I just can't seem to squeeze any more speed out of this
 backup, and I would love to find out how you are doing it.
 
 Thanks,
 
 Steve Schaub
 Systems Engineer, Windows
 BlueCross BlueShield of Tennessee
 steve_sch...@bcbst.com
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] 
 On Behalf Of
 Howard Coles
 Sent: Monday, January 26, 2009 5:13 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] SQL TDP lan free backup on VMWARE
 
 We have 4 Storage Groups per node, and 3 nodes.  However, 1 node is
 about 600 GB, and the other two add up to about 600 GB.  They all back up
 at the same time of night, when they do a full backup on each box.  We
 have other Exchange nodes that back up at various times, but these three
 (being the largest) I have backing up at a time when they can have more
 bandwidth.  We back them up via the LAN (1 GB NICs each) directly to
 disk.  They all take about 4 (sometimes 5) hours each to back up unless
 the LAN is congested, which means the total backup time for the 1.2 TB is
 around 4 to 4.5 hours, which is about 4.5 - 5 GB per min.  Which means
 we're pushing the TSM server's NIC to about the edge. :-D
 
 The TSM server has 1 1GB NIC (teamed in failover mode).  Note: the 1
 largest node only takes 4 hours to do a full backup.  We used to have
 them in a direct-to-SAN backup, but discovered that doesn't work much
 faster than doing the LAN backup, and has the added burden of tying up 1
 or more tape drives per node, which could be doing other things.  Now,
 if I had a 1.2 TB single system, this would be a no-brainer for direct
 over-SAN backup. But until it gets bigger than 8-9 hundred GB it really
 does work out to be equal either way.
 
 See Ya'
 Howard
 
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] 
 On Behalf
  Of Len Boyle
  Sent: Monday, January 26, 2009 3:11 PM
  To: ADSM-L@VM.MARIST.EDU
  Subject: Re: [ADSM-L] SQL TDP lan free backup on VMWARE
  
  Howard,
  
  Can you give the list a few more details on the Exchange system
  backup of 1.2 TB in about 4 or so hours.
  
  How many Exchange storage groups do you have, and how many of them
  back up in parallel?
  How large are they?
  Are you using multiple NICs of 1Gb each or a 10Gb NIC?
  
  Thanks len
  
  -Original Message-
  From: ADSM

TDP 5.5.2 and SQL2005 with SQL2008 Components

2009-01-26 Thread Stefan Holzwarth
From Installation Update - DP SQL V5.5.2 PTF

The following prerequisites are required and are installed during setup
if they are not already installed.
Prerequisites
* Microsoft Core XML Services (MSXML) 6.0
* Microsoft SQL Server 2008 Management Objects
* Microsoft SQL Server 2008 Native Client
* Microsoft SQL Server System CLR Types
* Microsoft .NET Framework 2.0

I'm a little bit concerned about installing SQL 2008 components for SQL
Server 2005, which seem to be required since 5.5.2.
Did anyone have trouble doing this?

Regards
Stefan Holzwarth


AW: TDP 5.5.2 and SQL2005 with SQL2008 Components

2009-01-26 Thread Stefan Holzwarth
Hi Del, that is good news.
Thank you for your explanation.
Kind regards,
Stefan Holzwarth 

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu]
 On Behalf Of Del Hoobler
 Sent: Monday, 26 January 2009 16:36
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: TDP 5.5.2 and SQL2005 with SQL2008 Components
 
 Hi Stefan,
 
 Just so you know... this was done under recommendation and 
 guidance from
 Microsoft.
 Microsoft changed their SQL Server 2008 libraries in such a 
 way that they
 would not allow you to connect to both SQL Server 2005 and 
 SQL Server 2008
 with the same executable unless you used the SQL Server 2008 
 libraries.
 Microsoft instructed us (and other 3rd-party vendors) to prereq and
 redistribute these. They stated that they would not support
 upward compatibility of SQL Server 2005 libraries to connect to
 a SQL Server 2008 server. Yes, TSM could have shipped multiple
 DP/SQL executables, one for SQL Server 2005 and one for SQL 
 Server 2008,
 but that would complicate things for many customers. In addition,
 these SQL Server 2008 components can coexist with the SQL Server 2005
 components, so it should not affect your SQL Server 2005 server
 or any other applications still using SQL Server 2005 
 components at all.
 If you do see any problems, we certainly want to know... and we will
 work with you and Microsoft to find any issues. We do not expect it.
 
 Thanks,
 
 Del
 
 
 
 ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 01/26/2009
 08:48:50 AM:
 


AW: Interesting problem in MS-Win restore -- anyone seen something like this before?

2008-03-10 Thread Stefan Holzwarth
In the past we had a similar problem twice:
The first time (TSM server on MVS), the IP sequence number wrapped around and
was not handled properly by MVS, so large restores always stopped at different
positions.
The second time (TSM server on NT), we realized that our network backbone was
corrupting our IP packets without CRC errors! We could see the problem by
adding CRC to the communication layer within TSM. The restore at that time (an
Exchange DB) was no longer possible, since the backup data was not correct.
That restore always stopped at the same position.
Regards
Stefan Holzwarth
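The backbone-corruption case is worth illustrating: the TCP/IP checksum is a 16-bit ones'-complement sum and is blind to some classes of damage (for example, two 16-bit words swapped in transit), while a CRC over the payload, the kind of end-to-end check that enabling CRC in TSM's communication layer adds, catches it. A minimal sketch, not TSM's actual wire format:

```python
import zlib

def internet_checksum(data: bytes) -> int:
    # 16-bit ones'-complement sum as used by TCP/IP; note that it is
    # insensitive to the order of the 16-bit words being summed.
    if len(data) % 2:
        data += b"\x00"
    s = 0
    for i in range(0, len(data), 2):
        s += (data[i] << 8) | data[i + 1]
        s = (s & 0xFFFF) + (s >> 16)  # fold the carry back in
    return ~s & 0xFFFF

payload = b"ABCDEFGH"
# Corruption that swaps the first two 16-bit words: the ones'-complement
# sum is unchanged, so the transport checksum would accept the packet...
corrupt = b"CDABEFGH"

same_tcp = internet_checksum(payload) == internet_checksum(corrupt)
# ...but an end-to-end CRC over the payload still detects the damage.
same_crc = zlib.crc32(payload) == zlib.crc32(corrupt)
```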

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
 On Behalf Of Kauffman, Tom
 Sent: Monday, 10 March 2008 15:05
 To: ADSM-L@VM.MARIST.EDU
 Subject: Interesting problem in MS-Win restore -- anyone seen 
 something like this before?
 
 We've been trying to restore a 38 GB file to a Win2003 SP1 
 server; the restore comes to a near halt at 31.8 GB.
 
 At the TSM server side (5.5.0.0) we see 'sendw', and if we 
 leave everything alone the restore continues at an incredibly 
 slow pace (20 MB in 8 hours).
 
 On the client side - CPU utilization drops to between 1 and 3 
 percent, but task manager shows 'system idle process' using 
 99% of the cpu.
 
 The results have been the same with both the TSM 5.1.6 client 
 and the 5.5 client, using both the GUI and the command line.
 
 The only errors that show up on either system occur when we 
 kill the restore.
 
 And (FWIW) there is no anti-virus scanner running while the 
 restore is running.
 
 I'm lost - any ideas/suggestions?
 
 TIA
 
 Tom Kauffman
 NIBCO, Inc
 
 
 CONFIDENTIALITY NOTICE: This email and any attachments are for the
 exclusive and confidential use of the intended recipient. If 
 you are not
 the intended recipient, please do not read, distribute or 
 take action in
 reliance upon this message. If you have received this in error, please
 notify us immediately by return email and promptly delete this message
 and its attachments from your computer system. We do not waive
 attorney-client or work product privilege by the transmission of this
 message.
 


TDP SQL - cleanup of deleted databases

2008-03-05 Thread Stefan Holzwarth
We are using TDP for SQL with MS SQL 2005 and have set up our policies.
One example:
Policy Domain Name: TDP
   Policy Set Name: STANDARD
   Mgmt Class Name: SQLDBDATAOBJECTS
   Copy Group Name: STANDARD
   Copy Group Type: Backup
  Versions Data Exists: No Limit
 Versions Data Deleted: No Limit
 Retain Extra Versions: 30
   Retain Only Version: 60
 Copy Mode: Modified
Copy Serialization: Shared Static

I'm not clear about the following thing:
If we delete a DB within the SQL instance, does it disappear after 60
days? (as files and directories do with the BA client)
Or do we have to manually inactivate the last active backup of a
database that has already been deleted from MS SQL?
(We do our SQL backups with target=*.)

Regards
Stefan Holzwarth
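Whether TDP marks a dropped database's last backup inactive on its own is exactly the open question above; but once an object's last backup is inactive, the Retain Only Version clock behaves like this sketch (my reading of the policy shown, worth verifying):

```python
from datetime import date, timedelta

RETONLY_DAYS = 60  # "Retain Only Version" from the policy shown above

def only_version_expires(deactivated_on: date) -> date:
    # Retain Only Version starts counting only once the last backup of
    # an object becomes inactive; an object whose last backup is still
    # *active* never expires, which is the crux of the question.
    return deactivated_on + timedelta(days=RETONLY_DAYS)

gone_after = only_version_expires(date(2008, 3, 1))
```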


AW: Backing up PST files

2008-02-14 Thread Stefan Holzwarth
We have been using subfile backup for PST files for about two years and are
happy with it.

The fileserver has around 150 user homes, all with PSTs. Most users have PSTs
between 1 and 2 GB, some above.
We needed subfile backup because we did the daily backup, about 500 GB of data
in total, over a 2 Mbit connection.

Some problems so far:
A full subfile cache directory, because the TSM client did no proper
housekeeping of this dir (we had to delete all files and start fresh after a
year).
Restores of PST files are sometimes a little problematic if the administrator
has no NTFS access to the file.

I think the size limit for the cache dir can be overcome by using more logical
nodes on that server, each with its own cache dir.

Regards
Stefan Holzwarth

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
 On Behalf Of Paul Zarnowski
 Sent: Thursday, 14 February 2008 20:03
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: Backing up PST files
 
 At 12:33 PM 2/14/2008, Del Hoobler wrote:
 The earlier description on subfile backup is not correct.
 
 Thanks Del, Andy, Bill..  I thought I recalled something 
 about this from
 one of the technical sessions I attended at SHARE or one of the Oxford
 Symposiums.  What you said matches what I had originally thought.
 
 All of this confirms the fact that subfile backup doesn't work for
 files > 2GB, which leaves me with my problem of how to manage 
 and backup PST
 files, when they can (and will) grow > 2GB.  Bill, thanks for 
 your comments
 on this.  I'll pursue ez-extract and try to figure out how it 
 would work in
 our environment.
 
 If anyone else has any ideas on how our users can keep their PST files
 below 2GB, or how effectively subfile backup works with > 2GB 
 PST files, I'd
 appreciate hearing of them. Thanks.
 
 ..Paul
 
 
 
 --
 Paul Zarnowski                          Ph: 607-255-4757
 Manager, Storage Services               Fx: 607-255-8521
 719 Rhodes Hall, Ithaca, NY 14853-3801  Em: [EMAIL PROTECTED]
 


AW: AW: AW: NetApp backup takes too long

2007-12-09 Thread Stefan Holzwarth
The current TSM journaling agent watches local disk I/O, so it has to run on
the system that does the I/O: the NAS device itself.
Since neither EMC nor NetApp opens their appliances to run agents that watch
I/O, there is no way to do this.
Both provide an API (I believe it is named the content API) that supplies
external virus scanners with information about what should be checked in real
time. If the TSM journaling agent supported that API, it should work. You
could even implement a kind of CDP (continuous data protection).
Regards
Stefan Holzwarth

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
 On Behalf Of Wanda Prather
 Sent: Saturday, 8 December 2007 20:41
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: AW: AW: NetApp backup takes too long
 
 Yes, but in this case the TSM backup client is actually 
 running on a Windows
 host, yes?
 Can journaling be implemented in this case?
 
 
 
 On 12/7/07, Stefan Holzwarth [EMAIL PROTECTED] wrote:
 
  Netapp (or EMC NAS) devices do not allow to run journaling agents.
  Regards
  Stefan Holzwarth
   -Original Message-
   From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
   On Behalf Of Steve Stackwick
   Sent: Thursday, 6 December 2007 16:41
   To: ADSM-L@VM.MARIST.EDU
   Subject: Re: AW: NetApp backup takes too long
  
   You could also investigate journaling on the Windows 
 server. If the
   number of files changing daily is small, journaling could 
 cut down on
   the noodle through the filesystem delay that you're seeing.
  
   Steve
  
   On 12/6/07, Stefan Holzwarth [EMAIL PROTECTED] wrote:
We had a similiar setup and used 5 backupjobs for each
   volume at the same time.
For every volume of the nas server we split the work logicaly.
So batch 1 took all directories starting with a-e, bath 2
   all from f to h,
We could backup our nas device in about 12 hours with 
 11mio files.
   
Regards
Spex
   
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
 On Behalf Of Haberstroh, Debbie (IT)
 Sent: Thursday, 6 December 2007 14:55
 To: ADSM-L@VM.MARIST.EDU
 Subject: NetApp backup takes too long

 Good morning,

 I could use some suggestions for improving the backup time
 for our Network Appliance.  Below is the write up that my Sys
 Admin submitted describing the problem.  Thanks for the help.

 Situation:  We have a Network Appliance (NAS) hosting
 approximately 8 million Windows files (CIFS).  Due to disk
 constraints, we are not able to use snapshots and due to some
 other customer induced limitations, we cannot use NDMP for
 backups.  We have implemented a proxy/redirection server
 that backs up the CIFS files via a unc path name to a TSM
 5.33 host running AIX.  Our issue is in walking through 8
 million files per night in a backup job.  The nightly backup
 delta is approximately 40GB.  However, just to access and
 check 8 million files to see if they meet the backup criteria
 is taking too much time.  The CIFS backup is split into 3
 separate batch jobs that run simultaneously.  The longest job
 (about 3 million files) takes almost 20 hours to run.  Would
 NIC teaming gain us any time savings during the backup?  I
 feel the bottleneck may be our AIX system since the Windows
 server has to get the meta data for the CIFS file, check it
 against the TSM database, and determine if that file needs to
 be backed up.  That is a lot of traffic between Windows host,
 TSM server, and Network Appliance for every single file.
 During the backup time, the CPU is at about 70% on the
 Windows host, and the NIC is rarely higher than 50%.

 TSM Server Information:
 We are running TSM 5.3.3 on AIX 5.3.  The server is an IBM
 7026-6H1, 4 processors and only 2 Gb Ram.  The TSM database
 is almost 200 Gb with 300 clients.

 Windows Server Information:
 We are currently using the Windows TSM client version 5.33c
 under Windows 2003 Server Standard Edition on an HP DL380
 dual 2.8 GHz Xeon processor with 2.5 GB of RAM.  We have
 three batch files running the DSMC command line utility
 scheduled by the Windows scheduler.  We have a dual port HP
 NC7781 NIC card.  We are using only one port connected at 1GB.


 Debbie Haberstroh

   
  
  
   --
   Stephen Stackwick
   Jacob  Sundstrom, Inc.
   401 East Pratt St., Suite 2214
   Baltimore, MD 21202-3003
   (410) 539-1135 * (866) 539-1135
   [EMAIL PROTECTED]
  
 
 


AW: AW: NetApp backup takes too long

2007-12-07 Thread Stefan Holzwarth
Netapp (or EMC NAS) devices do not allow to run journaling agents.
Regards 
Stefan Holzwarth
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
 On Behalf Of Steve Stackwick
 Sent: Thursday, 6 December 2007 16:41
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: AW: NetApp backup takes too long
 
 You could also investigate journaling on the Windows server. If the
 number of files changing daily is small, journaling could cut down on
 the noodle through the filesystem delay that you're seeing.
 
 Steve
 
 On 12/6/07, Stefan Holzwarth [EMAIL PROTECTED] wrote:
  We had a similiar setup and used 5 backupjobs for each 
 volume at the same time.
  For every volume of the nas server we split the work logicaly.
  So batch 1 took all directories starting with a-e, bath 2 
 all from f to h,
  We could backup our nas device in about 12 hours with 11mio files.
 
  Regards
  Spex
 
   -Original Message-
   From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
   On Behalf Of Haberstroh, Debbie (IT)
   Sent: Thursday, 6 December 2007 14:55
   To: ADSM-L@VM.MARIST.EDU
   Subject: NetApp backup takes too long
  
   Good morning,
  
   I could use some suggestions for improving the backup time
   for our Network Appliance.  Below is the write up that my Sys
   Admin submitted describing the problem.  Thanks for the help.
  
   Situation:  We have a Network Appliance (NAS) hosting
   approximately 8 million Windows files (CIFS).  Due to disk
   constraints, we are not able to use snapshots and due to some
   other customer induced limitations, we cannot use NDMP for
   backups.  We have implemented a proxy/redirection server
   that backs up the CIFS files via a unc path name to a TSM
   5.33 host running AIX.  Our issue is in walking through 8
   million files per night in a backup job.  The nightly backup
   delta is approximately 40GB.  However, just to access and
   check 8 million files to see if they meet the backup criteria
   is taking too much time.  The CIFS backup is split into 3
   separate batch jobs that run simultaneously.  The longest job
   (about 3 million files) takes almost 20 hours to run.  Would
   NIC teaming gain us any time savings during the backup?  I
   feel the bottleneck may be our AIX system since the Windows
   server has to get the meta data for the CIFS file, check it
   against the TSM database, and determine if that file needs to
   be backed up.  That is a lot of traffic between Windows host,
   TSM server, and Network Appliance for every single file.
   During the backup time, the CPU is at about 70% on the
   Windows host, and the NIC is rarely higher than 50%.
  
   TSM Server Information:
   We are running TSM 5.3.3 on AIX 5.3.  The server is an IBM
   7026-6H1, 4 processors and only 2 Gb Ram.  The TSM database
   is almost 200 Gb with 300 clients.
  
   Windows Server Information:
   We are currently using the Windows TSM client version 5.33c
   under Windows 2003 Server Standard Edition on an HP DL380
   dual 2.8 GHz Xeon processor with 2.5 GB of RAM.  We have
   three batch files running the DSMC command line utility
   scheduled by the Windows scheduler.  We have a dual port HP
   NC7781 NIC card.  We are using only one port connected at 1GB.
  
  
   Debbie Haberstroh
  
 
 
 
 -- 
 Stephen Stackwick
 Jacob  Sundstrom, Inc.
 401 East Pratt St., Suite 2214
 Baltimore, MD 21202-3003
 (410) 539-1135 * (866) 539-1135
 [EMAIL PROTECTED]
 


AW: NetApp backup takes too long

2007-12-06 Thread Stefan Holzwarth
We had a similar setup and used 5 backup jobs per volume at the same time.
For every volume of the NAS server we split the work logically,
so batch 1 took all directories starting with a-e, batch 2 all from f to h, ...
We could back up our NAS device, with 11 million files, in about 12 hours.

Regards
Spex
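The split by first letter can be sketched like this; the ranges and directory names are illustrative, not the exact ones used:

```python
# Assign each top-level directory to one of several parallel backup
# jobs by the first letter of its name (ranges are illustrative).
RANGES = [("a", "e"), ("f", "h"), ("i", "o"), ("p", "s"), ("t", "z")]

def job_for(dirname: str) -> int:
    first = dirname[0].lower()
    for i, (lo, hi) in enumerate(RANGES):
        if lo <= first <= hi:
            return i
    return len(RANGES) - 1  # digits and other names go to the last job

assignments = {d: job_for(d) for d in
               ["accounting", "engineering", "marketing", "sales", "users"]}
```

In practice each job then maps to its own dsmc invocation (or logical node) with a matching include list, so the jobs can run simultaneously against the same share.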

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
 On Behalf Of Haberstroh, Debbie (IT)
 Sent: Thursday, 6 December 2007 14:55
 To: ADSM-L@VM.MARIST.EDU
 Subject: NetApp backup takes too long
 
 Good morning,
 
 I could use some suggestions for improving the backup time 
 for our Network Appliance.  Below is the write up that my Sys 
 Admin submitted describing the problem.  Thanks for the help.
 
 Situation:  We have a Network Appliance (NAS) hosting 
 approximately 8 million Windows files (CIFS).  Due to disk 
 constraints, we are not able to use snapshots and due to some 
 other customer induced limitations, we cannot use NDMP for 
 backups.  We have implemented a proxy/redirection server 
 that backs up the CIFS files via a unc path name to a TSM 
 5.33 host running AIX.  Our issue is in walking through 8 
 million files per night in a backup job.  The nightly backup 
 delta is approximately 40GB.  However, just to access and 
 check 8 million files to see if they meet the backup criteria 
 is taking too much time.  The CIFS backup is split into 3 
 separate batch jobs that run simultaneously.  The longest job 
 (about 3 million files) takes almost 20 hours to run.  Would 
 NIC teaming gain us any time savings during the backup?  I 
 feel the bottleneck may be our AIX system since the Windows 
 server has to get the meta data for the CIFS file, check it 
 against the TSM database, and determine if that file needs to 
 be backed up.  That is a lot of traffic between Windows host, 
 TSM server, and Network Appliance for every single file.  
 During the backup time, the CPU is at about 70% on the 
 Windows host, and the NIC is rarely higher than 50%.
 
 TSM Server Information:
 We are running TSM 5.3.3 on AIX 5.3.  The server is an IBM 
 7026-6H1, 4 processors and only 2 Gb Ram.  The TSM database 
 is almost 200 Gb with 300 clients.
 
 Windows Server Information:
 We are currently using the Windows TSM client version 5.33c 
 under Windows 2003 Server Standard Edition on an HP DL380 
 dual 2.8 GHz Xeon processor with 2.5 GB of RAM.  We have 
 three batch files running the DSMC command line utility 
 scheduled by the Windows scheduler.  We have a dual port HP 
 NC7781 NIC card.  We are using only one port connected at 1GB.
 
 
 Debbie Haberstroh
 


Preallocate Volumes in Storagepools

2007-11-06 Thread Stefan Holzwarth
Because of heavy fragmentation in our FILE-type storage pools, we are
thinking about using predefined, preallocated volumes.
What's your experience with preallocated volumes of type FILE?
I've read about some problems because TSM deletes the newly defined volume
before use and lets the volume grow instead.
See http://www.adsm.org/lists/html/ADSM-L/2004-12/msg00306.html

Regards
Stefan Holzwarth
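For reference, a preallocated FILE volume can be defined from the administrative command line; the FORMATSIZE parameter (size in MB) below is from memory of the 5.x Administrator's Reference, so verify with HELP DEFINE VOLUME at your level before relying on it:

```
define volume FILEPOOL /tsm/stg/vol001.dsm formatsize=10240
query volume /tsm/stg/vol001.dsm format=detailed
```

A predefined volume of fixed size sidesteps the grow-on-demand behavior that causes the fragmentation described above.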


AW: Lost in TSM licensing

2007-06-27 Thread Stefan Holzwarth
Some (DMZ) servers have no FTP connection into our company network,
and scripts on the clients are difficult to update.
 
Regards
Stefan Holzwarth
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
 On Behalf Of William Boyer
 Sent: Wednesday, 27 June 2007 20:15
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: Lost in TSM licensing
 
 Instead of having to restore each *.LOG file every day, why 
 not just code a POSTSCHEDULECMD that FTP's all the *.LOG files to a
 central server as nodename.dsm*.log 
 
 Bill Boyer
 Life isn't about how fast you run, or how high you climb but 
 how well you bounce - ??
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
 On Behalf Of Stefan Holzwarth
 Sent: Wednesday, June 27, 2007 1:50 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: AW: Lost in TSM licensing
 
 I do that every morning, fully automated:
 the dsmsched.log of every TSM node is restored under an individual 
 name to a central server for reporting of backup problems.
 It's not that difficult.
 But you are right: IBM should solve the problem by having the client 
 report the CPU count to the TSM server.
 Regards,
 Stefan Holzwarth
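A sketch of one such restore, with a hypothetical node name and paths; -virtualnodename lets you access another node's data (you are prompted for that node's password), and the destination carries the node name so the logs don't collide:

```
dsmc restore "\\srv01\c$\Program Files\Tivoli\TSM\baclient\dsmsched.log" ^
  "d:\schedlogs\srv01.dsmsched.log" -virtualnodename=SRV01
```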
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
  On Behalf Of Gill, Geoffrey L.
  Sent: Wednesday, 27 June 2007 18:30
  To: ADSM-L@VM.MARIST.EDU
  Subject: Re: Lost in TSM licensing
  
  So what you are saying is I still have to manually restore 
 hundreds of 
  files from hundreds of systems to a location, and hopefully 
 the file 
  name is different on each one or it will be overwritten every time I 
  restore it, and then manually go through each file to get the info. 
  Not to mention the fact that it could be some folks may not 
 even have the 
  web interface up and I actually don't have access to 
 restore the file.
  
  Sorry, for me, way too much work. 
  
   
  Geoff Gill
  TSM Administrator
  PeopleSoft Sr. Systems Administrator
  SAIC M/S-G1b
  (858)826-4062
  Email: [EMAIL PROTECTED]
  
  -Original Message-
  From: [EMAIL PROTECTED]
  [mailto:[EMAIL PROTECTED] On Behalf Of Matthew Warren
  Sent: Wednesday, June 27, 2007 9:20 AM
  To: ADSM-L@VM.MARIST.EDU
  Subject: Re: Lost in TSM licensing
  
  Hmm, there is always a scheduled TSM command that sends 
 its output 
  to a file you have rights to look at later?
  
  On occasion, for small ad hoc tasks, I have used the TSM scheduler to 
  initiate a command on a client, when I've needed to do the 
 same thing 
  across a lot of nodes.
  
  as TSM administrator, you could send the output to a file, let tsm 
  back it up, and then restore it elsewhere to get at it, if 
 you really 
  really had to!
  
  Matt.
  http://tsmwiki.com/tsmwiki
  
  
  
  
 From: [EMAIL PROTECTED]
 To: ADSM-L
 Sent by: ADSM-L@VM.MARIST.EDU
 Date: 27/06/2007 16:07
 Subject: Re: [ADSM-L] Lost in TSM licensing
 Please respond to: ADSM-L@VM.MARIST.EDU
 
   M$ offers a tool named MSINFO32 that returns a lot of information
  about a server.
  You can gather information from a remote server - provided you have
  enough rights on the remote machine.
  
  Ahhh, the ol' don't-have-rights issue. Which is why I keep 
 saying the 
  best way to get the info is to build it into the client so 
 it can get 
  and report it on the tsm server. Otherwise with hundreds, if not 
  thousands of machines, I doubt any one person is going to have a 
  simple way to get the info. It turns into a multi-day manual task 
  which is absolutely stupid.
  
  Geoff Gill
  TSM Administrator
  PeopleSoft Sr. Systems Administrator
  SAIC M/S-G1b
  (858)826-4062
  Email: [EMAIL PROTECTED]
  
  
  
  This message and any attachments (the message) is intended solely 
  for the addressees and is confidential.
  If you receive this message in error, please delete it and 
 immediately 
  notify the sender. Any use not in accord with its purpose, any 
  dissemination or disclosure, either whole or partial, is prohibited 
  except formal approval. The internet can not guarantee the 
 integrity 
  of this message.
  BNP PARIBAS (and its subsidiaries) shall (will) not therefore be 
  liable for the message if modified.
  

Versions data deleted

2007-06-13 Thread Stefan Holzwarth
A short question, because I'm not sure:

Does Versions Data Deleted=0 mean:

if a file is deleted, its backup version is also deleted immediately during
the next backup/expiration,
despite the duration setting for Retain Only Version?

Kind regards
Stefan Holzwarth
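My own mental model of the interaction, for comparison (an assumption to be confirmed against the policy reference, not a statement of documented behavior):

```python
def versions_kept_after_delete(inactive_versions: int, verdeleted: int) -> int:
    # Model: once the client reports the file deleted, Versions Data
    # Deleted caps how many inactive versions survive the next
    # expiration; the time-based Retain Only setting can shorten
    # retention further but not extend it past this cap.
    return min(inactive_versions, verdeleted)

kept = versions_kept_after_delete(5, 0)  # with Versions Data Deleted=0
```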


AW: How do you backup your Celerra NAS ?

2007-06-13 Thread Stefan Holzwarth
Two virtual machines (W2k3 R2 x64) do an incremental backup (with compression)
each night of the complete NAS (~8 million files and ~4.5 TB). Each VM has 5
TSM nodes defined, each of which backs up a part of every filesystem, e.g.
first the directories starting with a-f, then g-i, j-o, p-u, ... That gives
very well balanced usage.
We use the Celerra only for CIFS shares.

Regards, Stefan Holzwarth

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
 On Behalf Of Zoltan Forray/AC/VCU
 Sent: Wednesday, 13 June 2007 15:48
 To: ADSM-L@VM.MARIST.EDU
 Subject: How do you backup your Celerra NAS ?
 
 Looking for feedback on how folks back up their EMC Celerra 
 NAS.  Are you
 using NDMP?  Just regular backups via Windoze nodes?
 
 
 Zoltan Forray
 Virginia Commonwealth University
 Office of Technology Services
 University Computing Center
 e-mail: [EMAIL PROTECTED]
 voice: 804-828-4807
 


TSM Oper. Report for Missed Files

2007-05-22 Thread Stefan Holzwarth
Hi,

I'd like to report missed files filtered by domain, using a modified query
from the operational reporter.

Original query example:

select nodename,substr(char(date_time), 1, 16) as TME,message from
actlog where (msgno=4005 or msgno=4007 or msgno=4018 or msgno=4037 or
msgno=4987) and (date_time between '2007-05-21 07:43:25' and '2007-05-22
07:43:24') order by nodename


After playing around with that query I have 2 problems:

1) Inserting into the WHERE part:
and nodename in (select nodename from nodes where domain_name='MAC') 
gives an error about a disallowed outer query.

(Working with explicit node names works, but isn't ideal.)


2) Since the original query above works with date strings, I have to
substitute that part with something like what is used in q actlog
begind=-24:00 ...


Do you have any ideas to solve these 2 problems?


AW: TSM Oper. Report for Missed Files

2007-05-22 Thread Stefan Holzwarth
Hi Richard, you did it.

My final query string is:

select nodename,substr(char(date_time), 1, 16) as TME,message from actlog \
where (msgno=4005 or msgno=4007 or msgno=4018 or msgno=4037 or msgno=4987) \
and nodename in (select node_name from nodes where domain_name='MAC') \
and CAST((CURRENT_TIMESTAMP-DATE_TIME) HOURS AS INTEGER) < 48 \
order by nodename 

Kind regards
Stefan Holzwarth
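For anyone running this outside Operational Reporting, the same query works from the administrative client; the admin ID and password below are placeholders:

```
dsmadmc -id=reporter -password=secret -displaymode=list \
  "select nodename, substr(char(date_time),1,16) as tme, message from actlog
   where msgno in (4005,4007,4018,4037,4987)
   and nodename in (select node_name from nodes where domain_name='MAC')
   and cast((current_timestamp-date_time) hours as integer) < 48
   order by nodename"
```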

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
 On Behalf Of Richard Sims
 Sent: Tuesday, 22 May 2007 13:19
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: TSM Oper. Report for Missed Files
 
 On May 22, 2007, at 3:11 AM, Stefan Holzwarth wrote:
 
  and nodename in (select nodename from nodes where
  domain_name='MAC') 
 
 The nodename inside the subselect should instead be: NODE_NAME
 
 Note well the unfortunate field name variations across tables.
 
  2) since the original query above works with date strings I have to
  substitute that part with something like that being using 
 in q actlog
  begind=-24:00 ...
 
 I think you mean   begint=-24:00
 
 The Select prototype would be:
   SELECT * From ACTLOG where  -
CAST((CURRENT_TIMESTAMP-DATE_TIME) HOURS AS INTEGER) < 24
 
   Richard Sims
 


AW: TSM only reads from COPY1 during DB backup

2007-04-25 Thread Stefan Holzwarth
Richard, 

I would say you can do it well (TSM) or better, as proposed by Orville.
I do not see any reason why TSM should not take advantage of the mirrored 
disks for DB reads.

Regards
Stefan Holzwarth

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
 On Behalf Of Richard Sims
 Sent: Wednesday, 25 April 2007 17:49
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: TSM only reads from COPY1 during DB backup
 
 On Apr 25, 2007, at 10:55 AM, Orville Lantto wrote:
 
  Reads should be safe from mirrored volumes and are commonly done in
  operating systems to load  balance.  Not taking advantage of the
  available IO resource is wasteful and puts an unnecessarily
  unbalanced load on an already IO stressed system.  It slows down db
  backups too.
 
 Then your issue is performance, rather than database veracity.
 This is addressed by the disk architecture chosen for the TSM
 database, where raw logical volumes and RAID on top of
 high-performance disks accomplish that.  Complementary volume
 striping largely addresses TSM's symmetrical mirror writing and
 singular reading.  Because TSM's mirroring is an integrity measure
 rather than a performance measure, you won't get full equivalence
 from it.  Another approach, as seen in various customer postings, is
 to employ disk subsystem mirroring rather than TSM's application
 mirroring.  In that way you get full duality, but sacrifice the
 protections and recoverability which TSM offers.
 
 Richard Sims
 


AW: WinTel Bare Metal Restore

2007-04-17 Thread Stefan Holzwarth
I can confirm for HP servers that the ASR recovery process also works with HP 
USB floppy drives.
Regards Stefan Holzwarth

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
 On Behalf Of Prather, Wanda
 Sent: Tuesday, 17 April 2007 21:38
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: WinTel Bare Metal Restore
 
 I believe someone posted to the list a long time ago that an 
 external/portable USB-connected floppy will work.
 But I haven't personally done it.
 
 
 
 From: ADSM: Dist Stor Manager on behalf of Len Boyle
 Sent: Mon 4/16/2007 10:30 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: WinTel Bare Metal Restore
 
 
 
 Wanda
 
 With ASR, is there a workaround for a client that does not 
 have a floppy disk drive?
 
 len
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
 On Behalf Of Prather, Wanda
 Sent: Monday, April 16, 2007 10:20 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] WinTel Bare Metal Restore
 
 For XP and 2003, see the TSM Client Manual for Windows 
 instructions for ASR (Automated System Recovery).
 It's a MIcrosoft thing which TSM supports.
 Works, if you follow the instructions exactly.
 
 
 
 
 From: ADSM: Dist Stor Manager on behalf of Johnson, Milton
 Sent: Mon 4/16/2007 10:43 AM
 To: ADSM-L@VM.MARIST.EDU
 Subject: WinTel Bare Metal Restore
 
 
 
 Due to the lack of recent religious wars on this forum, I'm forced to
 ask:
 
 What is the best method to back-up and perform a reliable and 
 successful Bare Metal Restore of a WinTel platform (Windows NT/Server
 2000/2003/XP/etc.) using a TSM AIX server?  Methods requiring 
 a third party solution are acceptable.  Solutions allowing a 
 BMR to dissimilar hardware are preferable.
 
 Personally I view the need to do a BMR on a WinTel platform 
 as an opportunity to bring up another AIX server, but that is 
 yet another religious war.
 
 Thanks,
 Milton
 


AW: TSM on a virtual machine?

2006-10-22 Thread Stefan Holzwarth
I do not see problems doing that, apart from addressing tape libraries from
inside a VM...
The TSM server needs its CPU and RAM mostly during the night, so a TSM server
can easily help grow your ESX hardware base. We have a dedicated TSM server,
but are also running TSM servers for testing and smaller tasks within a VM,
with RDM disks (ATA on a CX700) as the backup device. We have never had any
problems doing that...
Regards
Stefan Holzwarth
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]
 On Behalf Of Keith Arbogast
 Sent: Friday, 20 October 2006 18:07
 To: ADSM-L@VM.MARIST.EDU
 Subject: TSM on a virtual machine?
 
 Is anyone running a production TSM server on a virtual machine?  If
 so, what has been your experience, good and bad?
 
 We are analyzing that possibility, and will do extensive testing
 beforehand, but we would like to know what experience others have had
 so we don't go down too many wrong paths.
 
 We are running VMWare ESX 2.5.3 on HP DL585 servers.  We would be
 backing up Windows and Redhat Linux virtual machines as well as AIX
 physical Web and App servers, and physical Oracle 10g databases.  We
 would run the most current version of TSM.
 
 We know of the various options available to back up virtual
 machines.  What we're asking is how does a virtual machine do as a
 TSM server?
 
 With my thanks,
 Keith Arbogast
 Indiana University
 


AW: inventory expiration time

2006-07-28 Thread Stefan Holzwarth
4 CPUs, Windows 2003, 4 GB RAM, CX700 disks in RAID 5 (shared, 10k rpm), TSM 5.3.2.
During the same time window, expiration also runs for another, smaller instance on the
same machine; it lasts about an hour.
Regards, Stefan Holzwarth 

DB expiration speed
  ACTIVITY    DATE        OBJECTS EXAMINED  OBJECTS/HR 
 EXPIRATION 2006-06-27 2430008 925200 
 EXPIRATION 2006-06-28 2404404 1083600 
 EXPIRATION 2006-06-29 2453121 993600 
 EXPIRATION 2006-06-30 2535449 928800 
 EXPIRATION 2006-07-01 2600885 835200 
 EXPIRATION 2006-07-02 2751620 1004400 
 EXPIRATION 2006-07-03 2913433 99 
 EXPIRATION 2006-07-04 2560030 946800 
 EXPIRATION 2006-07-05 2568634 1098000 
 EXPIRATION 2006-07-06 2619294 1198800 
 EXPIRATION 2006-07-07 2835640 1044000 
 EXPIRATION 2006-07-08 2922019 932400 
 EXPIRATION 2006-07-09 3152030 1177200 
 EXPIRATION 2006-07-10 3307528 1004400 
 EXPIRATION 2006-07-11 2942075 1036800 
 EXPIRATION 2006-07-12 2947009 1134000 
 EXPIRATION 2006-07-13 3077056 914400 
 EXPIRATION 2006-07-14 3003355 878400 
 EXPIRATION 2006-07-15 2888349 972000 
 EXPIRATION 2006-07-16 3059609 1245600 
 EXPIRATION 2006-07-17 3226511 117 
 EXPIRATION 2006-07-18 2994471 968400 
 EXPIRATION 2006-07-18 2653356 2469600 
 EXPIRATION 2006-07-19 2937517 1011600 
 EXPIRATION 2006-07-20 2874305 1137600 
 EXPIRATION 2006-07-20 2585014 2818800 
 EXPIRATION 2006-07-21 3125122 882000 
 EXPIRATION 2006-07-22 3130365 828000 
 EXPIRATION 2006-07-23 3262415 1245600 
 EXPIRATION 2006-07-24 3308901 1245600 
 EXPIRATION 2006-07-25 3038719 1227600 
 EXPIRATION 2006-07-26 2990903 943200 
 EXPIRATION 2006-07-27 2971022 1296000 
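From the examined-objects and objects-per-hour columns above, the runtime of each expiration run can be derived directly; a small sketch using the first few table rows:

```python
# Estimate how long each expiration run took from the table columns:
# (objects examined) / (objects examined per hour) = hours of runtime.
rows = [
    ("2006-06-27", 2430008, 925200),
    ("2006-06-28", 2404404, 1083600),
    ("2006-06-29", 2453121, 993600),
]

for date, examined, per_hour in rows:
    hours = examined / per_hour
    print(f"{date}: {hours:.1f} h")
```

This puts each run at roughly two to three hours, consistent with the smaller instance on the same machine finishing in about one.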
 

 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Dirk Kastens
 Gesendet: Freitag, 28. Juli 2006 08:30
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Re: inventory expiration time
 
 Richard Hammersley schrieb:
  The machine is a new dual processor P520 with 4 gig of 
 memory attached
  to an EMC cx300 san.
 
 We also have a p520, but with 12 GB of memory. We're running TSM 5.3.3
 on AIX 5.3.
 Our inventory expiration runs once a week and takes more than 
 6 hours to
 complete:
 
 ANR0812I Inventory file expiration process 4218 completed: examined
 7907625 objects, deleting 1416779 backup objects, 191 archive 
 objects, 0
 DB backup volumes, and 0 recovery plan files. 0 errors were 
 encountered.
 
 --
 Regards,
 
 Dirk Kastens
 Universitaet Osnabrueck, Rechenzentrum (Computer Center)
 Albrechtstr. 28, 49069 Osnabrueck, Germany
 Tel.: +49-541-969-2347, FAX: -2470
 


AW: Parameter missing in tdpsqlc restore command

2006-07-19 Thread Stefan Holzwarth
As a working example we use the following (the restore also goes to another DB server):
tdpsqlc restore ZP1 /tsmoptfile=dsm_restore.opt /tsmnode=s030sql 
/fromsqlserver=S030 /INTO=ZQ1 /strip=4 /replace 
/relocate=ZP1DATA1,ZP1DATA2,ZP1DATA3,ZP1DATA4,ZP1LOG1,ZP1LOG2 
/to=J:\ZQ1DATA1\ZQ1DATA1.mdf,J:\ZQ1DATA2\ZQ1DATA2.mdf,J:\ZQ1DATA3\ZQ1DATA3.mdf,G:\ZQ1DATA4\ZQ1DATA4.mdf,I:\ZQ1LOG1\ZQ1LOG1.ldf,I:\ZQ1LOG2\ZQ1LOG2.ldf
Regards
Stefan Holzwarth

 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Paul Dudley
 Gesendet: Donnerstag, 20. Juli 2006 05:13
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Parameter missing in tdpsqlc restore command
 
 I am running the following tdpsqlc command and get a parameter missing
 error - possibly after /relocate  - can anyone point me in the right
 direction as to what I may be missing?
 
 
 
 I am trying to restore a database from another server to a different
 location on this server
 
 
 
 tdpsqlc restore Sales2000 full /fromsqlserver=nt_sales2_database
 /relocate /to=n:\Database\MSSQL\Data\Sales2000_Data.mdf
 
 
 
 Regards
 
 Paul
 
 
 
 
 
 
 
 Paul Dudley
 
 ANL IT Operations Dept.
 
 ANL Container Line
 
 [EMAIL PROTECTED]
 
 
 
 
 
 
 
 


Image Backups

2006-06-23 Thread Stefan Holzwarth
Hi,

I like the image backup feature of TSM more and more. After several
tests (with BartPE for DR and P2V VMware) I have left some
image backups on the TSM server. Now I want to clean up, but I have no
idea how to find those backups. (There is an SQL select against the backups or
contents table, but it takes too much time.) The second problem is how to selectively
delete those images from the server without touching the incrementals.

Any ideas?
Kind regards

Stefan Holzwarth
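One client-side route is to address only objects of type image, so that file-level incrementals are untouched. A sketch (the filespec is a placeholder, the exact options depend on the client level, and `delete backup` requires backdelete=yes for the node):

```
* List the image backups for a volume, then delete them selectively;
* incremental file backups are untouched because only image objects
* are addressed.
dsmc query image
dsmc delete backup \\machine\c$ -objtype=image
```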


AW: Windows BMR/ASR over Network w/o DHCP ?

2006-05-19 Thread Stefan Holzwarth
Hi Timothy,

there is a tool called penetcfg (used in BartPE) that you can start from the 
command line of the running recovery system. It does everything for you if you can't 
use DHCP. But I don't know about the licensing of that tool, since we have no need for it.

Kind regards
Stefan Holzwarth
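If penetcfg can't be used, the interface can also be configured by hand from the recovery command line with `netsh` once the NIC driver is loaded; the interface name and addresses below are placeholders:

```
rem Assign a static address inside BartPE/WinPE (no DHCP).
netsh interface ip set address name="Local Area Connection" static 192.168.1.10 255.255.255.0 192.168.1.1 1
netsh interface ip set dns name="Local Area Connection" static 192.168.1.2
```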

 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Timothy Lin
 Gesendet: Mittwoch, 17. Mai 2006 21:28
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Windows BMR/ASR over Network w/o DHCP ?
 
 Hi
 has anyone tried to get BMR/ASR working over the network w/o dhcp ?
 govt policy doesn't allow dhcp on certain network we have, 
 and there are
 a few boxes we'd like to have BMR/ASR working but those only have a
 floppy drive and CD drive, so restore from local media is 
 probably not a
 good idea ::winks::
 
 
 any idea if I can somehow bypass the dhcp and setup the 
 ip/mask manually
 while the ASR silent installs TSM and have it work ?
 I'd appreciate your inputs,
 
 Thanks!
 


AW: SAN Disk for TSM diskpool for backups ?

2006-04-21 Thread Stefan Holzwarth
Roger, I think that's not totally correct. 
The speed of RAID 5 depends on several things. Two of them are how big and how 
sequential the write I/Os are. 
There are hardware vendors that implement RAID 5 in their SAN arrays quite cleverly:
if the incoming data is large and sequential, it is buffered by the SAN 
system in write cache until a full stripe can be written to all disks in a RAID 
5 group (typically 4+1). In this way RAID 5 is faster than RAID 1, since the 
write overhead is only 20% and not 50% as in RAID 1.
 
Hope this was clear.
regards
Stefan Holzwarth  
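The overhead comparison can be made concrete; a minimal sketch, assuming full-stripe writes as described above:

```python
# Fraction of physical disk writes that carry redundancy rather than data.
def raid5_full_stripe_overhead(data_disks: int, parity_disks: int = 1) -> float:
    # A full-stripe write touches data + parity disks; only the parity
    # share is overhead (e.g. 4+1 -> 1/5 = 20%).
    return parity_disks / (data_disks + parity_disks)

def raid1_overhead() -> float:
    # Every block is written twice; half of all writes are the mirror copy.
    return 1 / 2

print(raid5_full_stripe_overhead(4))  # 0.2
print(raid1_overhead())               # 0.5
```

Note this only holds when the write cache can coalesce full stripes; small random writes on RAID 5 still pay the classic read-modify-write penalty.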



Von: ADSM: Dist Stor Manager im Auftrag von Roger Deschner
Gesendet: Do 20.04.2006 19:00
An: ADSM-L@VM.MARIST.EDU
Betreff: Re: SAN Disk for TSM diskpool for backups ?



You will experience slower client backups with ANY configuration that is
RAID5 for your disk storage pools. The reason is that client backup is
100% writes in the disk storage pools, and RAID5 is very slow at writes.
The throughput difference is significant - as much as 75% in our case.

After much experimentation, I have found that RAID1 is best for disk
storage pools. Not RAID5, not RAID10, but RAID1. RAID5 can save you a
little bit of money in disk drives, but you really pay for it in
performance of something that is 100% writes.

Roger Deschner  University of Illinois at Chicago [EMAIL PROTECTED]



On Wed, 19 Apr 2006, Justin Case wrote:

Need comments; please reply.

We are going to be testing an Apple Xserve RAID disk array as the disks for
backing up the clients' nightly backups to the disk pool. Has anyone tried,
or is anyone using, the Apple Xserve RAID disk array for a TSM server's disk
pools for nightly backups?


What RAID level is being used? We are of a mind to use RAID 5 (5+1) with 1
spare as a safety net, and to keep spare disks on site.
What issues have come up when using the Apple Xserve RAID disk array?
If other TSM admins have had any experiences, please reply with any
issues, successes or problems.


Thanks

Justin Case
Duke University



AW: Random Access Disk Pools

2006-04-04 Thread Stefan Holzwarth
Why don't you use random and sequential access together?
In our disk-only setup we use 3 tiers:
2%  Fibre Channel disk as a primary pool with random-access volumes for the daily backup, 
size limit 2 MB
30% ATA disk as a primary pool with random-access volumes for backup, no size limit, 
migdelay=7 days
68% ATA disk with a FILE device class as the migration target of the ATA pool 
and as the direct target of the TDP agents
plus
100% ATA as a copy pool at the remote site. (The copy is done as an extra step, since we 
saw very high CPU load otherwise.)
No problems in one year.

Kind regards 
Stefan Holzwarth



 -Ursprüngliche Nachricht-
 Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im 
 Auftrag von Andrew Carlson
 Gesendet: Dienstag, 4. April 2006 19:15
 An: ADSM-L@VM.MARIST.EDU
 Betreff: Re: Random Access Disk Pools
 
 Rod,
 
 After spending 4 weeks using file device class disk pools, I would say
 use random access.  Here is why:
 
 The speed of the random access disk pools is phenomenally better than
 the file device class - not sure why though
 
 The volumes (from the presentation I just read from the other email)
 are supposed to be picked based on filesystems with no volume 
 mounted. 
 What I found is that it selects them in collation order.  This accessed
 the volumes on one raid group, giving the worst performance of all. 
 After using a kludgy method to spread the data around, the performance
 was better, but did not approach random access
 
 Small files were a problem in some cases.  Since I was collocating, if
 a lot of small files were written to a volume (in this case, it was
 moving my dirmc pool), there can be a lot of wasted space.  Apparently,
 a block of 256K is written to disk no matter how much data is being
 written.  If a lot of small files are written to a volume, space can be
 wasted because the volume will fill before capacity is reached (we were
 using predefined volumes).
 
 It takes a lot of time to predefine the volumes.  We were finding it
 took about 19 hours to predefine 2 TB.  We were able to run 8 of those
 in parallel, so it ended up taking 19 hours to predefine 16 TB, but
 that is still a long time.
 
 Some portion of space is taken up by volumes that are not yet full
 (with predefined volumes at least) in a file device class.  
 This is not
 a worry with random access, but fragmented aggregates could 
 be a worry.
 
 My plan is to move data off of random access volumes on the 
 weekends to
 help prevent fragmentation. 
 
 If you have any other questions, please let me know. 
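The 256 KB minimum-block point above can be quantified; a sketch (the 256 KB figure is taken from the post as reported, not verified against TSM internals):

```python
BLOCK = 256 * 1024  # bytes written per object, per the observation above

def allocated(file_size: int) -> int:
    # Each object consumes at least one block; larger objects round up.
    blocks = max(1, -(-file_size // BLOCK))  # ceiling division
    return blocks * BLOCK

# 100,000 directory-management (dirmc) objects of ~2 KB each:
n, size = 100_000, 2 * 1024
used = n * size
stored = n * allocated(size)
print(f"wasted: {(stored - used) / 2**30:.1f} GiB")  # ~24 GiB for 0.2 GiB of data
```

That ratio is why a dirmc pool full of tiny objects can fill predefined FILE volumes long before their nominal capacity is reached.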
 
 --- Park, Rod [EMAIL PROTECTED] wrote:
 
  Let me ask again because I didn't get much feedback. How do we find
  the thread limit, and can people weigh in on whether they use big disk
  pools (50TB-200TB)? The advantages/disadvantages of big disk pools
  versus devclass=file; any gotchas either way? We are looking at buying
  a lot more disk and creating big disk pools to land data on and be the
  primary pool instead of tape. Thanks in advance.
  
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
 On Behalf
  Of
  Andy Huebner
  Sent: Monday, March 27, 2006 11:43 AM
  To: ADSM-L@VM.MARIST.EDU
  Subject: Re: [ADSM-L] Random Access Disk Pools
  
  We found the limit.  There are some posts in this forum from the
  first
  of the year about the problem we ran into.
  
  Andy Huebner
  
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
 On Behalf
  Of
  Park, Rod
  Sent: Monday, March 27, 2006 6:45 AM
  To: ADSM-L@VM.MARIST.EDU
  Subject: Re: [ADSM-L] Random Access Disk Pools
  
  We use random access pools. How do you know what your thread limit
  is? We've never had any issues with ours, but we're thinking about
  adding a lot more. What's the biggest reason you do/don't use
  devclass=file over disk storage pools? Arguments either way?
  
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
 On Behalf
  Of
  Andy Huebner
  Sent: Friday, March 24, 2006 3:37 PM
  To: ADSM-L@VM.MARIST.EDU
  Subject: Re: [ADSM-L] Random Access Disk Pools
  
  Be careful with how many disk pool volumes you create.  Each volume
  uses
  1 thread, add this to all of the other threads in use, our 
 TSM server
  would die at around 1800 active threads.
  
  Andy Huebner
  
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
 On Behalf
  Of
  Andrew Carlson
  Sent: Friday, March 24, 2006 11:04 AM
  To: ADSM-L@VM.MARIST.EDU
  Subject: [ADSM-L] Random Access Disk Pools
  
  I have heard in the past that random access disk pools can become
  fragmented and practically unusable after a while.  I was wondering
  if anyone sees this in the real world?  I posted the other day about
  managing predefined volumes in a file type devclass, and the only
  answer I got said they were using random access pools.  I would MUCH
  rather have a random access pool, so if there is no problem with
  this, I will convert over to random access.  Thanks

Using CRC error checking during backup

2006-01-09 Thread Stefan Holzwarth
Hi,

until November 2005 everything in my TSM world looked fine - until we needed
a restore of our Exchange DB:

We were not able to restore what we needed (the restore had been tested in the past).
The restore always ended with failures regarding the data on the TSM server.

I involved IBM and we found that the data on the TSM server was bad. We
were told to use CRC checking for our nodes.
With CRC checking we found that a lot of our "successful" backups were
in fact bad!
Especially the TDP agents that send large objects have problems with CRCs,
since retries are time-consuming and you easily pick up new CRC errors.

So we are now putting a lot of effort into examining our network environment,
but I wonder
- why it is necessary to use CRC in TSM at all, given that TCP/IP should
take care of such errors
- how CRC checking is done within TSM (e.g. is it per fully received
backup object, does it distinguish data vs. protocol checking, how many
retries, ...)

(CRC checksums for the storage pools are not used at the moment; there is no
indication of problems there.)

Does anyone have information or experience with CRC checking?

Environment
TSM Server Windows 2003 5.3.1.2
TSM Clients 5.2.3.11 Windows 2000/2003
Backup to Disk Clariion CX700 with ATA Drives
GB Ethernet Backbone (4 Switches Nortel)
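On the first question: the TCP checksum is only 16 bits and only protects data in flight; it catches nothing that is corrupted before the checksum is computed or after it is verified (bad NIC buffers, drivers, RAM, disk). An end-to-end CRC over the whole object is a much stronger check; a sketch of the principle:

```python
import zlib

def crc32(data: bytes) -> int:
    # CRC32 over the entire object, computed at the sender and
    # re-checked at the receiver (end to end, not hop by hop).
    return zlib.crc32(data) & 0xFFFFFFFF

payload = b"backup object contents"
sent_crc = crc32(payload)

# Simulate a single flipped bit somewhere between client and storage pool.
corrupted = bytearray(payload)
corrupted[3] ^= 0x01

print(crc32(bytes(corrupted)) == sent_crc)  # False: corruption detected
```

Any single-bit error changes a CRC32, whereas a 16-bit TCP checksum can be fooled by compensating errors and, again, never sees corruption that happens outside the network path.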


Re: WMI backup failure - return code 3029

2005-11-30 Thread Stefan Holzwarth
Hi, we use

include c:\adsm.sys\...* specifically to assign a management class to system
objects. Is this (still) correct?

I think an include and an exclude on the same directory isn't a good idea.

Regards
Stefan Holzwarth

  _

From: ADSM: Dist Stor Manager on behalf of Andrew Raibeck
Sent: Tue 29.11.2005 19:46
To: ADSM-L@VM.MARIST.EDU
Subject: Re: WMI backup failure - return code 3029



 Is it safe to say that the C:\adsm.sys directory can be safely excluded
 from all Windows backups?

Yes. What the exclude does is exclude it from the backup of your C: drive.
But C:\adsm.sys is still backed up as a part of the system object or
system state backup.

Note that in version 5.3, TSM automatically excludes this directory.

I believe that the 5.2.4 client uses exclude c:\adsm.sys\...\*, so the
failures you see are probably on subdirectories of adsm.sys, not files. In
5.3, this has been changed to exclude.dir c:\adsm.sys.

Run the dsmc query inclexcl command to see how the adsm.sys dir is
currently excluded.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

IBM Tivoli Storage Manager support web page:
http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.

ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 2005-11-29
11:17:20:

 Question regarding this proposed solution:

 Is it safe to say that the C:\adsm.sys directory can be safely excluded
 from all Windows backups?

 I've had many repetitive failures of that directory in various clients
 (mainly Win2K3), but the directory exists in other windows versions.  I
 just went ahead and added the exclusion to my windows cloptset and the
 failures have stopped.

 Thanks for your help.

 Sergio

 U. of Maryland
 Office of Information Technology

 Andrew Raibeck wrote:
  H
 
  Try doing the following:
 
  1) Delete the existing c:\adsm.sys directory structure.
 
  2) Add the following to the client options:
 
 exclude.dir c:\adsm.sys
 
  3) Retry the system object backup.
 
  Regards,
 
  Andy
 
  Andy Raibeck
  IBM Software Group
  Tivoli Storage Manager Client Development
  Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
  Internet e-mail: [EMAIL PROTECTED]
 
  IBM Tivoli Storage Manager support web page:
  http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageManager.html
 
  The only dumb question is the one that goes unasked.
  The command line is your friend.
  Good enough is the enemy of excellence.
 
  ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 2005-11-29
  07:09:04:
 
 
 Hi *SMers,
 
 Client - TSM 5.2.4.4 on a W2K SP4 server.
 Server - TSM 5.2.4.5 on a W2K SP4 server.
 
 I'm experiencing a regular backup failure of the system objects and
the
 information returned to me from TSM is not really directing me towards
a
 cause.
 
 Dsmerror.log contains:
 11/29/2005 13:34:08 ANS1487E Backing up WMI repository failed.  No
files
 will be backed up.
 11/29/2005 13:34:14 Return code 3029 unknown
 11/29/2005 13:34:14 Unknown system error
 Please check the TSM Error Log for any additional information
 
 Dsmsched.log has less..
 
 Is there any documentation anywhere which brings some kind of meaning
to
 Return code 3029 unknown.
 
 Any assistance greatly appreciated.
 
 Regards
 Matthew
 
 TSM Consultant
 ADMIN ITI
 Rabobank International
 1 Queenhithe, London
 EC4V 3RL
 
 


Imagebackup questions

2005-11-15 Thread Stefan Holzwarth
Hi,

I want to use online image backup with BartPE or WinPE for fast recovery of
important Windows servers, and I have already done some backup/restore tests
successfully. But I wonder whether it's possible to exclude the paging file
on c: and to use compression.
It seems that compression is ignored and no files are excluded from
the image backup.
(Other solutions like Ghost do compression and exclude the paging file.)

Can someone help?
Can someone help

Kind regards
Stefan Holzwarth


AW: Platform change to Windows?

2005-11-09 Thread Stefan Holzwarth
Hi Tab,

we have a very similar environment:
200 Windows nodes, EMC NAS server (4 TB), SQL agents, Exchange agents,
about 15 TB in the primary pools (disk only) and 15 TB in the copy pool.
2 TSM server instances, one for NT and one for NAS, on the same Windows 2003
server.
The DBs are 20 GB at 76% used and 20 GB at 55% used.
Every day about 500 GB of new data.
The server has 4 CPUs and 4 GB RAM.
CPU usage is low to medium.
Performance is good.

Regards,
Stefan Holzwarth


-Ursprüngliche Nachricht-
Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im Auftrag von
Tab Trepagnier
Gesendet: Dienstag, 8. November 2005 18:14
An: ADSM-L@VM.MARIST.EDU
Betreff: Platform change to Windows?

We are considering migrating our TSM systems from AIX to Windows 2003.

I know that the experience of the forum participants is that AIX provides
superior I/O performance, but where is that threshold?

These are our system details.  I'd like for anyone with experience with a
system of similar size to share their experiences regarding Unix vs.
Windows.

We are currently running TSM 5.1.10 on a 2-way 6H1 with 4 GB RAM.
We are considering running TSM 5.3.2 on a 2-way or 4-way 3.0 GHz Xeon with
4 GB RAM.  All non-OS I/O would be via GigE network and redundant 2 Gbs
fiber.

TSM system details:

DB: 32 GB @ 83% utilization
Log:5 GB, roll-forward mode
Primary data:   16 TB with one copypool (another 16 TB to manage)
Nodes:175 backing up during a 10-hour window
Average daily incoming data:   ~ 200 GB; may be reduced via deployment of
TDP Oracle
Disk:   1 TB DAS, 3 TB SAN
Tape:   LTO-1, DLT8000, 3570XL, four SCSI drives each; libraries will be
consolidated
Daily copypool updates sent to vault
Semi-annual exports in the 2-5 TB range

Does that sound like a system that could reasonably be hosted on a modern
Windows system?  Is a 2-way adequate, or should we get a 4-way?

Thanks in advance.

Tab Trepagnier
TSM Administrator
Laitram, L.L.C.


AW: Working SATA Disk Pool Configuration

2005-08-12 Thread Stefan Holzwarth
We use Clariion ATA for disk-only backup in the following configuration:
2 Clariions at different locations, each with 15 RAID groups of 4+1 in RAID 5. 
(RAID 5 because of DISK-type volumes with random access)

Each RAID group has 1188 GB, used as 2 LUNs of 594 GB each.

Within each LUN there are 2 directories: one for disk volumes (4 GB) and one
for tape volumes (4 GB).
We use a small amount of FC disk (30 GB) on the Clariion as the primary disk pool
with maxsize=50 MB.
Migration goes next to an ATA disk pool (2 TB) with 10 days retention (most
backup files should expire here), then to an ATA tape pool (~10 TB) for long-term
storage.

At the second site we have a large copy pool with 4 GB sequential volumes.

Regards,
Stefan Holzwarth


-Ursprüngliche Nachricht-
Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im Auftrag von
Hart, Charles
Gesendet: Freitag, 12. August 2005 14:30
An: ADSM-L@VM.MARIST.EDU
Betreff: Re: Working SATA Disk Pool Configuration

Yep 594GB Vols, @ 1-4 Vols per pool, so we have 594GB to 2TB TSM Pools.
Also I forgot to mention we are writing to 3592 Tape Drives connected to
2GBs SAN ports.

Regards,

Charles 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Ben Bullock
Sent: Thursday, August 11, 2005 4:30 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Working SATA Disk Pool Configuration

Thanks for the info, we have similar hardware and are thinking about
finally moving TSM to the SAN.

One question though on the last TSM section: did you mean to
say, Defined 594~GB~ Disk Vols in TSM Storage Pool from 1-4 Vols per
pool? Just looking at the math, I'm not sure you could (or would want
to) make 594 individual volumes.


Thanks,
Ben

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Hart, Charles
Sent: Thursday, August 11, 2005 3:10 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Working SATA Disk Pool Configuration

Someone had asked how we configured our Clariion SATA disk for TSM disk
pools... With this config we see 168 GB per hour data transfer per stream
from disk pool to tape pool.

Clariion: CX700
Drives: 320 GB SATA
Raid Sets: Raid 3 (4+1) ((5 drives per set)) Number of Luns per Raid
Set: 2 Approx Lun Size: 594GB Stripe Size: 128K


AIX OS Cfg
Created 594GB Volumes with 64K Stripes

TSM
Defined 594 Disk Vols in TSM Storage Pool from 1-4 Vols per pool


AW: Select for archive size

2005-08-09 Thread Stefan Holzwarth
Richard,

as you mention, you have to account for the expiration that archive contents
undergo during their lifecycle. Therefore you have to query for the actual
size of the archives in every accounting period. I do not see any possibility
with Perl, since the command line does not show the estimated retrieve
size (as the GUI does).

Kind regards
Stefan Holzwarth

-Ursprüngliche Nachricht-
Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im Auftrag von
Richard Sims
Gesendet: Montag, 8. August 2005 16:56
An: ADSM-L@VM.MARIST.EDU
Betreff: Re: Select for archive size

On Aug 8, 2005, at 10:31 AM, Stefan Holzwarth wrote:

 Hi *SM-ers,

 some time ago i setup a dedicated TSM node for archiving of all
 kind of data
 from different users.
 Each archive is generated with an (internal) unique ID in the
 description
 field under the same archive-node. Sometimes the different
 archives come
 from the same filespace because we use a central networkshare for
 collection
 of files to archive.
 Now our users should pay for their archives. So i started searching
 for an
 select statement that gives me the size of an single archive (as in
 the gui
 when you estimate retrieve size).

 The problem is that the archives table shows no size information
 and the
 dsmc.exe has no estimate retrieve for automation...

Stefan -

There is no inherent capability for listing stored file sizes from
TSM server queries: what it shows, in the Contents table FILE_SIZE is
the Aggregate size; and that equals the file size only if the file is
as large as or larger than an Aggregate. You'll need to acquire sizes
via client-side queries, as for example performed through a Perl script.

The TSM accounting records could be used to charge for Archive and
Retrieve, in that they record username; but there is no like
accounting of deletion/expiration, to balance the books from that
viewpoint.

On a go-forward basis, it may be feasible in your environment to have
all archiving occur through an interface script which would charge
per observed file sizes involved in Archive, Retrieve, and Delete.

Richard Sims
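The client-side accounting Richard describes boils down to aggregating observed file sizes per archive description. A hypothetical sketch of that bookkeeping: the listing format, descriptions and sizes below are invented for illustration, not real dsmc output:

```python
from collections import defaultdict

# Each tuple: (archive description / internal ID, file size in bytes).
# In practice these would be captured by the archive wrapper script at
# Archive/Retrieve/Delete time, as Richard suggests.
listing = [
    ("ARCH-0001", 1_500_000),
    ("ARCH-0001", 2_500_000),
    ("ARCH-0002", 4_000_000),
]

totals = defaultdict(int)
for description, size in listing:
    totals[description] += size

for description, total in sorted(totals.items()):
    print(f"{description}: {total / 1_000_000:.1f} MB")
```

The same per-description totals can then be decremented on deletion, which closes the gap Richard notes in the server-side accounting records.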


Select for archive size

2005-08-08 Thread Stefan Holzwarth
Hi *SM-ers,

some time ago I set up a dedicated TSM node for archiving all kinds of data
from different users.
Each archive is generated with an (internal) unique ID in the description
field under the same archive node. Sometimes different archives come
from the same filespace, because we use a central network share for collecting
the files to archive.
Now our users should pay for their archives, so I started searching for a
select statement that gives me the size of a single archive (as in the GUI
when you estimate the retrieve size).

The problem is that the archives table shows no size information, and
dsmc.exe has no estimate retrieve for automation...

Any ideas?

Kind regards
Stefan Holzwarth


AW: Backup taking too long

2005-08-02 Thread Stefan Holzwarth
As far as I know, resourceutilization stays within the same filesystem and only
starts a new thread if there is something big to transfer to the TSM server.
Searching for changed files isn't multithreaded.

I divided the backup of 4 large NetApp volumes (~1 TB / 4 million files each)
across 5 nodes, successfully, using the following
include/exclude lists:

for each node i set up a dsm.opt like that:
TCPBUFFSIZE   256
TCPWINDOWSIZE 63
TCPNODELAYYES
COMPRESSION YES

* Managementclass for NAS Backups for each volume
INCLUDE \\sznas01\tsmdata1$\...\*  DATAX
INCLUDE \\sznas01\tsmdata2$\...\*  DATAX
...

* For each volume
* for each node comment one range out
* EXCLUDE.DIR \\sznas01\tsmdata1$\[A-E]*
EXCLUDE.DIR \\sznas01\tsmdata1$\[F-J]*
EXCLUDE.DIR \\sznas01\tsmdata1$\[K-O]*
EXCLUDE.DIR \\sznas01\tsmdata1$\[P-T]*
EXCLUDE.DIR \\sznas01\tsmdata1$\[U-Z0-9$]*
...
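The letter-range split above can be sketched in a few lines; the ranges and node count mirror the option files (a sketch of the partitioning idea, not TSM syntax):

```python
# Map a top-level directory name to one of five backup nodes, mirroring
# the EXCLUDE.DIR ranges [A-E], [F-J], [K-O], [P-T], [U-Z0-9$]: each
# node's opt file comments out exactly one range, i.e. keeps one slice.
RANGES = ["ABCDE", "FGHIJ", "KLMNO", "PQRST"]  # everything else -> node 5

def node_for(dirname: str) -> int:
    first = dirname[0].upper()
    for i, letters in enumerate(RANGES):
        if first in letters:
            return i + 1
    return 5  # U-Z, digits, $ and anything else

print(node_for("Finance"))  # 2
print(node_for("users"))    # 5
```

Because the split is by first letter, the five nodes scan disjoint subtrees in parallel, which is what works around the single-threaded change detection.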

The backup itself had a preschedulecmd that
creates new snapshots for all volumes and
creates new hidden shares to the root of the volumes.

We then start the backup with all 5 nodes at the same time.
The NetApp FS840 reached about 95% CPU utilization during backup.
With one DL380 G3 (1 Gb Ethernet) I could back up 12 million files in
about 20 hours.

Stefan Holzwarth


-Ursprüngliche Nachricht-
Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im Auftrag von
Scott Foley
Gesendet: Montag, 1. August 2005 18:19
An: ADSM-L@VM.MARIST.EDU
Betreff: Backup taking too long

I am backing up a large number of files and directories located on a
Network Appliance.  I am using a Windows 2000 client to do this.  It is
taking over 40 hours to do the backup.  I also run out of memory at
times (ANS1028S).   Journaling worked great when the files were on a
Windows system, but journaling works only on a local file system.  I
don't think I can use NDMP because the tape library is connected only to
the Tivoli server.  Most of the time (and memory) is spent determining
what files should be backed up.  There are over 1 million directories.


Total number of objects inspected: 7,819,693
Total number of objects backed up:  260,984
Total number of bytes transferred:36.22 GB
Data transfer time:4,579.15 sec
Elapsed processing time:   40:10:58
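The statistics above show the bottleneck is scanning, not moving data; a quick calculation:

```python
# From the backup summary: 36.22 GB moved in 4,579 s of transfer time,
# within an elapsed window of 40 h 10 min 58 s.
transfer_seconds = 4579.15
elapsed_seconds = 40 * 3600 + 10 * 60 + 58

transfer_share = transfer_seconds / elapsed_seconds
print(f"transfer is {transfer_share:.1%} of the elapsed time")
# ~3%: almost all of the run is spent inspecting 7.8M objects, not sending data.
```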

There are about 36 root directories so I could configure the Network
Appliance to have 36 shares and back up each separately if that would
help.  I think this would help with the memory problem.  I would
probably need to run multiple backups simultaneously though.

I have added the following to the dsm.opt file

RESOURceutilization 14
MEMORYEFFICIENTBACKUP yes

Any suggestions on how to speed the backup up?  


Windows 2000 SP2 with 512 Meg of Memory
Client  Version 5, Release 1, Level 7.0
Server  Version 5, Release 1, Level 9.0

Scott Foley


AW: asr and network card

2005-06-22 Thread Stefan Holzwarth
Hi Karin,
have a look at nLite, a nice small tool for preparing your own Windows
install CD, including drivers, ...

Kind regards,
Stefan Holzwarth


-Ursprüngliche Nachricht-
Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im Auftrag von
Karin Dambacher
Gesendet: Mittwoch, 22. Juni 2005 12:02
An: ADSM-L@VM.MARIST.EDU
Betreff: asr and network card

Hi all

we want to do an ASR restore of a Windows XP system. It was installed with
DHCP, and the network adapter was not installed automatically; the driver had
to be installed separately.
After rebooting, no network is available. Has anyone had this problem and
knows what to do? Is it possible to build the Windows startup CD with this
extra driver?

Thanks, Karin


AW: Migration form Storagepool Disk to Storagepool Disk

2005-02-21 Thread Stefan Holzwarth
Andy,

it seems I did something wrong, since everything is working now as expected.

My only trouble is finding my mistake...

Thank you for help
Stefan Holzwarth

-Ursprüngliche Nachricht-
Von: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Im Auftrag von
Andrew Raibeck
Gesendet: Sonntag, 20. Februar 2005 21:59
An: ADSM-L@VM.MARIST.EDU
Betreff: Re: Migration form Storagepool Disk to Storagepool Disk

The only limitation of this kind is with regard to Centera pools. You
cannot migrate data into, or out of, a Centera disk pool (this is
mentioned in the Admin Guide).

I did a simple test of disk - disk migration using my local workstation
disk drives (SATA), and it worked fine.

What do Q STG F=D show and Q PR show at the time migration is running? Did
you examine the activity log for any related messages issued from the time
migration started that might point to a problem?

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.

ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote on 2005-02-20
13:35:21:

 At the moment I am trying to install and configure our new TSM environment
with TSM 5.3, Windows 2003, FC and ATA disk at 2 EMC Clariion - no tapes.
 The storage pool hierarchy should be the following:

 1) Fast small FC disks (RAID 5) for daily small and medium files
 next to
 2) ATA disk (RAID 5) with migdelay = 10 days (daily changed files should
expire before migration to the sequential pool)
 next to
 3) Large ATA sequential pool as the last part of the chain and for large,
daily files (TDP full?)

 In my first tests I had trouble with migration from 1 to 2:
 the migration process was starting, but doing nothing.
 The process could not be canceled.
 Everything else was OK (1->3, direct->2, 2->3).

 I am not sure whether it is possible to do migration from disk to disk.
 Anyone had the same experience - TSM bug?
 Kind regards,
 Stefan Holzwarth


Migration form Storagepool Disk to Storagepool Disk

2005-02-20 Thread Stefan Holzwarth
At the moment I am trying to install and configure our new TSM environment
with TSM 5.3, Windows 2003, FC and ATA disk at 2 EMC Clariion - no tapes.
The storage pool hierarchy should be the following:

1) Fast small FC disks (RAID 5) for daily small and medium files
next to
2) ATA disk (RAID 5) with migdelay = 10 days (daily changed files should
expire before migration to the sequential pool)
next to
3) Large ATA sequential pool as the last part of the chain and for large,
daily files (TDP full?)
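A rough sketch of how such a chain could be defined on the server (pool names, device class names, thresholds and paths are placeholders, not the poster's actual configuration):

```
define devclass ataseqdev devtype=file maxcapacity=10G directory=f:\tsmfile
define stgpool ataseq ataseqdev maxscratch=500
define stgpool atapool disk nextstgpool=ataseq migdelay=10
define stgpool fcpool disk nextstgpool=atapool highmig=70 lowmig=30
```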

In my first tests I had trouble with migration from 1 to 2:
the migration process was starting, but doing nothing.
The process could not be canceled.
Everything else was OK (1->3, direct->2, 2->3).

I am not sure whether it is possible to do migration from disk to disk.
Anyone had the same experience - TSM bug?
Kind regards,
Stefan Holzwarth


Backup Windows 2003 AD

2005-01-31 Thread Stefan Holzwarth
Hi,

since we moved our Active Directory environment from Windows 2000 to Windows
2003, we see a backup error with
D:\windows\ntds\temp.edb. The file is open and cannot be saved.
The file is described somewhere on the web as a scratchpad for AD.

I wonder if the file is needed for recovery.
If not - why is there no standard TSM exclude for it?
If yes - how should I back up the file?

Kind regards
Stefan Holzwarth
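If the file turns out not to be needed for recovery, the usual mechanism would be a client-side exclude in dsm.opt (a sketch only; whether temp.edb can safely be skipped is exactly the open question here):

```
exclude "D:\WINDOWS\ntds\temp.edb"
```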


Reconstruction of aggregates

2004-12-07 Thread Stefan Holzwarth
We plan to move TSM from MVS to Windows 2003, using only disk storage on 2
CX700 storage systems.

At the moment we are discussing whether to use disk and/or file pools.

Since disk is much easier to handle and to use, we are looking for a
mechanism to address the aggregate problem on disk only.

The idea is to have a kind of reclamation pool of type FILE to handle the
problem:
Large disk pool --(move volume)--> small FILE pool with immediate
reclamation --(migration by high/low watermark)--> back to the large disk pool

The problem now is deciding which disk volumes should be moved. I do not
know of any statistic that shows me which volumes should be reorganized.

Any idea?
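One starting point (my suggestion, not an answer from the thread): the server's VOLUMES table can at least rank random-access volumes by utilization, although it does not show the wasted space inside aggregates, which is exactly the statistic the poster is missing (DISKPOOL is a placeholder name):

```
select volume_name, stgpool_name, pct_utilized from volumes where stgpool_name='DISKPOOL' order by pct_utilized
```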

Kind regards
Stefan Holzwarth


AW: ANS1950E Backup via Microsoft Volume Shadow Copy failed

2004-11-18 Thread Stefan Holzwarth
Hi Karel,
have a look at 
http://support.microsoft.com/default.aspx?scid=kb;en-us;833167
Kind regards
Stefan Holzwarth

-Original Message-
From: Bos, Karel [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, 16 November 2004 09:31
To: [EMAIL PROTECTED]
Subject: ANS1950E Backup via Microsoft Volume Shadow Copy failed


Hi,

I am looking for the reason for getting this message in the error.log:

11/16/2004 09:23:17 CreateSnapshotSet(): pAsync->QueryStatus() returns
hr=E_OUTOFMEMORY
11/16/2004 09:23:29 ANS1999E Incremental processing of '\\clusternode3\c$'
stopped.

11/16/2004 09:23:29 ANS1950E Backup via Microsoft Volume Shadow Copy failed.
See error log for more detail.

I have 4 Win2003 cluster nodes configured with the same dsm.opt and
client version. Only one of them gives this message. All of them have the
same amount of CPU power and memory. The one failing is the least used.

One other thing: after a reboot the first backup will run fine. All
thereafter fail.

Any ideas?

Regards,

Karel

Clientlevel 5.2.3.4
Serverlevel 5.2.3.4


AW: TSM and SATA Disk Pools

2004-11-15 Thread Stefan Holzwarth
Thank you Charles, very good overview.
I have one more thing to mention (only my opinion):
You should try to set up a storage pool hierarchy within the SATA/FC disk
pools.

As an idea:

FC disk pool for daily backup -->
(S)ATA disk pool for storing the files as long as the policy allows new
versions (migdelay ~ versions+1) -->
(S)ATA sequential pool for long term (small volumes and collocation per node
to avoid a lot of reclamation after deleting nodes) -->
A tape pool as the last link of the chain is not necessary.

Kind Regards 
Stefan Holzwarth


-Original Message-
From: Mark D. Rodriguez [mailto:[EMAIL PROTECTED] 
Sent: Monday, 15 November 2004 23:50
To: [EMAIL PROTECTED]
Subject: Re: TSM and SATA Disk Pools


Charles,

There are some other limiting factors you must consider.  Although you
have 300+ clients, how many do you schedule to back up at the same time?
Even if it is all of them, what are your MAXSESSIONS and MAXSCHEDSESSIONS
values?  If I remember right you are running on a p630; that box can
probably handle (depending on the amount of memory, # of NICs and # of
processors) up to 200-300 concurrent sessions, but you probably have it
set much lower.  As with all things in life there are practical
limitations; 1200 volumes might seem to be a lot, but I have worked in
large environments with several hundred volumes because that is what the
environment needed!  Another question: are all of these clients going to
one disk pool and/or are some going straight to tape?

On another note that I did not address in my previous post, the original
topic was about SATA drives and their viability in an ITSM disk pool.  A
couple things to consider here:

   1. They are cheap, so you can afford to have very large disk pools -
  That's a good thing!
   2. SATA drives are typically large capacity (250GB and above) when
  used by IBM, EMC, LSI etc. - This is not so good; see my previous
  post, more drives is better.
   3. SATA drives are usually slower drives, 7200 or 10K rpm; FC drives
  can be 15K rpm - Another performance hit.
   4. The reliability, i.e. failure rate, is not as good, but this might
  not be as important in an ITSM server as it might be in a
  production DB server.
   5. In order to get good performance out of SATA you need to work a
  little harder, and you probably want to go with RAID 10 or 50 to
  get the best performance/reliability.
   6. If you have to move huge amounts of data on a daily basis with a
  minimal amount of time, i.e. you need the best possible
  performance, then SATA is not your answer!
   7. But if you need large disk pools with reasonable performance at a
  great price, then you're going to love SATA.

Good Luck and let us know how it turns out

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.


===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE

===



Hart, Charles wrote:

Fantastic read - thank you very much for the info!  Just one of our TSM
servers has 300+ clients; with collocation and a client RESOURCEUTILIZATION
setting of 4, could we potentially have to create 1200 volumes on disk?

Regards,

Charles



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Mark D. Rodriguez
Sent: Monday, November 15, 2004 1:58 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM and SATA Disk Pools


OK, so there seems to be some interest in how to lay out disk pools on an
AIX system using JFS2 instead of raw LVs.  I will try to keep this as
general as possible, so please remember you must make some choices based
on your particular environment.

* In general I would rather have more small disks than a few large ones,
  as you will see.  However, this would not apply if the larger disks
  were 15K rpm vs. smaller disks of 10K rpm.
* Creating your hdisks - there are several possibilities here
  depending on your environment.
  o Small environments with only a few disks should use JBOD.
Obviously you give up some safety over running RAID 1, 5 or
10, but small environments can't afford this anyway.
  o Mid size and above should use one of the following configs
that fits their environment the best.  If you will use RAID
5, then create several small arrays; 4 or 5 disks per array
is good, and if you have lots of disks you can go as high as
8 per array.  If you have a very large number of disks then
you can use either RAID 0 or 10; obviously RAID 10 will give
you some disk failure protection but at the cost of 2 x

AW: Failed incremental backup for NT 2003.

2004-10-29 Thread Stefan Holzwarth
Arnaud,
Thank you very much for this valuable information.
Since 5.2.2.x we have seen that error and were looking for the reason.
There were some fixes in TSM, but as you know, that was only half of the
whole story.
Kind regards,
Stefan Holzwarth

-Original Message-
From: PAC Brion Arnaud [mailto:[EMAIL PROTECTED] 
Sent: Friday, 29 October 2004 11:40
To: [EMAIL PROTECTED]
Subject: Re: Failed incremental backup for NT 2003.


Hoa,


You'll find an article on the Microsoft support web site
(http://support.microsoft.com) which may solve your problem: do a
search for 833167 and you'll be directed to an article titled Time-out
errors occur in Volume Shadow Copy service writers, and shadow copies
are lost during backup and during times when there are high levels of
input/output. There is an associated patch you should request from
Microsoft and apply on your client. We did it and it solved the problem!
Cheers.


Arnaud 


**
Panalpina Management Ltd., Basle, Switzerland, CIT Department
Viadukstrasse 42, P.O. Box 4002 Basel/CH
Phone:  +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
Direct: +41 (61) 226 19 78
e-mail: [EMAIL PROTECTED]

**

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Hoa V Nguyen
Sent: Thursday, 28 October, 2004 18:29
To: [EMAIL PROTECTED]
Subject: Failed incremental backup for NT 2003.

Hi gurus,

Our NT guys have issues backing up Windows 2003 with the following error
messages:

10/24/2004 18:44:24 CreateSnapshotSet(): AddToSnapshotSet() returns
hr=VSS_E_UNEXPECTED_PROVIDER_ERROR
10/24/2004 18:44:26 ANS1950E Backup via Microsoft Volume Shadow Copy
failed.  See error log for more detail.

I would appreciate it if you folks could shed some light, please.

We have opened PMR 60173,370 with IBM regarding the Windows 2003 TSM
client errors, but haven't heard any good news yet.

Client version: 5.2.3
TSM   5.2.2.3

Thank you.
Hoa.


AW: NAS Storage for Copypools

2004-10-19 Thread Stefan Holzwarth
Jozef, thanks for that hint. You are right, I forgot ...
Kind regards
Stefan Holzwarth

-Original Message-
From: Jozef Zatko [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, 19 October 2004 08:32
To: [EMAIL PROTECTED]
Subject: Re: NAS Storage for Copypools


Stefan,
check that the user under which your TSM server is running can see and
access your shared drive. If you run TSM under the system user, by default
this user does not have access to network shares.

Hope this helps

Ing. Jozef Zatko
Login a.s.
Dlha 2, Stupava
tel.: (421) (2) 60252618



Stefan Holzwarth [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
10/18/2004 04:01 PM

Please respond to
ADSM: Dist Stor Manager [EMAIL PROTECTED]


To
[EMAIL PROTECTED]
cc

Subject
NAS Storage for Copypools






Hi, I thought this was a simple question, but ...

Can you use network shares for storage pools?
I tried a devclass of type FILE, but got an error:

ANS8000I Server command: 'define devc FILEDEV1 devtype=FILE
directory=Y:\
mountl=32 maxcap=25M'
ANR8366E DEFINE DEVCLASS: Invalid value for DIRECTORY parameter.

Y: is a network share on a large NAS box (share rights and ACLs are OK)

I am thinking about a configuration for copy pools that uses distributed
unused space...

Kind Regards
Stefan Holzwarth


NAS Storage for Copypools

2004-10-18 Thread Stefan Holzwarth
Hi, I thought this was a simple question, but ...

Can you use network shares for storage pools?
I tried a devclass of type FILE, but got an error:

ANS8000I Server command: 'define devc FILEDEV1 devtype=FILE directory=Y:\
mountl=32 maxcap=25M'
ANR8366E DEFINE DEVCLASS: Invalid value for DIRECTORY parameter.

Y: is a network share on a large NAS box (share rights and ACLs are OK)

I am thinking about a configuration for copy pools that uses distributed
unused space...

Kind Regards
Stefan Holzwarth
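Jozef's reply earlier in this digest points at the likely cause: the server service account cannot see the mapped drive. A commonly suggested workaround is to run the TSM server service under an account with rights to the share and to point the device class at a UNC path rather than a drive letter (a sketch; server name, share, and sizes are placeholders):

```
define devclass filedev1 devtype=file directory=\\nasbox\tsmshare mountlimit=32 maxcapacity=25M
```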


AW: MS-SQL TDP weirdness - more issues

2004-10-06 Thread Stefan Holzwarth
Del,
try relocating at the command line; the GUI isn't supported for that task
(but offers it).
Regards Stefan Holzwarth

-Original Message-
From: Zoltan Forray/AC/VCU [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 6 October 2004 15:26
To: [EMAIL PROTECTED]
Subject: Re: MS-SQL TDP weirdness - more issues


My SQL guy sent me the following email about the restore:

ok, got it to work finally.  after upgrading the client, it still
wouldn't restore to my SQL 7 instance.  so i tried the SQL 2000 instance,
and that gave me an error that the path was incorrect (it was trying to
restore to the D drive, which is where it was stored on ADMNT30, but i
didn't have a D drive on DSS7).  so i tried to do a relocate files in the
TDP client, but the files and paths aren't there!  why?  i don't know,
they should be.  so, on DSS7, i have a small separate hard drive i had
marked as E, i wiped it out, and renamed it D, and did the restore, and it
seems to be working.  what a pain in the !

Is there a workaround for this issue ?  Does it always have to go back to
the same drive-letter/path ?
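The command-line relocation Stefan suggests at the top of this thread would look roughly like this with the TDP for SQL command-line client (database name, logical file name, and target path are hypothetical; check tdpsqlc restore /? at your level for the exact syntax):

```
tdpsqlc restore mydb full /relocate=mydb_data /to=e:\sqldata\mydb.mdf
```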




Del Hoobler [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
10/05/2004 02:06 PM
Please respond to
ADSM: Dist Stor Manager [EMAIL PROTECTED]


To
[EMAIL PROTECTED]
cc

Subject
Re: MS-SQL TDP weirdness






Zoltan,

No... there were things added. (See the README for details.)
In addition, if you backed up with a newer version
and tried restoring with an older version (like 2.2.1),
there could be problems with metadata alignment,
which might result in weird stripe values.
Bottom line, make sure you are running with 5.2.1
on the restore machine and let us know what happens.

Thanks,

Del




ADSM: Dist Stor Manager [EMAIL PROTECTED] wrote on 10/05/2004
01:42:40 PM:

 We did check and everything is the default, which should be *1*.

 We checked and the TDP is V2.2.1.  He is installing the 5.2.1
 upgrade/update, although I thought it was simply a version realignment
?



AW: TSM Client 5.2.3 for XP Pro

2004-10-06 Thread Stefan Holzwarth
Chris,
What's your server level?
As far as I know, for ASR you need at least 5.2.x.
Regards
Stefan Holzwarth

-Original Message-
From: Christopher Zahorak [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 6 October 2004 17:34
To: [EMAIL PROTECTED]
Subject: TSM Client 5.2.3 for XP Pro


Hello All,

I'm starting to test the TSM ASR functionality of XP. When I use the
Backup/Archive GUI (version 5.2.3) client on an XP Pro workstation, I do not
see the ASR (Automated System Recovery) entry in the tree list. I also
cannot find any of the ASR-related files
(asr.sif, asrpnp.sif, tsmasr.cmd, waitforevent.exe, tsmasr.op) which should
be in the c:\adsm.sys\ASR directory.

 The XP Pro workstation has the SP2 update.

I can manually create an ASR diskette using the MS Backup utility, so I
don't believe the problem is within MS.

Any ideas?

Chris.


AW: disk storage pools and windows compressed file systems

2004-09-22 Thread Stefan Holzwarth
Hi Steve,
I think it's better to use the client CPU for that task, to save disk space
and CPU at the TSM server.
That also reduces network load for backup and restore.
We have used compression at the client on all TSM clients, including TDP for
SQL, for a long time.
The load during backup time is typically low, so we can use the CPU for that
task without problems.
We are quite satisfied with the performance of that solution.
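Client-side compression, as described here, is switched on in the client options file; a minimal sketch (COMPRESSALWAYS is optional and shown only as an example):

```
* dsm.opt
compression yes
compressalways yes
```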

Regards 
Stefan Holzwarth

-Original Message-
From: Steve Bennett [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, 21 September 2004 19:11
To: [EMAIL PROTECTED]
Subject: disk storage pools and windows compressed file systems


Have any of you used disk primary storage pools on Windows
compressed file systems? Comments on performance, etc.?

We are investigating use of a multi-TB RAID 5 array as a buffer
between our local primary disk pool and the tape pool. I have seen the
posts regarding FILE vs. DISK device classes, but what about compression?
Good, bad, etc.?

Win 2000 sp4 with TSM server 5.2.3.2

--

Steve Bennett, (907) 465-5783
State of Alaska, Enterprise Technology Services, Technical Services Section


AW: Expiration performance

2004-09-13 Thread Stefan Holzwarth
Hi, our current performance:

TSM 5.2.3.1 @ z/OS 1.4, ~626 MIPS
1 GB region size, buffer pool 175,000 pages of 4 KB
34 GB DB with 78% util - about 4,000,000 objects, 300,000 deleted each day
11 GB log
1 TB primary disk pool
All on a heavily loaded EMC Symmetrix 5930, RAID 5 with SRDF sync
Tape: 8 x 3590B and 4 x 3490(!)

ACTIVITY    Date        Objects Examined Up/Hr
----------  ----------  ----------------------
EXPIRATION  2004-08-01  298800
EXPIRATION  2004-08-02  478800
EXPIRATION  2004-08-03  471600
EXPIRATION  2004-08-04  212400
EXPIRATION  2004-08-05  342000
EXPIRATION  2004-08-06  500400
EXPIRATION  2004-08-08  320400
EXPIRATION  2004-08-09  547200
EXPIRATION  2004-08-10  504000
EXPIRATION  2004-08-11  486000
EXPIRATION  2004-08-12  410400
EXPIRATION  2004-08-13  550800
EXPIRATION  2004-08-15  367200
EXPIRATION  2004-08-16  529200
EXPIRATION  2004-08-17  525600
EXPIRATION  2004-08-18  428400
EXPIRATION  2004-08-19  259200
EXPIRATION  2004-08-20  115200
EXPIRATION  2004-08-20  342000
EXPIRATION  2004-08-22  363600
EXPIRATION  2004-08-23  525600
EXPIRATION  2004-08-23  597600
EXPIRATION  2004-08-24  583200
EXPIRATION  2004-08-25  514800
EXPIRATION  2004-08-26  331200
EXPIRATION  2004-08-27  356400
EXPIRATION  2004-08-29  356400
EXPIRATION  2004-08-30  547200
EXPIRATION  2004-08-31  478800
EXPIRATION  2004-09-01  262800
EXPIRATION  2004-09-02  208800
EXPIRATION  2004-09-03  482400
EXPIRATION  2004-09-05  36
EXPIRATION  2004-09-06  597600

ACTIVITY       Date        Pages Backed Up/Hr
-------------  ----------  ------------------
FULL_DBBACKUP  2004-07-17   4795200
FULL_DBBACKUP  2004-07-24   5929200
FULL_DBBACKUP  2004-07-29   1112400
FULL_DBBACKUP  2004-07-31   720
FULL_DBBACKUP  2004-08-07   5292000
FULL_DBBACKUP  2004-08-14   4096800
FULL_DBBACKUP  2004-08-20   1969200
FULL_DBBACKUP  2004-08-21   4953600
FULL_DBBACKUP  2004-08-28   6922800
FULL_DBBACKUP  2004-09-03   5410800
FULL_DBBACKUP  2004-09-11   7722000


Kind regards,
Stefan Holzwarth

--
ADAC e.V. (Informationsverarbeitung - Systemtechnik - Basisdienste)
Am Westpark 8, 81373 München, Tel.: (089) 7676-5212, Fax: (089) 76768924
mailto:[EMAIL PROTECTED]



-Original Message-
From: goc [mailto:[EMAIL PROTECTED] 
Sent: Monday, 13 September 2004 10:42
To: [EMAIL PROTECTED]
Subject: Re: Expiration performance


ok, I have to share mine ... looks pretty messy ... LOL ... but here you go
...
specs:
IBM p640, 4GB RAM, 2 procs
db 24GB and 80% utilized, on SSA disks on 5 volumes ... RAID5 - that's
important!

  Operation Results


ACTIVITY       Date        Pages Backed Up/Hr
-------------  ----------  ------------------
FULL_DBBACKUP  2004-08-14  12888000
FULL_DBBACKUP  2004-08-14  13060800
FULL_DBBACKUP  2004-08-14   6843600
FULL_DBBACKUP  2004-08-15  13975200
FULL_DBBACKUP  2004-08-15

AW: Restore from node to another node

2004-07-09 Thread Stefan Holzwarth
I have done exactly the same thing for 2 years (CIFS shares). But now I have
more and more (DMZ) clients that have no CIFS and cannot send by mail.
At the moment every admin of those systems is responsible for giving me
access to dsmsched.log for reporting services about the backups - one does
this by FTP, one by mail, and so on - not very efficient. (The information
at the TSM server is not sufficient, in my opinion.)

So I thought about using TSM, which already has all the needed dsmsched.logs
within the daily backups.

Thank you for all your responses!
Kind regards Stefan 


-Original Message-
From: TSM_User [mailto:[EMAIL PROTECTED] 
Sent: Thursday, 8 July 2004 21:12
To: [EMAIL PROTECTED]
Subject: Re: Restore from node to another node


I know of a site that copies the dsmerror and dsmsched logs to a
network share after each backup. Of course they append the node name to the
front of each one.  They then have a script that looks at all the logs in
that directory and produces some reports.  I'm not sure how much better that
is than the Operational Reporter, but I thought I'd just let you know how
I've seen it done.

Also, that one share is backed up by one client, so they effectively have
one node with every dsmsched and dsmerror log backed up that can be restored
somewhere later if needed.

Andrew Raibeck [EMAIL PROTECTED] wrote:
What you are asking for is not possible, at least not with a capability of
being able to restore from a single system. Among other things, you can
not use a single client to restore all sched logs since, for example, a
Windows client can not restore data backed up by NetWare or Unix.

Your best bet might be to try to automate some process where the client's
dsmsched.log and dsmerror.log files are emailed to you, or copied to a
shared network disk. Perhaps you can set up some kind of POSTSCHEDULECMD
processing to handle this.
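The POSTSCHEDULECMD idea could be sketched like this on a Windows client (the helper script name and target share are hypothetical):

```
* in dsm.opt:
postschedulecmd "c:\scripts\copylogs.cmd"
```

where copylogs.cmd might simply copy the log with the node name prepended:

```
copy "c:\program files\tivoli\tsm\baclient\dsmsched.log" \\logserver\tsmlogs\%COMPUTERNAME%_dsmsched.log
```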

Other members of this list with far more practical experience than I might
have other suggestions to offer.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.

ADSM: Dist Stor Manager wrote on 07/08/2004
08:30:18:

 Andy,
 since I want to collect and analyse the dsmsched.log of the completed
 backups, I would like to do the following:

 - a daily second schedule for each node that does a backup of dsmsched.log -
 easy and e.g. no access problems for firewalled systems, no different access
 methods for different operating systems, ...
 - restore of those logs from a central secure point for analysis and
 reporting - I would like to do this automated, with as little effort as
 possible (no SET ACCESS for new nodes, ...)

 so: can the admin logon from the command line be fully automated?

 Thanks a lot
 Stefan Holzwarth




AW: AW: Antwort: Restore from node to another node

2004-07-08 Thread Stefan Holzwarth
Andy, 
since I want to collect and analyse the dsmsched.log of the completed
backups, I would like to do the following:

- a daily second schedule for each node that does a backup of dsmsched.log -
easy and e.g. no access problems for firewalled systems, no different access
methods for different operating systems, ...
- restore of those logs from a central secure point for analysis and
reporting - I would like to do this automated, with as little effort as
possible (no SET ACCESS for new nodes, ...)

So: can the admin logon from the command line be fully automated?

Thanks a lot
Stefan Holzwarth

-Original Message-
From: Andrew Raibeck [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 7 July 2004 18:38
To: [EMAIL PROTECTED]
Subject: Re: AW: Antwort: Restore from node to another node


If you use -VIRTUALNODENAME=nodename from the command line, then enter 
your admin ID and password when prompted for the user ID, you can access 
the node's data. Some considerations:

1) In general, most users do not (or should not) have admin IDs that can 
access another node's data. As a TSM administrator (as most ADSM-L 
subscribers are) you have this access, but if non-admin user A wants to 
share data with non-admin user B, the -VIRTUALNODENAME option won't work.

2) In general, I would hope that admins with the authority to access other 
node's data would not do so indiscriminately or without the user's 
knowledge.

3) My comments are for very general cases (which is why I say in 
general... quite a bit) since the original question was not very 
specific. Of course, depending on your exact circumstances, one method 
might be preferable to the other. But for the very general question of 
how can I access node A's data from node B?, my general answer is still 
to use SET ACCESS and -FROMNODE. The -VIRTUALNODENAME method is available 
only to TSM administrators (almost certainly the minority of your TSM user 
community) and -NODENAME should be used rarely, if ever, as I've already 
mentioned.
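The -VIRTUALNODENAME variant Andy describes looks like this on the command line (node and file names are borrowed from elsewhere in this digest as placeholders; the client prompts for a user ID, where an administrator ID can be entered):

```
dsmc restore -virtualnodename=szent063 \\szent063\c$\adsm32\baclient\dsmsched.log c:\nobackup\
```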

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.

ADSM: Dist Stor Manager [EMAIL PROTECTED] wrote on 07/07/2004 
08:55:52:

 Within the GUI I have another option: to use my admin account at the TSM
 server for authentication.
 Can you do this from the command line? It's very comfortable, and you have
 no password issues and no SET ACCESS stories...
 Kind Regards
 Stefan Holzwarth
 
 
 -Original Message-
 From: Andrew Raibeck [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, 7 July 2004 17:45
 To: [EMAIL PROTECTED]
 Subject: Re: Antwort: Restore from node to another node
 
 
 I do not recommend putting NODE1's node name in NODE2's options file. If
 PASSWORDACCESS GENERATE is being used (which is the likely case) then 
this
 will cause NODE1's password to be encrypted on NODE2's machine. For
 example, if you came over to my machine and put your node name in my
 options file so we could restore one of your files to my machine, then
 once you enter the password, it will be encrypted on my machine, 
allowing
 me access to all your files whenever I want (or until the password is
 changed externally). Not only that, but I could change your password and
 thus deny you access to your own node data, at least until you can get 
the
 TSM admin to straighten out the situation. So except in specific cases
 where the behavior I describe is acceptable or desirable, I do not
 recommend using the NODENAME method. Instead, either use SET ACCESS to
 allow NODE2 to access NODE1's data, or else use -VIRTUALNODENAME=NODE1 
if
 you want to come over to my machine and enter the password. With
 -VIRTUALNODENAME, the password will not be stored on my machine.
 
 Regards,
 
 Andy
 
 Andy Raibeck
 IBM Software Group
 Tivoli Storage Manager Client Development
 Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
 Internet e-mail: [EMAIL PROTECTED]
 
 The only dumb question is the one that goes unasked.
 The command line is your friend.
 Good enough is the enemy of excellence.
 
 ADSM: Dist Stor Manager [EMAIL PROTECTED] wrote on 07/07/2004
 08:31:57:
 
  You can change the DSM.OPT/SYS and specify the NODENAME of the server
 you
  want to restore FROM. Or you can start the GUI with the
 -VIRTUALNODENAME=
  parameter. Another way is to grant access to the FROM nodes' data to 
the
 TO
  node.
 
  But you are still limited to restoring to the SAME operating
  system... Windows->Windows, Unix->Unix and Novell->Novell... no
  cross-system restores.
 
  Bill Boyer
  DSS, Inc.
 
  -Original Message-
  From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf 
Of
  Christian Demnitz
  Sent: Wednesday, July 07, 2004 11:11 AM
  To: [EMAIL PROTECTED]
  Subject: Antwort: Restore from node

AW: Antwort: Restore from node to another node

2004-07-07 Thread Stefan Holzwarth
Within the GUI I have another option: to use my admin account at the TSM
server for authentication.
Can you do this from the command line? It's very comfortable, and you have
no password issues and no SET ACCESS stories...
Kind Regards
Stefan Holzwarth


-Original Message-
From: Andrew Raibeck [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, 7 July 2004 17:45
To: [EMAIL PROTECTED]
Subject: Re: Antwort: Restore from node to another node


I do not recommend putting NODE1's node name in NODE2's options file. If
PASSWORDACCESS GENERATE is being used (which is the likely case) then this
will cause NODE1's password to be encrypted on NODE2's machine. For
example, if you came over to my machine and put your node name in my
options file so we could restore one of your files to my machine, then
once you enter the password, it will be encrypted on my machine, allowing
me access to all your files whenever I want (or until the password is
changed externally). Not only that, but I could change your password and
thus deny you access to your own node data, at least until you can get the
TSM admin to straighten out the situation. So except in specific cases
where the behavior I describe is acceptable or desirable, I do not
recommend using the NODENAME method. Instead, either use SET ACCESS to
allow NODE2 to access NODE1's data, or else use -VIRTUALNODENAME=NODE1 if
you want to come over to my machine and enter the password. With
-VIRTUALNODENAME, the password will not be stored on my machine.

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.

ADSM: Dist Stor Manager [EMAIL PROTECTED] wrote on 07/07/2004
08:31:57:

 You can change the DSM.OPT/SYS and specify the NODENAME of the server
you
 want to restore FROM. Or you can start the GUI with the
-VIRTUALNODENAME=
 parameter. Another way is to grant access to the FROM nodes' data to the
TO
 node.

 But you are still limited to restoring to the SAME operating
 system... Windows->Windows, Unix->Unix and Novell->Novell... no
 cross-system restores.

 Bill Boyer
 DSS, Inc.

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
 Christian Demnitz
 Sent: Wednesday, July 07, 2004 11:11 AM
 To: [EMAIL PROTECTED]
 Subject: Antwort: Restore from node to another node


 yes, you have to modify the dsm.opt/sys entry with your restore node!



 Christian Demnitz
 CoC ADSM/TSM

 Sinius GmbH
 mailto:[EMAIL PROTECTED]
 WWW - http://www.sinius.com




 Timothy Hughes [EMAIL PROTECTED]
 Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
 07.07.2004 17:03
 Please respond to
 ADSM: Dist Stor Manager [EMAIL PROTECTED]


 To
 [EMAIL PROTECTED]
 Cc

 Subject
 Restore from node to another node






 Hello, All

 Is it possible to restore a file from one Node to another Node
 using the GUI?

 Also, is it possible to restore a file on a Novell box to a Windows box?

 Thanks in advance for all responses!

 TSM version 5.2.1.3
 Running AIX 5.1


AW: Restore from Commandline with fromnode

2004-07-06 Thread Stefan Holzwarth
Sorry, no success with:

dsmc restore -pick=yes -subdir=yes -fromnode=szent063
\\szent063\c$\adsm32\baclient\dsmsched.log c:\nobackup
ANS1302E No objects on server match query

And

dsmc restore -pick=yes -subdir=yes -fromnode=szent063
\\szent063\c$\dsmsched.log c:\nobackup
ANS1302E No objects on server match query

But 

C:\adsm32\baclient> dsmc query file -fromnode=szent063
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2, Level
2.10

(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights
Reserved.

Node Name: PC7802
Session established with server ADSM: MVS
  Server Version 5, Release 2, Level 2.1
  Server date/time: 07/06/2004 16:32:21  Last access: 07/06/2004 16:31:59

Num Last Incr Date  TypeFile Space Name
--- --  ---
  1   07/05/2004 22:56:24   NTFSSYSTEM OBJECT fsID: 7
  2   07/05/2004 22:14:29   NTFS\\szent063\c$ fsID: 5
  3   07/05/2004 22:53:26   NTFS\\szent063\d$ fsID: 6
  4   06/24/2004 09:13:47   NTFS\\szent063\e$ fsID: 16
  5   06/24/2004 09:13:47   NTFS\\szent063\f$ fsID: 15


What's wrong?
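A hedged next step (my suggestion, not from the thread): confirm what the server actually holds under that filespace before retrying the restore:

```
dsmc query backup -fromnode=szent063 -subdir=yes \\szent063\c$\adsm32\baclient\*
```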



-Original Message-
From: Bill Dourado [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, 6 July 2004 16:30
To: [EMAIL PROTECTED]
Subject: Re: Restore from Commandline with fromnode


Stefan

Try this:

 dsmc restore -pick=yes -subdir=yes  -fromnode=szent063
\\szent063\c$\dsmsched.log c:\nobackup\

Bill 








Stefan Holzwarth [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
06/07/2004 15:17
Please respond to ADSM: Dist Stor Manager

 
To: [EMAIL PROTECTED]
cc: 
Subject:Restore from Commandline with fromnode


Hi,
I need your help, since I do not see the mistake/error.

For daily backup reports I am trying to restore dsmsched.log from a central
node for all NT clients.
(TSM Server 5.2.2.1, Client 5.2.2.10)

I tried for example: dsmc restore -pick=yes -fromnode=szent063
\\szent063\c$\* c:\nobackup\
Result: ANS1302E No objects on server match query

Using the TSM GUI I have no problems doing cross-restores.

Kind Regards
Stefan Holzwarth


-----Original Message-----
From: Rajesh Oak [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 6 July 2004 15:48
To: [EMAIL PROTECTED]
Subject: Re: Where are the TSM API docs???


http://www.adsm.org

Rajesh Oak

- Original Message -
From: P G [EMAIL PROTECTED]
Date: Mon, 5 Jul 2004 18:13:13 -0700
To: [EMAIL PROTECTED]
Subject: Where are the TSM API docs???

 I have the IBM Tivoli Storage Manager AIX 5.1 64-bit
 product and would like to use the client API to
 develop a client program, but I cannot find the
 documentation referenced in the README.api.  The
 README.api references the Tivoli Storage Manager
 Using the API (SH26-4123) and Tivoli Storage Manager
 Installing the Clients (SH26-4102) books.  Will
 someone point me to where these docs are located or
 any place containing documentation as to how to utilize
 the API?

 Also, I am new to this mailing list.  Is there a web
 address I can visit to search this mailing list's
 archives?

 Thanks.




 __
 Do you Yahoo!?
 New and Improved Yahoo! Mail - Send 10MB messages!
 http://promotions.yahoo.com/new_mail


--
___
Find what you are looking for with the Lycos Yellow Pages
http://r.lycos.com/r/yp_emailfooter/http://yellowpages.lycos.com/default.asp
?SRC=lycos10


AW: AW: Restore from Commandline with fromnode

2004-07-06 Thread Stefan Holzwarth
Andi, you are right!

In the TSM GUI I used my admin account for authentication - so I missed the
grant access step for node names!

Thanks a lot!
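For the archives, the resolution sketched as commands (node and path names
are the ones from this thread; exact syntax may vary by client level):

```
rem On szent063: grant node PC7802 access to this node's backup data
dsmc set access backup \\szent063\c$\* pc7802

rem Then on pc7802: the cross-node restore should find objects
dsmc restore -fromnode=szent063 \\szent063\c$\adsm32\baclient\dsmsched.log c:\nobackup\
```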

-----Original Message-----
From: Andrew Raibeck [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 6 July 2004 16:48
To: [EMAIL PROTECTED]
Subject: Re: AW: Restore from Commandline with fromnode


If you run

   dsmc query access

from the machine whose node is szent063, what is the output? Does it 
indicate that node pc7802 has been given access?

Regards,

Andy

Andy Raibeck
IBM Software Group
Tivoli Storage Manager Client Development
Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
Internet e-mail: [EMAIL PROTECTED]

The only dumb question is the one that goes unasked.
The command line is your friend.
Good enough is the enemy of excellence.



Stefan Holzwarth [EMAIL PROTECTED] 
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
07/06/2004 07:36
Please respond to
ADSM: Dist Stor Manager


To
[EMAIL PROTECTED]
cc

Subject
AW: Restore from Commandline with fromnode






Sorry, no success with:

dsmc restore -pick=yes -subdir=yes -fromnode=szent063
\\szent063\c$\adsm32\baclient\dsmsched.log c:\nobackup
ANS1302E No objects on server match query

And

dsmc restore -pick=yes -subdir=yes -fromnode=szent063
\\szent063\c$\dsmsched.log c:\nobackup
ANS1302E No objects on server match query

But 

C:\adsm32\baclient> dsmc query file -fromnode=szent063
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 2, Level
2.10

(c) Copyright by IBM Corporation and other(s) 1990, 2003. All Rights
Reserved.

Node Name: PC7802
Session established with server ADSM: MVS
  Server Version 5, Release 2, Level 2.1
  Server date/time: 07/06/2004 16:32:21  Last access: 07/06/2004 16:31:59

Num Last Incr Date  TypeFile Space Name
--- --  ---
  1   07/05/2004 22:56:24   NTFSSYSTEM OBJECT fsID: 7
  2   07/05/2004 22:14:29   NTFS\\szent063\c$ fsID: 5
  3   07/05/2004 22:53:26   NTFS\\szent063\d$ fsID: 6
  4   06/24/2004 09:13:47   NTFS\\szent063\e$ fsID: 16
  5   06/24/2004 09:13:47   NTFS\\szent063\f$ fsID: 15


What's wrong?





AW: AW: Command line equivalent for GUI

2004-05-13 Thread Stefan Holzwarth
Andi, you always have to change your scripts when there are new directories
on the NAS system...
So the risk of missing one is high.
Kind regards, Stefan Holzwarth

 -----Original Message-----
 From: Andrew Raibeck [mailto:[EMAIL PROTECTED]
 Sent: Thursday, 13 May 2004 15:51
 To: [EMAIL PROTECTED]
 Subject: Re: AW: Command line equivalent for GUI
 
 
 If I understand this correctly, include/exclude isn't really the issue
 since from the GUI you were picking the specific 
 subdirectories that you
 wanted to process. Why not simply do the same thing from the CLI?
 
 For example, if from the GUI you were selecting directories 
 C:\Some Dir,
 D:\MyDir, and D:\work\datafiles, then why not just do this?
 
    dsmc i "c:\some dir\*" d:\mydir\ d:\work\datafiles\ -subdir=yes
 
 Note that the asterisk is required in the first example because the
 embedded space in the file specification necessitates using quotation
 marks. If you don't care why, then you can skip the rest of this. For
 those who want to know, it has to do with how the Windows command
 processor parses quotes, backslashes, and escape characters.
 
 If I had used a command like this, without the asterisk:
 
    dsmc i "c:\some dir\" d:\mydir\ d:\work\datafiles\ -subdir=yes
 
 Then the Windows command processor would have treated the ending
 slash-quote combination in "c:\some dir\" as an embedded
 quotation mark
 rather than a delimiter, passing the following arguments to the TSM
 client:
 
arg1: i
    arg2: c:\some dir" d:\mydir\ d:\work\datafiles\ -subdir=yes
 
 which would make the client think that arg2 was the entire file
 specification, rather than three file specs and an option.
 What we really
 want to pass to the client is:
 
arg1: i
arg2: c:\some dir
arg3: d:\mydir\
arg4: d:\work\datafiles\
arg5: -subdir=yes
 
 If this doesn't make sense, don't worry about it; just use
 the asterisk
 as I showed above and it should work fine.
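The parsing rule Andy describes can be sketched in a few lines of Python (a
simplified model of the Microsoft C runtime splitter, assuming only the
backslash-before-quote rule matters; the full 2n-backslashes rule is ignored):

```python
def msvcrt_split(cmdline):
    """Simplified sketch of Windows C-runtime argument splitting:
    a backslash directly before a quote yields a literal quote;
    a bare quote toggles quoted mode; spaces split only outside quotes."""
    args, cur, in_quotes, i = [], "", False, 0
    while i < len(cmdline):
        c = cmdline[i]
        if c == "\\" and i + 1 < len(cmdline) and cmdline[i + 1] == '"':
            cur += '"'                 # \" -> embedded quotation mark
            i += 2
        elif c == '"':
            in_quotes = not in_quotes  # delimiter, not part of the arg
            i += 1
        elif c == " " and not in_quotes:
            if cur:
                args.append(cur)
                cur = ""
            i += 1
        else:
            cur += c
            i += 1
    if cur:
        args.append(cur)
    return args

# Without the asterisk, the trailing \" swallows the closing quote, so
# everything after "some dir" collapses into one argument:
print(msvcrt_split('i "c:\\some dir\\" d:\\mydir\\ -subdir=yes'))
# With the asterisk, the quote closes cleanly and each spec stays separate:
print(msvcrt_split('i "c:\\some dir\\*" d:\\mydir\\ -subdir=yes'))
```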
 
 Regards,
 
 Andy
 
 Andy Raibeck
 IBM Software Group
 Tivoli Storage Manager Client Development
 Internal Notes e-mail: Andrew Raibeck/Tucson/[EMAIL PROTECTED]
 Internet e-mail: [EMAIL PROTECTED]
 
 The only dumb question is the one that goes unasked.
 The command line is your friend.
 Good enough is the enemy of excellence.
 
 Hi,

 I'm backing up a NAS device with 1M+ files. As other
 posts indicate, the only way to get any performance in
 this setup is to split the file backups across
 multiple sessions. In my case I have two backup hosts,
 each running two backup streams. I can start each of
 the four backup streams using the GUI and selecting
 appropriate folders. A kind of load balancing is
 achieved by picking an appropriate mix of folders for
 each backup stream. Each backup runs in a few hours,
 each stream processing a subset of the 1M files.

 My problem is I can't figure out how to accomplish the
 same thing using the command line and associated
 include/excludes. I'd like to schedule the four
 streams to run the same way, each with an appropriate
 set of folders/files to process. It would be wonderful
 if I could somehow see how this is accomplished via
 the GUI, as I can't seem to get the same behavior
 using commands. Each combination I have tried always
 results in each backup stream processing/scanning all
 of the 1M files! This takes way too many hours and
 doesn't provide the same parallel behavior I get when
 running the GUI. How can I limit each stream to
 process only the files/folders I desire without
 scanning everything, the same way as the GUI seems to
 function?

 Thanks for your help,
 Rodney Hroblak
 ADP

 __
 Do you Yahoo!?
 Yahoo! Movies - Buy advance tickets for 'Shrek 2'
 http://movies.yahoo.com/showtimes/movie?mid=1808405861
 


AW: Command line equivalent for GUI

2004-05-13 Thread Stefan Holzwarth
Hi Rodney,

we have been using the following successfully for 2 years
(for each volume on the NAS server sznas01 we have a construct like the
example data1):

Content of dsm_a.opt  
NODENAMESZNAS01_A
* EXCLUDE.DIR \\sznas01\tsmdata1$\[A-E]*
EXCLUDE.DIR \\sznas01\tsmdata1$\[F-J]*
EXCLUDE.DIR \\sznas01\tsmdata1$\[K-O]*
EXCLUDE.DIR \\sznas01\tsmdata1$\[P-T]*
EXCLUDE.DIR \\sznas01\tsmdata1$\[U-Z0-9$]*

Content of dsm_b.opt
NODENAMESZNAS01_B
EXCLUDE.DIR \\sznas01\tsmdata1$\[A-E]*
*EXCLUDE.DIR \\sznas01\tsmdata1$\[F-J]*
EXCLUDE.DIR \\sznas01\tsmdata1$\[K-O]*
EXCLUDE.DIR \\sznas01\tsmdata1$\[P-T]*
EXCLUDE.DIR \\sznas01\tsmdata1$\[U-Z0-9$]*


So we have 5 TSM nodes defined on one NT machine.
The advantage is that you don't have such big volumes to back up and recover
per TSM node.

We are able to do a full incremental of all volumes of our filer within 24
hours to TSM 5.2 @ MVS (8M+ objects, 3 TByte in size).
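A quick sanity check of this partitioning scheme (a sketch; the directory
names are made up). Each top-level directory should match exactly one of the
five first-letter ranges, i.e. be backed up by exactly one node:

```python
import fnmatch

# The five first-letter ranges from the dsm_*.opt files above.
ranges = ["[A-E]*", "[F-J]*", "[K-O]*", "[P-T]*", "[U-Z0-9$]*"]

def covering_nodes(dirname):
    # Node X backs up a directory iff it matches X's own range
    # (every other range is EXCLUDE.DIR'd in X's option file).
    return [pat for pat in ranges if fnmatch.fnmatchcase(dirname, pat)]

# Made-up example directories; each must fall into exactly one range,
# so no directory is backed up twice and none is missed.
for d in ["ACCOUNTING", "HR", "MAIL", "SALES", "2004", "$DATA"]:
    assert len(covering_nodes(d)) == 1, d
```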

Kind regards
Stefan Holzwarth


 -----Original Message-----
 From: rh [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, 12 May 2004 18:17
 To: [EMAIL PROTECTED]
 Subject: Command line equivalent for GUI
 
 
 Hi,
 
 I'm backing up a NAS device with 1M+ files. As other
 posts indicate, the only way to get any performance in
 this setup is to split the file backups across
 multiple sessions. In my case I have two backup hosts,
 each running two backup streams. I can start each of
 the four backup streams using the GUI and selecting
 appropriate folders. A kind of load balancing is
 achieved by picking an appropriate mix of folders for
 each backup stream. Each backup runs in a few hours
 each stream processing a subset of the 1M files.
 
 My problem is I can't figure out how to accomplish the
 same thing using the command line and associated
 include/excludes. I'd like to schedule the four
 streams to run the same way, each with an appropriate
 set of folders/files to process. It would be wonderful
 if I could somehow see how this is accomplished via
 the GUI, as I can't seem to get the same behavior
 using commands. Each combination I have tried always
 results in each backup stream processing/scanning all
 of the 1M files! This takes way too many hours and
 doesn't provide the same parallel behavior I get when
 running the GUI. How can I limit each stream to
 processes only the files/folders I desire without
 scanning everything; the same way as the GUI seems to
 function?
 
 Thanks for your help,
 Rodney Hroblak
 ADP
 
 
 
 
 __
 Do you Yahoo!?
 Yahoo! Movies - Buy advance tickets for 'Shrek 2'
 http://movies.yahoo.com/showtimes/movie?mid=1808405861
 


AW: ntuser.dat backup on Windows 2003

2004-05-09 Thread Stefan Holzwarth
No way to do that with TSM at the moment.
We use a preschedule command to back up that data with ntbackup.
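A sketch of what that can look like in dsm.opt (the job name and .bkf path
are examples only, and the exact ntbackup arguments may differ per
environment):

```
* dsm.opt fragment (sketch): run ntbackup before the scheduled backup,
* so the system-state/profile data lands in a file TSM then picks up.
PRESCHEDULECMD "ntbackup backup systemstate /J TSM-pre /F c:\prebackup\systemstate.bkf"
```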

Regards
Stefan Holzwarth

-----Original Message-----
From: TSM_User [mailto:[EMAIL PROTECTED]
Sent: Saturday, May 08, 2004 9:32 PM
To: [EMAIL PROTECTED]
Subject: ntuser.dat backup on Windows 2003


On Windows 2003 it appears that a user that is currently logged on does not
have their ntuser.dat file backed up under adsm.sys like Windows 2000 and
Windows XP.

Was eliminating this done for a specific reason?  Or is the profile data
backed up differently on Windows 2003?


-
Do you Yahoo!?
Win a $20,000 Career Makeover at Yahoo! HotJobs


AW: AW: Internal Memory Managment TSM @MVS

2004-04-28 Thread Stefan Holzwarth
John,
search for "TSM Server performance degradation on MVS"
or use reference number 1155186.
Stefan

 -----Original Message-----
 From: John Naylor [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, 28 April 2004 10:47
 To: [EMAIL PROTECTED]
 Subject: Re: AW: Internal Memory Managment TSM @MVS
 
 
 Stefan/Zoltan,
 I would be interested in reading more on this.
 I cannot find any hits for pmr92547 on ibm website
 Have you got an apar reference
 thanks,
 John
 
 
 
 **
 The information in this E-Mail is confidential and may be legally
 privileged. It may not represent the views of Scottish and Southern
 Energy Group.
 It is intended solely for the addressees. Access to this E-Mail by
 anyone else is unauthorised. If you are not the intended recipient,
 any disclosure, copying, distribution or any action taken or omitted
 to be taken in reliance on it, is prohibited and may be unlawful.
 Any unauthorised recipient should advise the sender immediately of
 the error in transmission.
 
 Scottish Hydro-Electric, Southern Electric, SWALEC and S+S
 are trading names of the Scottish and Southern Energy Group.
 **
 


Internal Memory Managment TSM @MVS

2004-04-27 Thread Stefan Holzwarth
Hi,

3 months ago I read about PMR92547, which describes performance problems
caused by a too-small internal memory pool.
Since we had speed problems we doubled that pool size, and after 3 months I
believe it was a good step forward.

With 'show memu' you can have a look at the current settings (this should be
done after some uptime):

MAX initial storage  1073741824  (1024.0 MB)
Freeheld bytes 45106900  (43.0 MB)  <- this is needed by
the TSM Server
MaxQuickFree bytes 52682955  (50.2 MB)  <- this should always be
greater than Freeheld
51 Page buffers of 25724 : 194 buffers of 3215.
26 Large buffers of 1607 : 153 XLarge buffers of 200.
   18 buffers free: 257114 hiAlloc buffers: 70448 current buffers.
   144822 units of 56 bytes hiAlloc: 58024 units of 56 bytes hiCur.
  001 blocks, total 0002672, lgst 0002672 avg 0002672  by ¨0Cf?+
Ñ+8E08.
  001 blocks, total 136, lgst 136 avg 136  by ¨0Cf?+
Ñ+8080.
  001 blocks, total 0006652, lgst 0006652 avg 0006652  by ¨0Cf?+
Ñ+8078.
  007 blocks, total 0001232, lgst 176 avg 176  by ¨0Cf?+
Ñ+8062.
  669 blocks, total 0065147, lgst 285 avg 097  by ¨0Cf?+
Ñ+89DE.
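As a unit check on the figures above (plain arithmetic, nothing TSM-specific;
the byte counts are the ones shown by 'show memu'):

```python
# Sanity-check the rule of thumb from this thread: MaxQuickFree should
# stay above Freeheld. Byte values are the ones in the output above.
freeheld = 45_106_900       # reported as 43.0 MB
maxquickfree = 52_682_955   # reported as 50.2 MB

def to_mb(b):
    return b / (1024 * 1024)

assert round(to_mb(freeheld), 1) == 43.0
assert round(to_mb(maxquickfree), 1) == 50.2
assert maxquickfree > freeheld  # headroom present, as recommended
```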


With 'Show Memu SET MAXQUICK 52682955' you set the MaxQuickFree to about
50 MB.


All at your own risk!

Kind regards,
Stefan Holzwarth


--
Stefan Holzwarth
ADAC e.V. (Informationsverarbeitung - Systemtechnik - Basisdienste)
Am Westpark 8, 81373 München, Tel.: (089) 7676-5212, Fax: (089) 76768924
mailto:[EMAIL PROTECTED]


AW: Internal Memory Managment TSM @MVS

2004-04-27 Thread Stefan Holzwarth
Matt, since your
Freeheld bytes touches your MaxQuickFree bytes, I would say you should
increase that size.

Kind regards,
Stefan Holzwarth
 
 I was under the impression that if FREEHELD BYTES = 
 MAXQUICKFREE BYTES  things were O.K.  I currently have mine 
 set to 40MB.  Are you saying I am still low?
 Matt
 
 MAX initial storage  536870912  (512.0 MB)
 Freeheld bytes 40959642  (39.1 MB)
 MaxQuickFree bytes 4096  (39.1 MB)
 44 Page buffers of 12671 : 133 buffers of 1583.
 62 Large buffers of 791 : 89 XLarge buffers of 98.
113360 buffers free: 221949 hiAlloc buffers: 41790 current 
 buffers. 
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] 
 On Behalf Of Stefan Holzwarth
 Sent: Tuesday, April 27, 2004 3:09 AM
 To: [EMAIL PROTECTED]
 Subject: Internal Memory Managment TSM @MVS
 
 Hi,
 
 3 months ago I read about PMR92547, which describes
 performance problems caused by a too-small internal
 memory pool. Since we had speed problems we doubled that pool
 size, and after 3 months I believe it was a good step forward.
 
 With 'show memu' you can have a look at the current settings
 (should be done after some uptime):
 
 MAX initial storage  1073741824  (1024.0 MB)
 Freeheld bytes 45106900  (43.0 MB)  - 
 this is needed by
 the TSM Server
 MaxQuickFree bytes 52682955  (50.2 MB)- this 
 should be always
 greater than freeheld
 51 Page buffers of 25724 : 194 buffers of 3215.
 26 Large buffers of 1607 : 153 XLarge buffers of 200.
18 buffers free: 257114 hiAlloc buffers: 70448 current buffers.
144822 units of 56 bytes hiAlloc: 58024 units of 56 bytes hiCur.
   001 blocks, total 0002672, lgst 0002672 avg 0002672  by ¨0Cf?+
 Ñ+8E08.
   001 blocks, total 136, lgst 136 avg 136  by ¨0Cf?+
 Ñ+8080.
   001 blocks, total 0006652, lgst 0006652 avg 0006652  by ¨0Cf?+
 Ñ+8078.
   007 blocks, total 0001232, lgst 176 avg 176  by ¨0Cf?+
 Ñ+8062.
   669 blocks, total 0065147, lgst 285 avg 097  by ¨0Cf?+
 Ñ+89DE.
 
 
 with Show Memu SET MAXQUICK  52682955 set the 
 MaxQuickFree to about
 50M
 
 
 All at your own risk!
 
 Kind regards,
 Stefan Holzwarth
 
 --
 --
 --
 Stefan Holzwarth
 ADAC e.V. (Informationsverarbeitung - Systemtechnik - 
 Basisdienste) Am Westpark 8, 81373 München, Tel.: (089) 
 7676-5212, Fax: (089) 76768924 mailto:[EMAIL PROTECTED]
 


AW: BA client 5.2.2 and Server 2003

2004-04-19 Thread Stefan Holzwarth
There are many problems - including the one you described - with the levels
before 5.2.2.5.
We use 5.2.2.9 and it seems that this version is quite OK.
For open user profiles we use ntbackup via a preschedule command, since TSM
does not support that at the moment on Windows 2003.

Kind Regards

Stefan Holzwarth


--
Stefan Holzwarth
ADAC e.V. (Informationsverarbeitung - Systemtechnik - Basisdienste)
Am Westpark 8, 81373 München, Tel.: (089) 7676-5212, Fax: (089) 76768924
mailto:[EMAIL PROTECTED]


 -----Original Message-----
 From: Mike Bantz [mailto:[EMAIL PROTECTED]
 Sent: Monday, 19 April 2004 16:36
 To: [EMAIL PROTECTED]
 Subject: BA client 5.2.2 and Server 2003
 
 
 I've just installed the 5.2.2 baclient on a Server 2003 
 machine, trying to
 back up to a Version 5, Release 2, Level 0.2 TSM server.
 
 The dsm.opt file looks like this:
 
 PASSWORDACCESS  GENERATE
 TCPSERVERADDRESS10.17.10.13
 dirmc directory
 ERRORLOGRETENTION   5 D
 SCHEDLOGRETENTION   5 D
 
 This opt file backs up every local drive on any other machine 
 we've got it
 on.
 
 I'd just like to back up the C$ and D$, default mgmt class. 
 Problem is, the
 server will back up the C$, process the D$, then kick back an error
 ANS1950E, that Backup via Windows Shadow Copy failed.
 
 I cannot query the error on the server (no text found).
 
 I've tried explicity including the d:\temp directory, etc to no avail.
 
 This is something obvious, isn't it? :-)
 
 Mike Bantz
 Systems Administrator
 Research Systems, Inc
 


AW: OS/390 2.10 upgrade to TSM 5.2

2004-02-24 Thread Stefan Holzwarth
We have been using TSM 5.2.2.1 on z/OS 1.2 for 4 weeks without problems.
Regards
Stefan Holzwarth


--
Stefan Holzwarth
ADAC e.V. (Informationsverarbeitung - Systemtechnik - Basisdienste)
Am Westpark 8, 81373 München, Tel.: (089) 7676-5212, Fax: (089) 76768924
mailto:[EMAIL PROTECTED]


 -----Original Message-----
 From: Nancy R. Brizuela [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, 24 February 2004 23:15
 To: [EMAIL PROTECTED]
 Subject: FW: OS/390 2.10 upgrade to TSM 5.2
 
 
 Hello,
 
 We are thinking of upgrading from TSM 5.1.8 to TSM 5.2, running on
 OS/390 V2.10.  We are also planning to upgrade to Z/os 4.1 
 this summer.
 Has anyone running TSM 5.2 on OS/390 or Z/os run into any problems not
 encountered in 5.1?   I am aware that I need to run CLEANUP 
 BACKUPGROUPS
 before upgrading but didn't see any other conversation about this
 platform on the list.  Thanks!
 
 Nancy Brizuela
 University of Wyoming
 IBM Systems Group
 Ivinson Room 238
 (307)766-2958
 


AW: Dsmfmt on cx600 any thoughts?

2004-02-19 Thread Stefan Holzwarth
Hi,

what about the option of starting more than one 'define volume'? Does the
speed increase?

Greetings 
Stefan Holzwarth

 -----Original Message-----
 From: Karel Bos [mailto:[EMAIL PROTECTED]
 Sent: Thursday, 19 February 2004 17:45
 To: [EMAIL PROTECTED]
 Subject: Re: Dsmfmt on cx600 any thoughts?
 
 
 I would create and format the volumes from within TSM. This will speed
 things up!
 
 Regard,
 
 Karel
 
 -----Original Message-----
 From: Amar Vi [mailto:[EMAIL PROTECTED]
 Sent: Thursday, 19 February 2004 17:25
 To: [EMAIL PROTECTED]
 Subject: Dsmfmt on cx600 any thoughts?
 
 
 Greetings,
  
 We have a serious performance issue with the dsmfmt command.
  
 We are getting 45 MB/sec write speed on an AIX 4.3.3 box connected
 to a CX600 ATA
 RAID5 4+1 disk group. But when we use dsmfmt it gives 9 to 16
 MB/sec, which
 is very sluggish for formatting 16 TB of storage for storage pools.
  
 Does anybody have information on how dsmfmt works?
  
  or how to speed up so that it will give better I/O on SAN storage?
  
 Thank you,
 
 Bandu Vibhute
 
 Technology Solutions Group 
 
 EMC² 
 
 Where Information Lives 
 
 *Office: 484-322-1000
 
 * Cell: 484-868-0599
 
 *E-mail: vibhute_bandu mailto:[EMAIL PROTECTED] @emc.com
 


AW: Bare Metal Restore Intel

2004-02-13 Thread Stefan Holzwarth
Hello Christian and Phil,

I also would be very interested in your procedures.

Kind Regards,
Stefan Holzwarth


--
Stefan Holzwarth
ADAC e.V. (Informationsverarbeitung - Systemtechnik - Basisdienste)
Am Westpark 8, 81373 München, Tel.: (089) 7676-5212, Fax: (089) 76768924
[EMAIL PROTECTED]
 


AW: Primary storage pool on disk

2004-02-02 Thread Stefan Holzwarth
Hi, my approach:

Design your primary pool on fast disks with a size that is enough for your
daily backup volume.
If the daily backup volume is not too much and most of it comes from
daily changing files, you can size your primary pool
so that the files expire on primary before they migrate to the next pool.
A second pool on cheaper ATA disks is good enough for the files that rarely
(or never) change.
A FAStT with RAID 5 should be fast enough and can be mixed with FC and ATA
disks.

Kind Regards
Stefan Holzwarth


 -----Original Message-----
 From: Miles Purdy [mailto:[EMAIL PROTECTED]
 Sent: Monday, 2 February 2004 17:32
 To: [EMAIL PROTECTED]
 Subject: Re: Primary storage pool on disk
 
 
 My $0.02...
 
 GOOD storage doesn't get a whole lot cheaper than the FAStT.
 Of course you can use ATA disks, but I wouldn't recommend it.
 At 7TB, they have quite a bit of data. If they already have a
 FAStT, buy some drawers with 143GB disks. RAID 5 is fine for
 this situation. If they want a cheaper solution, use tapes :).
 
 Miles
 
 
  [EMAIL PROTECTED] 30-Jan-04 12:45:47 PM 
 Hi,
 
 I have a customer who wants to put his primary storage pool
 on disk.  This is
 now 3TB and can grow up to 7TB.  Incremental backup is approx.
 300GB/night.
 
 But I have some questions?
 - What kind of disks should I use?  (the customer has a FastT 
 from IBM with
 fiber disks, but he wants a cheaper solution)
 - What kind of raid level as protection?  Raid5, JBOD, Raild 
 0+1, ... ??
 - Should I use a DISK type or FILE type storage pool?
 
 Thx.
 
 Stef
 


AW: Open File Support for Windows 2003 (Was Userprofile Backup wi t h Windows2003 and TSM 5.2)

2004-01-26 Thread Stefan Holzwarth
In our company we have a lot of domain user accounts that are designed, for
security reasons, for special tasks on the servers.
Primarily for common services such as the scheduler, but also for
applications that best run under a customized service account,
such as SAP, TDP for MSSQL, TWS, EMC Control Center, and many others.
Since all those applications store important information within the profile,
it's not only a workstation problem.
Kind Regards 
Stefan Holzwarth

 -----Original Message-----
 From: Prather, Wanda [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, 21 January 2004 22:17
 To: [EMAIL PROTECTED]
 Subject: Re: Open File Support for Windows 2003 (Was
 Userprofile Backup
 with Windows2003 and TSM 5.2)
 
 
 Yes, that is true, at least on W2K pro and server, the 
 profiles are not
 locked for inactive users and will back up OK.
 
 However, if you have any services running under accounts 
 other than system
 (which you must do if you want the service to be able to 
 access network
 resources), then they are always logged on/active.
 
 So you can hose yourself up an infinite number of ways!
 
 
 
 -Original Message-
 From: Stapleton, Mark [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, January 21, 2004 3:29 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Open File Support for Windows 2003 (Was 
 Userprofile Backup wi t
 h Windows2003 and TSM 5.2)
 
 
 From: Prather, Wanda [mailto:[EMAIL PROTECTED]
 THANK YOU for this information, it is very important to us.
 
 It seems Tivoli has never been interested in marketing TSM 
 for backing up
 workstations (which REQUIRES the ability to recover user profiles).
 
 
 Good point, Wanda. Most customers don't give a flip about restoring a
 server's user profiles, since most servers use the default 
 user profile,
 which is rebuilt when you reinstall the OS.
 
 However, I will say that TSM is not the only backup/recovery 
 system that has
 a hard time with user profiles. NetWorker suffers from the 
 same problem.
 
 It seems to me with older TSM clients that the only user profile that
 doesn't backup is the one used by the logon(s) that is/are 
 active during the
 backup. Does anyone still find this true?
 
 --
 Mark Stapleton ([EMAIL PROTECTED])
 


AW: Open File Support for Windows 2003 (Was Userprofile Backup wi t h Windows2003 and TSM 5.2)

2004-01-20 Thread Stefan Holzwarth
.   
.   
So, it is not recommended to use OFS for backing up locked Windows  
system files and the current version of OFS support in TSM v5.2.0 does  
not support Windows Server 2003. If customer desire, he could use it on 
his own risk.

---

If I'm the only one who misses his user profiles, I'm probably wrong.
So I will do further testing and wait to see what the community's experiences are.

Kind Regards 
Stefan Holzwarth


 -----Original Message-----
 From: Rushforth, Tim [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, 20 January 2004 16:47
 To: [EMAIL PROTECTED]
 Subject: Re: Open File Support for Windows 2003 (Was
 Userprofile Backup
 with Windows2003 and TSM 5.2)
 
 
 Stefan:
 
 Do you have the APAR # for the documentation change for this?
 
 Thanks,
 
 Tim Rushforth
 City of Winnipeg
 
 -Original Message-
 From: Rushforth, Tim [mailto:[EMAIL PROTECTED] 
 Sent: January 13, 2004 2:39 PM
 To: [EMAIL PROTECTED]
 Subject: Re: Open File Support for Windows 2003 (Was 
 Userprofile Backup with
 Windows2003 and TSM 5.2)
 
 OK, I'm confused.
 
 The 5.2.2 readme indicates that Open File Support is not 
 supported for 2003.
 The manual seems to indicate this also.
 
 But I am able to install OFS agent with 5.2.2 client on 2003 
 and it allows
 me to backup locked files (ntuser.dat of a logged in user).
 
 Support has indicated below to use Open File Support on 2003.
 
 So can someone (Andy?) please clarify this?
 
 Thanks,
 
 Tim Rushforth
 City of Winnipeg
 
 -Original Message-
 From: Rushforth, Tim [mailto:[EMAIL PROTECTED] 
 Sent: January 13, 2004 9:33 AM
 To: [EMAIL PROTECTED]
 Subject: Re: Userprofile Backup with Windows2003 and TSM 5.2
 
 But Open File Support is not supported for Windows 2003 ...
 
 -Original Message-
 From: Stefan Holzwarth [mailto:[EMAIL PROTECTED] 
 Sent: January 13, 2004 6:58 AM
 To: [EMAIL PROTECTED]
 Subject: Userprofile Backup with Windows2003 and TSM 5.2
 
 Hi all,
 
 during my recovery tests with Windows 2003 and TSM 5.2 (client
 and server) I
 couldn't find a way to back up and restore NT user profiles.
 The content in
 adsm.sys didn't show the files as usual and the ASR feature
 didn't restore
 the profiles. So I opened a PMR at IBM and got the following
 information
 after some back-and-forth mailing:
 
 The only way to save that information is to use the open file
 support.
 
 Since that fact isn't documented I opened a doc APAR.
 Kind Regards
 
 Stefan Holzwarth
 
 --
 --
 --
 Stefan Holzwarth
 ADAC e.V. (Informationsverarbeitung - Systemtechnik - Basisdienste)
 Am Westpark 8, 81373 München, Tel.: (089) 7676-5212, Fax: 
 (089) 76768924
 mailto:[EMAIL PROTECTED]
 


Userprofile Backup with Windows2003 and TSM 5.2

2004-01-13 Thread Stefan Holzwarth
Hi all,

during my recovery tests with Windows 2003 and TSM 5.2 (client and server) I
couldn't find a way to back up and restore NT user profiles. The content in
adsm.sys didn't show the files as usual, and the ASR feature didn't restore
the profiles. So I opened a PMR at IBM and got the following information
after some back-and-forth mailing:

The only way to save that information is to use the open file support.

Since that fact isn't documented I opened a doc APAR.
Kind Regards

Stefan Holzwarth


--
Stefan Holzwarth
ADAC e.V. (Informationsverarbeitung - Systemtechnik - Basisdienste)
Am Westpark 8, 81373 München, Tel.: (089) 7676-5212, Fax: (089) 76768924
mailto:[EMAIL PROTECTED]


Userprofile Backup with 5.2 and Windows2003

2003-12-16 Thread Stefan Holzwarth
Hi,

I can't find any information about user profile backup under Windows 2003 and
ITSM 5.2.
After an incremental backup with domain ALL-LOCAL there are no entries in
adsm.sys (except some xml files), as there had been under Windows 2000 and NT4.
Can someone help or explain?

Kind regards,
Stefan Holzwarth


ASR - problem with userprofiles

2003-12-12 Thread Stefan Holzwarth
Hi,
just tried the recovery of a Windows 2003 server with ITSM client 5.2.0.6
from an MVS server at 5.2.1.3.
I was surprised that nearly everything worked as described. After an hour
the Windows 2003 server was up again.

Two little things to mention:
- the desktop.ini problem occurred: files are not restored with the hidden
attribute
- a timeout of 10 minutes during the tsmcli automatic installation. Seems
that the waitforsignal command does not work correctly

and
- user profiles that had been accessed during backup aren't restored. In the
past you could copy the contents of adsm.sys to the profile directory (do not
forget the default profile) and all were happy. But now I cannot find the
ntuser.dat files.

Do I have to run open file support for the system volume?

Kind regards
Stefan Holzwarth


--
Stefan Holzwarth
ADAC e.V. (Informationsverarbeitung - Systemtechnik - Basisdienste)
Am Westpark 8, 81373 München, Tel.: (089) 7676-5212, Fax: (089) 76768924
mailto:[EMAIL PROTECTED]


AW: Incremental backups on file systems that contain a large number of files

2003-10-02 Thread Stefan Holzwarth
My little input to your miracle:
Maybe switching to a domain user alters the way the TSM client saves the
data: as an administrator it should use the backup API, while as a domain
user it can only use CIFS access.

Kind Regards,
Stefan Holzwarth

-Original Message-
From: Alex Paschal [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 1 October 2003 21:02
To: [EMAIL PROTECTED]
Subject: Re: Incremental backups on file systems that contain a large
number of files


Michael, that's weird.  I believe you, but I just had to say that it's
weird.

Mark, for AIX, since there's no journaled backups there, have you considered
-incrbydate's during the week, then a regular incremental on the weekend?
This greatly sped up the backup of one of our very large file systems on
AIX.
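Alex's weekday/weekend split can be set up as client schedules on the TSM server; a rough sketch with hypothetical domain and schedule names (classic-schedule syntax from memory, so verify against your server level):

```
/* hypothetical names: -incrbydate Monday-Friday, regular incremental Saturday */
define schedule STANDARD INCRBYDATE_WEEKDAYS action=incremental options='-incrbydate' starttime=20:00 dayofweek=weekday
define schedule STANDARD FULL_INCR_SATURDAY action=incremental starttime=20:00 dayofweek=saturday
```

The weekend regular incremental stays necessary because -incrbydate skips the attribute comparison: it misses files whose timestamps predate the last backup and never expires deleted files.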

Alex Paschal
Freightliner, LLC
(503) 745-6850 phone/vmail

-Original Message-
From: Wheelock, Michael D [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 01, 2003 8:07 AM
To: [EMAIL PROTECTED]
Subject: Re: Incremental backups on file systems that contain a large
number of files


Hi,

I will throw this out there, though it is a Wintel solution and will not
function as a solution under AIX.  We had a system with about 3 million
files.  The disk was SAN-attached and quite speedy.  The system was
gig-Ethernet attached and had no CPU/memory bottlenecks.  Journaling wasn't
an option as this was a clustered system.  The incremental backup (all 400MB
of it) took around 10 hours for these 3 million files.  I tried just about
everything, but at various points the backup would slow to a crawl (judging
from an analysis of the entries in dsmsched.log: the timestamps on the
'### files processed' entries).  I eventually began trying off-the-wall
stuff.  The thing that fixed us up was running the scheduler as a domain
user.  The backup now takes 45 minutes.  I have no idea why, but this may
help someone else on a Wintel platform deal with a very similar issue.
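The dsmsched.log analysis Michael describes (watching the timestamps on the progress entries) can be automated. A minimal sketch in Python, assuming a hypothetical progress-line layout, since the exact format varies by client level:

```python
import re
from datetime import datetime

# Hypothetical dsmsched.log progress line, e.g.:
# 10/01/2003 08:00:00 ANS1898I ***** Processed 1,000 files *****
# Adjust the pattern to the format your client level actually writes.
LINE_RE = re.compile(
    r"^(\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}) ANS\d+I \*+ Processed\s+([\d,]+) files"
)

def processing_rates(lines):
    """Return files/second between successive progress entries,
    so slow stretches of the backup stand out."""
    points = []
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            ts = datetime.strptime(m.group(1), "%m/%d/%Y %H:%M:%S")
            points.append((ts, int(m.group(2).replace(",", ""))))
    rates = []
    for (t0, n0), (t1, n1) in zip(points, points[1:]):
        secs = (t1 - t0).total_seconds()
        rates.append((n1 - n0) / secs if secs else float("inf"))
    return rates
```

Feeding it the log of a slow night shows where the rate collapses, which is how a problem like the one above can be narrowed down.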

Michael Wheelock
Integris Health of Oklahoma



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Wednesday, October 01, 2003 9:57 AM
To: [EMAIL PROTECTED]
Subject: Re: Incremental backups on file systems that contain a large number
of files


= On Wed, 1 Oct 2003 09:27:41 -0400, Mark Trancygier
[EMAIL PROTECTED] said:


 We are currently having problems performing incremental backups on a
 file system that has a large amount of files. The daily changes to
 this file system are small so we are only sending approximately 5 - 10
 gig per backup however, since there are around 3,000,000 files to
 examine, the backup takes about 10 - 13 hours to complete.

 Does anyone have any suggestions on how to handle incremental backups
 of file systems that contain a large number of I-nodes ?


We've got several systems with lots of files, including three with between 10
and 15 million files each.

The key thing is to figure out where your bottleneck is; if you're having
problems with client disk contention, one set of things is useful.  If
you're having TSM database contention problems, another set is indicated.

I'll talk about the different strategies we've phased through.

We've got a large number of virtual mount points defined, so that our work
is chopped up into chunks of approximately 700,000 files each.  This lets us
run a large number of parallel sessions (e.g. resourceutilization=10, or
many heavyweight processes) at the same time.

If you have disk contention problems on your client system, however, this
will make your problem worse, not better.  Our disk architecture is such
that we weren't getting in our own way.

On the client, that is.
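The chunking described above is driven by client options; a hedged dsm.sys/dsm.opt sketch with hypothetical paths (VIRTUALMOUNTPOINT is a Unix-client option):

```
* Hypothetical paths: present subtrees of one huge filesystem as separate
* filespaces of roughly 700,000 files each
VIRTUALMOUNTPOINT /bigfs/chunk01
VIRTUALMOUNTPOINT /bigfs/chunk02
VIRTUALMOUNTPOINT /bigfs/chunk03
* let the client run many parallel producer/consumer sessions
RESOURCEUTILIZATION 10
```

As the message warns, raising the parallelism only helps if neither the client disks nor the TSM database become the bottleneck.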

What we determined was that having lots of actors writing to the TSM DB
simultaneously was our big problem.  When we ran with a parallelism of
four or five, an incremental took seven to 15 hours.  When we ran with a
parallelism of two, it completed in 4.

Of course, for those 7-15 hours it was also making life hell for anything
else that wanted to update the database (expiration, anyone?), so we're
currently backing those bits up in separate windows, when the server is
mostly otherwise quiet.



- Allen S. Rout


Findings with Migration and bad Reclamation behavior

2003-08-26 Thread Stefan Holzwarth
Today I took a closer look at how often and how much data is reclaimed and
migrated.

Since the upgrade of one of our primary storage pools to 700 GByte two months
ago, we have been able to set a migdelay of 8 days.
Together with our standard policy (1 active + 3 inactive versions and 60 days
retention for the last deleted version) we could reduce the migration amount
from 100% to 30% of the daily backup volume for that pool (120 small NT
servers). (All nodes with names like NT% go to BACKUPPOOL.)

select sum(bytes)/(1024*1024*1024) as GBYTE from adsm.summary where
activity='BACKUP' and entity like 'NT%' and end_time>=current_timestamp - 14 days
Result: 2188
select sum(bytes)/(1024*1024*1024) as GBYTE from adsm.summary where
activity='MIGRATION' and entity='BACKUPPOOL' and end_time>=current_timestamp - 14 days
Result: 593
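The reduction is easy to verify from the two results; a quick check with the values quoted above:

```python
# Backup vs. migration volume over the same 14-day window,
# taken from the two SELECT results above.
backed_up_gb = 2188   # BACKUP activity for NT% nodes
migrated_gb = 593     # MIGRATION activity for BACKUPPOOL

ratio = migrated_gb / backed_up_gb
print(f"migrated/backed up = {ratio:.0%}")  # about 27%, i.e. roughly the 30% quoted
```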

In the past these 2 values were the same each day.

My next hope was that the reclamation for the NT nodes would also have been
reduced significantly.
But I could not find a really significant decrease - maybe I have to wait a
little longer for that process.

But what I found was the following:
From time to time I have reclamation processes that move more than 3 times
the capacity of one 3590 tape! I.e.:
08/21/2003 11:43:21 ANR0986I Process 199 for SPACE RECLAMATION running in
the BACKGROUND processed 1 items for a total of 33.007.866.134 bytes with a
completion state of SUCCESS at 11:43:21.

The reclamation process put one large file spread over 4 tapes onto 4 new
tapes, scratching the old ones.
The reason seems to be that the reclamation threshold (80%) was reached on
one of the 4 tapes.

Has anyone seen similar problems?
How can I avoid the reclamation process keeping 2 drives busy for hours?

Kind regards,

Stefan Holzwarth


--
Stefan Holzwarth
ADAC e.V. (Informationsverarbeitung - Systemtechnik - Basisdienste)
Am Westpark 8, 81373 München, Tel.: (089) 7676-5212, Fax: (089) 76768924
mailto:[EMAIL PROTECTED]


AW: Microsoft.Net

2003-07-30 Thread Stefan Holzwarth
I asked IBM about .NET support. So far I have only the response that it is
supported as of Windows 2003 with ITSM 5.2.
Whether it is supported on Windows 2000 as an add-on is not clear, but if
so, it seems only with ITSM 5.2.

Regards Stefan Holzwarth


-Original Message-
From: Adams, Matt (US - Hermitage) [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 30 July 2003 15:15
To: [EMAIL PROTECTED]
Subject: Microsoft.Net


Just following up on a previous post to see if anyone has experienced this
yet...

Thanks,

Matt






Client:  W2K or Win2003 server running Microsoft.net  - B/A client version
5.1.5.9 or 5.1.6.0
TSM server:  5.1.6.2 on AIX 5.1


Has anyone else experienced problems backing up some files located in the
directory C:\windows\Microsoft.NET\Framework\...\CONFIG\   on systems
running Microsoft.Net?

The files are semi-permanent cached files to which NTFS permissions do not
apply.  In other words, you can't look at the permissions of the file.
From what I've been told, MS's unofficial position is that the \config
directory does not need to be backed up.  However, some of our developers
feel that there are some files in that directory that are essential in the
case of a bare metal recovery.  TSM returns a code of rc=12 and reports
the backup as failed.

Just wondering if anyone else has experienced this issue and their thoughts
on it.

Regards,

Matt Adams
Tivoli Storage Manager Team
Hermitage Site Tech
Deloitte and Touche USA LLP


AW: Lotus Notes Non-TDP backups

2003-07-29 Thread Stefan Holzwarth
I would try open file support in 5.2. First tests look quite good.
Regards, Stefan Holzwarth

-Original Message-
From: Gordon Woodward [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 29 July 2003 04:01
To: [EMAIL PROTECTED]
Subject: Lotus Notes Non-TDP backups


We currently have over 160Gb of Notes mail databases that need to be backed
up nightly. Due to incompatibilities between the Notes TDP, our version of
TSM (v4.2.2.5), and the way compaction runs on our Notes servers, we have to
use the normal Tivoli backup client to back up the mailboxes. It takes about
12 hours for all the databases to get backed up each night, but the vast
amount of this time seems to be spent trying and then retrying to send
mailboxes to the TSM server. A typical schedule log looks like this:

28-07-2003 19:51:53 Retry # 2  Normal File--   157,548,544
\\sdbo5211\d$\notes\data\mail\beggsa.nsf [Sent]
28-07-2003 19:52:28 Normal File--70,778,880
\\sdbo5211\d$\notes\data\mail\bingleyj.nsf [Sent]
28-07-2003 19:54:05 Retry # 1  Normal File--   349,437,952
\\sdbo5211\d$\notes\data\mail\bignasck.nsf [Sent]
28-07-2003 19:55:10 Normal File--   131,072,000
\\sdbo5211\d$\notes\data\mail\Bishnic.nsf  Changed
28-07-2003 19:56:58 Normal File--   265,289,728
\\sdbo5211\d$\notes\data\mail\bellm.nsf [Sent]
28-07-2003 19:58:08 Retry # 1  Normal File--   131,072,000
\\sdbo5211\d$\notes\data\mail\Bishnic.nsf [Sent]
28-07-2003 20:00:46 Normal File--   387,186,688
\\sdbo5211\d$\notes\data\mail\BLACKAD.NSF  Changed
28-07-2003 20:03:52 Normal File--   367,263,744
\\sdbo5211\d$\notes\data\mail\BERNECKC.NSF  Changed
28-07-2003 20:06:18 Retry # 1  Normal File--   387,186,688
\\sdbo5211\d$\notes\data\mail\BLACKAD.NSF [Sent]
28-07-2003 20:10:11 Normal File-- 1,011,613,696
\\sdbo5211\d$\notes\data\mail\binneyk.nsf  Changed
28-07-2003 20:11:52 Retry # 2  Normal File--   953,942,016
\\sdbo5211\d$\notes\data\mail\andrewsj.nsf [Sent]
28-07-2003 20:12:01 Retry # 1  Normal File--   367,263,744
\\sdbo5211\d$\notes\data\mail\BERNECKC.NSF [Sent]
28-07-2003 20:12:05 Normal File--10,485,760
\\sdbo5211\d$\notes\data\mail\bousran.nsf [Sent]
28-07-2003 20:13:40 Normal File--   720,633,856
\\sdbo5211\d$\notes\data\mail\BLACKC.NSF  Changed
28-07-2003 20:18:58 Retry # 3  Normal File-- 1,863,057,408
\\sdbo5211\d$\notes\data\dbecna.nsf  Changed

Is there anything we can do to reduce the window for this backup? Both the
TSM server and our Notes server have dedicated 1Gb links, so bandwidth isn't
a problem. The Backup Copy Group for the Management Class the Notes data is
allocated to has Copy Serialization set to 'Shared Static'. Would changing
this to Dynamic be beneficial in reducing the number of retries that occur,
and would setting CHANGINGRETRIES to a lower value also help?
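For reference, the retry count is controlled by the client option CHANGINGRETRIES (default 4); a hedged dsm.opt sketch:

```
* dsm.opt - retry a file that keeps changing only once; after that the
* copy group serialization setting decides whether to send it anyway
CHANGINGRETRIES 1
```

Switching the copy group to Dynamic (update copygroup ... serialization=dynamic, then activate policyset) would remove the retries entirely, but as the follow-up in this thread points out, that ships a potentially fuzzy copy, which is risky for .nsf files.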

Thanks in advance,

Gordon Woodward
Senior Support Analyst
Deutsche Asset Management (Australia) Limited




AW: Lotus Notes Non-TDP backups

2003-07-29 Thread Stefan Holzwarth
Hi David,

As I understand the open file feature, a snapshot is made of the whole
filesystem. Therefore there should be no problem with DB consistency between
DB files if they all live on the same volume. Since in my company our Lotus
DB files have proven a certain robustness (we only have a small Domino
environment), I cannot totally agree with your absolute no to this topic.
Domino uses an underlying simple database that has to maintain some
robustness towards sudden failures like power off, lost connectivity to a DB
on a network share, or the odd blue screen. On the other hand, if an open
file agent waits a (configurable) number of seconds for inactivity, a write
operation should not be cut through.
I'm sure there are better and safer ways of backing up Domino, but most need
more effort or resources.

Kind regards, 
Stefan Holzwarth

-Original Message-
From: David McClelland [mailto:[EMAIL PROTECTED]
Sent: Tuesday, 29 July 2003 10:44
To: [EMAIL PROTECTED]
Subject: Re: Lotus Notes Non-TDP backups


Stefan, Gordon,

Urrgh - no! 

As soon as you try to restore any of these files which changed during the
backup, even with open file support, you'll more than likely get a corrupt
.nsf database! Notes .nsf files are pretty sensitive: a change somewhere in
one part of the db has repercussions elsewhere in the db, and before you
know it you won't be able to open the .nsf at all, and will get 'b-tree
structure invalid' or similar complaints from Notes.
You need to have the Notes server process down in order to quiesce the
databases and prevent them from being written to before backing them up.

The *usual* way of handling Notes backups without using TDP is to use a
'backup' server - the concept works like this:

You have a separate Notes server (i.e. a 'backup' Notes server) which
contains replicas of the databases on the live Notes servers. Using Notes
replication, all changes to the live databases are replicated to the
replicas on the backup server. At a time controlled by you, you take the
Notes server process down on the backup server (as no users connect directly
to the backup Notes server, there will be no outage) and then perform the
backups of the now quiesced .nsf files using the normal TSM BA client. Once
the backup is complete, bring up the Notes server on the backup server and
begin replication with the live servers to bring the backup .nsf's up to
date again. Depending upon hardware, you can have many live Notes servers'
worth of .nsf's contained on a single backup Notes server - just ensure you
have enough time to replicate the data from live to backup server.

In terms of recoveries, as the backup Notes server is down during backups,
you might want to have an additional Notes partition somewhere on a backup
server which you can use as a 'recovery server' - a Notes server which is
*always* up, regardless of whether a backup is taking place. Users can
connect to this directly and pull back any recovered .nsf databases, or even
just documents from a .nsf.

Hope this helps :o)

David McClelland
Global Management Systems
Reuters Ltd



AW: Multi session backup restore question

2003-07-17 Thread Stefan Holzwarth
Hello,

I'm not sure I understand that feature, hence the following question:

One filespace with 100 GByte of active data spread over 10 tapes, and 10
tape drives available.
Can you restore this filespace from all 10 tapes at the same time, or must
the tapes belong to different filespaces?

Kind Regards
Stefan Holzwarth



-Original Message-
From: David Longo [mailto:[EMAIL PROTECTED]
Sent: Thursday, 17 July 2003 16:25
To: [EMAIL PROTECTED]
Subject: Re: Multi session backup restore question


In order to have a multi-session simultaneous restore from a tape pool,
you must have the data in a pool that is collocated by filespace.
If you have a big enough disk pool, such that you run the restore
before migration has moved the data, then a multi-session restore
would use the disk pool and be multi-session.
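David's collocation point, expressed as an admin command with a hypothetical pool name:

```
/* put each filespace on its own set of volumes so a single-node restore
   can mount several tapes in parallel */
update stgpool TAPEPOOL collocate=filespace
```

Note that collocation only affects newly written volumes; existing volumes keep their old layout until the data is moved or reclaimed.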



David B. Longo
System Administrator
Health First, Inc.
3300 Fiske Blvd.
Rockledge, FL 32955-4305
PH  321.434.5536
Pager  321.634.8230
Fax:321.434.5509
[EMAIL PROTECTED]


 [EMAIL PROTECTED] 07/17/03 09:48AM 
Hello all,
I am trying to work in a multi session backup and be able to use a
multi session restore.  What I have found is that with the
maxresourceutilization set to 8 that my 350GB 8 processor AIX client is
using only 4 data sessions and 4 control sessions.   The data all goes to
the disk pool and then the migration will use only 1 tape drive for this
data.  I then assume a multi session restore is not going to work.

1)  how can I get this backup to use more sessions for data transfer?
2)  Am I correct in believing that the multi-session restore is out
because of the actions of the migration process?  MUST this go direct to
tape to have multi session backup and RESTORE?

Thanks in advance,
Matt



AW: Active Directory Problems

2003-07-11 Thread Stefan Holzwarth
As I understood BMR recovery for Windows 2000, you must install to the same
directory as before.
If you restore from a temporarily installed running Windows, the system
objects go to the wrong destination!

Regards, Stefan Holzwarth

-Original Message-
From: Christian Svensson [mailto:[EMAIL PROTECTED]
Sent: Friday, 11 July 2003 11:05
To: [EMAIL PROTECTED]
Subject: Re: Active Directory Problems




When you installed Windows on the machine, did you install Windows in a
temp directory and then restore all data from TSM?
Or did you install Windows in its standard path and overwrite each file
with TSM?

If you did the latter, I can understand why you got that problem: you got a
new SID, etc.
Try installing Windows in a temp path and restoring the data from TSM to
its normal path. Reboot the server, boot up in Directory Services Restore
Mode, and restore the System Objects. Reboot the server again and remove
the temp Windows installation.

Now it should work fine.
To make this much easier, buy a disaster recovery tool. Talk to IBM and
they should give you some advice, or download your own eval copy from
www.cristie.com

Best Regard / Med vänlig hälsning
Christian Svensson
Tivoli Storage Manager Certified



Cristie Nordic AB
Box 2, SE-131 06 Nacka, Sweden
Phone: +46-(0)8-718 43 30
Mobile: +46-(0)70-325 15 77
eMail: Christian.[EMAIL PROTECTED]
Visit: Gamla Värmdövägen 4, Plan 2
Web: www.cristie.com



   
Jacques Butcher [EMAIL PROTECTED].CO.ZA
Sent by: ADSM: Dist Stor Manager ADSM-[EMAIL PROTECTED]
2003-07-11 10:53
Please respond to ADSM: Dist Stor Manager
To: [EMAIL PROTECTED]
cc:
Subject: Active Directory Problems




Hi Everyone.

I've restored the system volume (C:) and all system objects
(of which I have a consistent backup) successfully.  I
can access Active Directory and see all resources.  I can
even see all the resources from another machine through a
UNC path.  It even prompts me for a username and
password.  I type the domain name\username and the password
and I can see all printers, etc.

I however cannot log onto or join the domain.

Did anyone else get this?

Any help will be greatly appreciated.

Thanks in advance.



Faster Disaster Recovery for Windows NT

2003-07-08 Thread Stefan Holzwarth
Hello,

yesterday I had a look at PowerQuest's V2i Serverprotect software.
They restore their images from a Windows XP running entirely from CD-ROM
(Windows PE).
I wonder whether we could do that with TSM...

One problem is restoring the system objects as files to the newly restored
system drive,
like in ADSM times, when you had to copy adsm.sys to winnt\system32\config
...
But TSM does not offer rerouting system object restores to folders.

Any ideas or suggestions?

Kind regards,
Stefan Holzwarth 


--
Stefan Holzwarth
ADAC e.V. (Informationsverarbeitung - Systemtechnik - Basisdienste)
Am Westpark 8, 81373 München, Tel.: (089) 7676-5212, Fax: (089) 76768924
mailto:[EMAIL PROTECTED]


AW: WIndows 2k server restore

2003-06-26 Thread Stefan Holzwarth
One important thing is to install the special hardware-dependent agents
before doing the restore with TSM.
I had trouble doing BMR without installing the Compaq agents at the same
level.
Also, some information in the PnP database, where e.g. the teaming config of
network adapters is stored, gets lost.
Apart from that it works well, but with a lot of time and effort.

Greetings

Stefan Holzwarth


--
Stefan Holzwarth
ADAC e.V. (Informationsverarbeitung - Systemtechnik - Basisdienste)
Am Westpark 8, 81373 München, Tel.: (089) 7676-5212, Fax: (089) 76768924
[EMAIL PROTECTED]



-Original Message-
From: Tae Kim [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 25 June 2003 18:10
To: [EMAIL PROTECTED]
Subject: WIndows 2k server restore


Will be doing a full restore of a Win2k box for DR testing tomorrow and
was wondering if there are any gotchas...

Here are the steps I will be taking:

1. reinstall windows (setup the network driver and IP address etc.)
2. put in the service pack
3. install tsm client
4. restore c:\
5. before reboot, restore system objects
6. reboot and restore the rest of the drives

missed anything?
Thanks for  your input

Tae


AW: Win2000 Restore

2003-03-20 Thread Stefan Holzwarth
NTBACKUP doesn't care about user profiles - maybe important for services
running under a user account with a special environment/registry.

(My experience so far is that you do not need ntbackup.)

Regards Stefan Holzwarth

-Original Message-
From: Robertson, G Louis (BearingPoint)
[mailto:[EMAIL PROTECTED]
Sent: Thursday, 20 March 2003 16:04
To: [EMAIL PROTECTED]
Subject: Re: Win2000 Restore
Betreff: Re: Win2000 Restore


Jack,

I have a presched .bat file that contains the following line:

ntbackup backup systemstate /f C:\Systemstate.bkf /m copy

This is the command-line version of the Microsoft Backup program usually
found under Start:Programs:Accessories:System Tools:Backup.  This command
will start the MS Backup utility and back up only the systemstate (which
contains all your registry information and associated files). I always
overwrite the old file each time my incremental backup runs because of space
limitations on my servers (the systemstate.bkf file is approximately 250 MB
in size). For more information on the syntax of ntbackup, type ntbackup /?
at a command prompt.

To restore I do the following:

1. Install temp windows (winnt directory)
2. Restore C drive (and other drives)
3. Restore system objects
4. Reboot
5. Start MS Backup (Start:Programs:Accessories:System Tools:Backup) and use
the GUI to restore the system state from the systemstate.bkf file.

If you need it I have a draft copy of our restore procedures that I can send
you.

Louis
G Louis Robertson
Senior Systems Analyst

BearingPoint
[EMAIL PROTECTED]



-Original Message-
From: Coats, Jack [mailto:[EMAIL PROTECTED]
Sent: Wednesday, March 19, 2003 7:37 PM
To: [EMAIL PROTECTED]
Subject: Re: Win2000 Restore


Louis,
Would you mind sharing the details of this step for some of us
non-MS types?

 -Original Message-
 From: Robertson, G Louis (BearingPoint)
 [SMTP:[EMAIL PROTECTED]
 Sent: Wednesday, March 19, 2003 3:36 PM
 To:   [EMAIL PROTECTED]
 Subject:  Re: Win2000 Restore

 Mike,

 I add an additional step to my backups and restores.  As part of my backup
 I run a presched that does an ntbackup of the systemstate to a directory I
 know is backed up during archive and incremental backups.  During a restore
 I add the additional step of restoring the systemstate via ntbackup after
 the last reboot.

 Louis






AW: Win2000 Restore

2003-03-19 Thread Stefan Holzwarth
Michael,
before you restart after a restore of the system objects, you have to
manually restore the user profiles to their original places by copying
C:\adsm.sys\W2KReg\REGISTRY\USER... to the profile path under c:\Documents
and Settings\USERID (ntuser.dat and ntuser.class.dat).
Don't forget the Default User.
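The manual copy step above can be scripted; a minimal Python sketch under the path assumptions named in the message (the adsm.sys layout and profile root come from the text; everything else is hypothetical):

```python
import shutil
from pathlib import Path

# Paths as described in the message; adjust to the real system.
ADSM_USER_DIR = Path(r"C:\adsm.sys\W2KReg\REGISTRY\USER")
PROFILE_ROOT = Path(r"C:\Documents and Settings")
HIVES = ("ntuser.dat", "ntuser.class.dat")

def restore_profiles(src_root: Path, profile_root: Path) -> list:
    """Copy each user's registry hives back into their profile directory,
    including the Default User directory if it is present in src_root."""
    copied = []
    for user_dir in src_root.iterdir():
        if not user_dir.is_dir():
            continue
        dest = profile_root / user_dir.name
        dest.mkdir(parents=True, exist_ok=True)
        for hive in HIVES:
            src = user_dir / hive
            if src.exists():
                shutil.copy2(src, dest / hive)  # preserve timestamps
                copied.append(dest / hive)
    return copied
```

Run it between the system object restore and the reboot, e.g. restore_profiles(ADSM_USER_DIR, PROFILE_ROOT).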

Kind Regards,
Stefan Holzwarth


--
Stefan Holzwarth
ADAC e.V. (Informationsverarbeitung - Systemtechnik - Basisdienste)
Am Westpark 8, 81373 München, Tel.: (089) 7676-5212, Fax: (089) 76768924
mailto:[EMAIL PROTECTED]


-Original Message-
From: Anderson, Michael - HMIS [mailto:[EMAIL PROTECTED]
Sent: Wednesday, 19 March 2003 22:09
To: [EMAIL PROTECTED]
Subject: Win2000 Restore


I am trying to test the 5.1.5 Windows client. After I do the restore
of the system objects and reboot, it creates
 a new profile. I can see my icons but most of them won't work. I found
a document about changing the registry, in which
 it does point to my user name with the name of our domain and 001. I
try to change it back to my ID like it says and
 reboot, whereupon I receive the error: windows\system32\config\system
is corrupt or missing. I figure I am doing something
 wrong, but am just not sure where. Could someone please give me some
advice? Server is AIX 4.3.3, TSM 4.2.3.3.

 I do the following steps:

 Install temp windows (winnt directory)
 restore C drive
 restore system objects
 reboot

 Thanks
  Mike Anderson
 [EMAIL PROTECTED]




AW: Ndmp Backup to primary Diskpool

2003-02-18 Thread Stefan Holzwarth
I want to put the NDMP data stream into disk pools (type FILE or DISK), but
I don't know whether that's possible.
Regards,
Stefan Holzwarth

-Original Message-
From: PINNI, BALANAND (SBCSI) [mailto:[EMAIL PROTECTED]]
Sent: Monday, 17 February 2003 20:43
To: [EMAIL PROTECTED]
Subject: Re: Ndmp Backup to primary Diskpool


Hi,
if you have a filer, then you should be using the ndmpcopy utility on the
filer to copy between volumes on a filer or across filers.
I did this recently.
Thanks
Balanand Pinni



Ndmp Backup to primary Diskpool

2003-02-17 Thread Stefan Holzwarth
I think about the following environment:

TSM Server on Windows2000 with 5 * 2 TByte Primary diskpools on FC Raids
(with IDE Disk internal - RAID 5)
and 5 * 2 TByte Copy diskpools on FC Raids
(with IDE Disk internal - RAID 5)
TSM DB on local RAID 10 with 8 disks in total

Together with other NT systems I want to back up our filer (4TB) over NDMP
to those disk pools.

Since I have no experience with NDMP (we have TSM on MVS), I wonder whether
that config is possible.
Does the data stream go over FC or IP?

Kind regards,
Stefan Holzwarth


--
Stefan Holzwarth
ADAC e.V. (Informationsverarbeitung - Systemtechnik - Basisdienste)
Am Westpark 8, 81373 München, Tel.: (089) 7676-5212, Fax: (089) 76768924
mailto:[EMAIL PROTECTED]



Backup eventlogs NT4

2003-02-14 Thread Stefan Holzwarth
In the past I had 2 schedules for backup on NT4 and TSM V4:
one for data and registry, and one for event logs.

Since we moved to Windows 2000 and TSM V5 I was lucky, since I had to use
only one schedule for everything.

Now I want to use TSM V5 on NT4 and wonder if there is any possibility
(without a pre-script) to avoid the 2nd schedule.

It seems that everything is as it had been in the past with V4. Why?

Regards 
Stefan Holzwarth

 

-Original Message-
From: Raminder Braich [mailto:[EMAIL PROTECTED]]
Sent: Thursday, 13 February 2003 21:51
To: [EMAIL PROTECTED]
Subject: NT restore to different hardware revisited..


A few weeks ago I sent an email asking if anyone could give me some
directions on how to restore WINNT on different hardware. Thanks to
everyone who responded. I was able to accomplish the restore. It is a
little tricky but not hard.
   If anyone else would like to know the process, please send me an email
individually.

Regards

Raminder Braich
[EMAIL PROTECTED]



AW: Stable 5.1.5 client

2003-01-31 Thread Stefan Holzwarth
I have been using that level on 50 W2K servers for two days now. So far, no problems.

Regards, Stefan Holzwarth

-----Original Message-----
From: Gill, Geoffrey L. [mailto:[EMAIL PROTECTED]]
Sent: Thursday, 30 January 2003 23:35
To: [EMAIL PROTECTED]
Subject: Stable 5.1.5 client


Are we at a stable level on the NT client at 5.1.5.9? Has anyone used it
extensively enough to buy off on it?

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:mailto:[EMAIL PROTECTED] [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154



Problems with W2k Client 5.1.5.7

2003-01-15 Thread Stefan Holzwarth
Since upgrading the Windows 2000 client from 5.1.5.4 to 5.1.5.7,
I can see sessions at the TSM server in idle state. I believe they survive
the scheduler start and the query for the next task.
Does anyone have the same experience with the new fix level?
Our server is 4.2.3.0 on MVS 2.10 / 64-bit mode.

Stefan Holzwarth


--
Stefan Holzwarth
ADAC e.V. (Informationsverarbeitung - Systemtechnik - Basisdienste)
Am Westpark 8, 81373 München, Tel.: (089) 7676-5212, Fax: (089) 76768924
mailto:[EMAIL PROTECTED]



AW: AW: Group by problem for Storagereports

2002-11-15 Thread Stefan Holzwarth
Thanks for that trick - it works for me.
I made 6 new domains, one for each server category.

My Result:

tsm: ADSM> select cast(sum(filespaces.capacity/1024) as decimal(8,2)) as
"Installed GB", cast(sum(filespaces.capacity*pct_util/102400) as decimal(8,2))
as "Used GB", domains.description from nodes,filespaces,domains where
nodes.node_name=filespaces.node_name and
substr(nodes.contact,1,1)=domains.DOMAIN_name group by domains.description

Installed GB     Used GB  DESCRIPTION
------------  ----------  --------------
     2395.26     1018.27  Appl
      427.12      181.80  Database
     1809.32      890.50  File
     1659.71      379.36  Infrastruktur
      821.53      266.20  Mail
      354.38      135.46  SAP

(In the server description I use a number as the first letter for each
category, i.e. "4, Contact, ..."; 4 means file server.)

Regards 
Stefan Holzwarth


-----Original Message-----
From: Zlatko Krastev [mailto:acit;ATTGLOBAL.NET]
Sent: Friday, 15 November 2002 01:43
To: [EMAIL PROTECTED]
Subject: Re: AW: Group by problem for Storagereports


Look at my reply in the thread "SQL query - GROUP on derived value?" from
25.09.2001.
There is a workaround.
 
Zlatko Krastev
IT Consultant
 
 
 
 
 
 
Stefan Holzwarth [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
24.10.2002 18:13
Please respond to ADSM: Dist Stor Manager
 
 
To: [EMAIL PROTECTED]
cc: 
Subject:AW: Group by problem for Storagereports
 
 
Group by nodes.contact works, but
it groups by the whole string, not the first letter.
Regards,
Stefan Holzwarth
 
 
-----Original Message-----
From: Tomás Hrouda [mailto:throuda;HTD.CZ]
Sent: Thursday, 24 October 2002 16:12
To: [EMAIL PROTECTED]
Subject: Re: Group by problem for Storagereports
 
 
Did you try to use group by nodes.contact?
I tried your command (only without the section "filespace_name like '%\c$'" -
with it, it didn't work)
 
select substr(nodes.contact,1,1) as SERVERTYP,sum 
(filespaces.capacity),sum
(filespaces.capacity*pct_util/100) from nodes,filespaces where
nodes.node_name=filespaces.node_name group by nodes.contact
 
gives this report
 
 SERVERTYP:
Unnamed[2]: 20024.3
Unnamed[3]: 16880.92
 
Is it OK?
Hope this helps
 
Tom
 
 
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
Stefan Holzwarth
Sent: Thursday, October 24, 2002 2:19 PM
To: [EMAIL PROTECTED]
Subject: Group by problem for Storagereports
 
 
Hello,
 
I tried to create some storage reports about our NT servers with the
following select:
 
select substr(nodes.contact,1,1) as SERVERTYP ,sum (filespaces.capacity), 
-
  sum (filespaces.capacity*pct_util/100) from nodes,filespaces
where filespace_name like '%\c$' and nodes.node_name=filespaces.node_name
group by SERVERTYP
 
But:
===
ANR2940E The reference 'SERVERTYP' is an unknown SQL column name.
|
 ..V
 c$' and nodes.node_name=filespaces.node_name group by SERVERTYP
 
 
Any ideas how to group by the first letter of the description?

(I use the first letter for assigning the server to groups like mail,
application, file, ...)
 
Kind regards,
 
Stefan Holzwarth
 

--
Stefan Holzwarth
ADAC e.V. (Informationsverarbeitung - Systemtechnik - Basisdienste)
Am Westpark 8, 81373 München, Tel.: (089) 7676-5212, Fax: (089) 76768924
mailto:stefan.holzwarth;adac.de



AW: Error initializing TSM Api

2002-11-14 Thread Stefan Holzwarth
We see the same error on our W2K machines with TSM 5.1.5 using CAD.
No idea how to fix it, but it seems to run fine.
Regards Stefan Holzwarth

-----Original Message-----
From: Gill, Geoffrey L. [mailto:GEOFFREY.L.GILL;SAIC.COM]
Sent: Wednesday, 13 November 2002 18:47
To: [EMAIL PROTECTED]
Subject: Error initializing TSM Api


I see this message on a node when the scheduler service starts: Error
initializing TSM Api, unable to verify Registry Password, see dsierror.log.
 
1.  There is no dsierror.log being created; I searched all the drives.
2.  I tried updating the password through the GUI and with dsmcutil, but
the error remains.
3.  I've removed and reinstalled the scheduler service but still see this
error, and still no dsierror.log.
 
Client WIN2K TSM V5.1.5.2
Server AIX 4.3.3 TSM V5.1.5.2
 
Anyone else seen this?
Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:mailto:gillg;saic.com [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (877) 905-7154



Group by problem for Storagereports

2002-10-24 Thread Stefan Holzwarth
Hello,

I tried to create some storage reports about our NT servers with the
following select:

select substr(nodes.contact,1,1) as SERVERTYP ,sum (filespaces.capacity), -
  sum (filespaces.capacity*pct_util/100) from nodes,filespaces 
where filespace_name like '%\c$' and nodes.node_name=filespaces.node_name 
group by SERVERTYP

But:
===
ANR2940E The reference 'SERVERTYP' is an unknown SQL column name.
|
 ..V
 c$' and nodes.node_name=filespaces.node_name group by SERVERTYP


Any ideas how to group by the first letter of the description?

(I use the first letter for assigning the server to groups like mail,
application, file, ...)

Kind regards,

Stefan Holzwarth


--
Stefan Holzwarth
ADAC e.V. (Informationsverarbeitung - Systemtechnik - Basisdienste)
Am Westpark 8, 81373 München, Tel.: (089) 7676-5212, Fax: (089) 76768924
mailto:stefan.holzwarth;adac.de



AW: Group by problem for Storagereports

2002-10-24 Thread Stefan Holzwarth
Group by nodes.contact works, but
it groups by the whole string, not the first letter.
Regards,
Stefan Holzwarth


-----Original Message-----
From: Tomás Hrouda [mailto:throuda;HTD.CZ]
Sent: Thursday, 24 October 2002 16:12
To: [EMAIL PROTECTED]
Subject: Re: Group by problem for Storagereports


Did you try to use group by nodes.contact?
I tried your command (only without the section "filespace_name like '%\c$'" -
with it, it didn't work)
 
select substr(nodes.contact,1,1) as SERVERTYP,sum (filespaces.capacity),sum
(filespaces.capacity*pct_util/100) from nodes,filespaces where
nodes.node_name=filespaces.node_name group by nodes.contact
 
gives this report
 
 SERVERTYP:
Unnamed[2]: 20024.3
Unnamed[3]: 16880.92
 
Is it OK?
Hope this helps
 
Tom
 
 
-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L;VM.MARIST.EDU]On Behalf Of
Stefan Holzwarth
Sent: Thursday, October 24, 2002 2:19 PM
To: [EMAIL PROTECTED]
Subject: Group by problem for Storagereports
 
 
Hello,
 
I tried to create some storage reports about our NT servers with the
following select:
 
select substr(nodes.contact,1,1) as SERVERTYP ,sum (filespaces.capacity), -
  sum (filespaces.capacity*pct_util/100) from nodes,filespaces
where filespace_name like '%\c$' and nodes.node_name=filespaces.node_name
group by SERVERTYP
 
But:
===
ANR2940E The reference 'SERVERTYP' is an unknown SQL column name.
|
 ..V
 c$' and nodes.node_name=filespaces.node_name group by SERVERTYP
 
 
Any ideas how to group by the first letter of the description?

(I use the first letter for assigning the server to groups like mail,
application, file, ...)
 
Kind regards,
 
Stefan Holzwarth
 

--
Stefan Holzwarth
ADAC e.V. (Informationsverarbeitung - Systemtechnik - Basisdienste)
Am Westpark 8, 81373 München, Tel.: (089) 7676-5212, Fax: (089) 76768924
mailto:stefan.holzwarth;adac.de



AW: Calculating amount of data being backed up every 24 hours.

2002-10-02 Thread Stefan Holzwarth

All schedules that run past midnight are counted twice,
so your results are not correct.

Regards Stefan Holzwarth

-----Original Message-----
From: Karel Bos [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 2 October 2002 12:52
To: [EMAIL PROTECTED]
Subject: Re: Calculating amount of data being backed up every 24 hours.


Or
 
/*  */
/* Query TSM to make a daily summary*/
/*  */
set sqldatetimeformat i
set sqldisplaymode w
set sqlmathmode r
commit
select count(*) as Count, -
  case -
    when sum(bytes) > 1073741824 then -
     cast(sum(bytes)/1073741824 as varchar(24))||' Gb' -
    when sum(bytes) > 1048576 then -
     cast(sum(bytes)/1048576 as varchar(24))||' Mb' -
    when sum(bytes) > 1024 then -
     cast(sum(bytes)/1024 as varchar(24))||' Kb' -
else cast(sum(bytes) as varchar(24)) -
  end as Bytes, activity as Activity -
 from adsm.summary -
  where date(start_time) = current date - 1 day -
 or date(end_time) = current date - 1 day -
   group by activity
commit
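The CASE expression in the script above simply picks the largest binary unit for display. The same logic as a small Python helper (the function name `human_bytes` is invented here, not part of TSM):

```python
def human_bytes(n):
    """Pick the largest binary unit, mirroring the CASE expression above."""
    if n > 1073741824:          # 1 GiB
        return f"{n / 1073741824:.2f} Gb"
    if n > 1048576:             # 1 MiB
        return f"{n / 1048576:.2f} Mb"
    if n > 1024:                # 1 KiB
        return f"{n / 1024:.2f} Kb"
    return f"{n:.0f} bytes"

print(human_bytes(5 * 1073741824))
```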
 
 


 
 
 
-----Original Message-----
From: David E Ehresman [mailto:ISVILLE.deehre01@LOUEDU]
Sent: Friday, 27 September 2002 17:38
To: [EMAIL PROTECTED]
Subject: Re: Calculating amount of data being backed up every 24 hours.
 
 
Does anyone have any suggestions regarding how to calculate how much
data is being handled by TSM every day?


 
Get a trial of Servergraph/TSM, http://www.servergraph.com , and see if
it does what you want (and a whole lot more).
 
David Ehresman
A satisfied customer



AW: DELTA: Error generating the delta file

2002-09-11 Thread Stefan Holzwarth

Deleting the .file does the job (in any case, I think so). No need to clear
the entire cache. But it seems that the trigger for the misbehavior still
exists, and the error occurs again after a few backups.
Regards
Stefan holzwarth

-----Original Message-----
From: Prather, Wanda [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 11 September 2002 03:16
To: [EMAIL PROTECTED]
Subject: Re: DELTA: Error generating the delta file


We see about 1 of these a month, out of 400 Win2K machines backing up daily
with ALL files subject to subfile backups.  It does kill the backup.
 
I had not figured out what triggers it, so I'm glad to know.
 
Can you get around the problem by deleting the .file?
We've been blowing away the whole cache subdirectory.  
That lets the next backup complete OK  (although it will not be a subfile
backup). 
 
 
-Original Message-
From: Stefan Holzwarth
To: [EMAIL PROTECTED]
Sent: 9/10/2002 3:41 AM
Subject: AW: DELTA: Error generating the delta file
 
We also see this error from time to time.
With the error, TSM leaves a .file in the cache directory, which causes new
errors. I started a PMR with Tivoli.
Regards
Stefan holzwarth
 
-----Original Message-----
From: Bruce Lowrie [mailto:[EMAIL PROTECTED]]
Sent: Monday, 9 September 2002 22:32
To: [EMAIL PROTECTED]
Subject: DELTA: Error generating the delta file
 
 
All,
Receiving this error: DELTA: Error generating the delta file (reported to
the DSMERROR.LOG). Has anyone else seen this error? I cannot find any
documentation on it.
Bruce E. Lowrie
Sr. Systems Analyst
Information Technology Services
Storage, Output, Legacy
E-Mail: [EMAIL PROTECTED]
Voice: (989) 496-6404
Fax: (989) 496-6437
Post: 2200 W. Salzburg Rd., Mail: CO2111, Midland, MI 48686-0994
This e-mail transmission and any files that accompany it may contain
sensitive information belonging to the sender. The information is
intended
only for the use of the individual or entity named. If you are not the
intended recipient, you are hereby notified that any disclosure,
copying,
distribution, or the taking of any action in reliance on the contents of
this information is strictly prohibited. Dow Corning's practice
statement
for digitally signed messages may be found at
http://www.dowcorning.com/dcps. If you have received this e-mail
transmission in error, please immediately notify the Security
Administrator
at mailto:[EMAIL PROTECTED]
 
 
 
This email has been scanned for all viruses by the MessageLabs SkyScan
service. For more information on a proactive anti-virus service working
around the clock, around the globe, visit http://www.messagelabs.com


