Calculating Change in Backup Environment

2009-03-18 Thread Dennis, Melburn (IT Solutions US)
Currently, our backup environment employs a 90-day / 7-revision backup
policy.  My customer has come to me to find out how much our backup data
would grow or shrink if we went to a 21-day / 21-revision backup policy.
Have any of you out there experienced requests like this before, and if
so, how were you able to get a reasonable guesstimate of this?
 
Mel Dennis
Systems Engineer II
 
Siemens IT Solutions and Services, Inc.
Energy Data Center Operations
4400 Alafaya Trail
MC Q1-108
Orlando, FL 32826
Tel.: 407-736-2360
Mob: 321-356-9366
Fax: 407-243-0260
mailto:melburn.den...@siemens.com
www.usa.siemens.com/it-solutions
 


Re: Two different retention policies for the same node

2009-03-18 Thread Michael Green
On Tue, Mar 17, 2009 at 11:33 PM, Kelly Lipp l...@storserver.com wrote:

 Where are the people that use the data going to be?  How will
customers interact with them?

This is a very valid point! Luckily I'm not the one who needs to think
about it ;-)



 If you worry about DR application by application and think about all aspects of 
 using the data, the problem actually becomes simpler, since there is less data 
 to worry about, and the time frame to recover it is probably longer than you think.

 It's all about RPO and RTO.  In our sales practice, I'm spending a lot of 
 time consulting (during pre-sales so it's free) about DR issues.  Bottom 
 line: you need to have a very good plan, but since you will probably never 
 execute (beyond testing), you probably shouldn't spend too much time/money on 
 it.

 Kelly Lipp
 CTO
 STORServer, Inc.
 485-B Elkton Drive
 Colorado Springs, CO 80907
 719-266-8777 x7105
 www.storserver.com




Re: Two different retention policies for the same node

2009-03-18 Thread Michael Green
On Tue, Mar 17, 2009 at 11:18 PM, Conway, Timothy
timothy.con...@jbssa.com wrote:
 Is this to avoid having two copypools?  That's a reasonable goal.   I
 have only one copypool, which is my DR offsite pool.  Just make your
 onsite copypool an offsite pool, and you can give them 25 times better
 than they're asking for.

No, the idea is to keep a 7-day history offsite for a very few of the most
important servers (ERP, HR) on disk.  I don't much care whether that will
be a primary pool or a copy pool. As long as I can get my data back off it,
it's fine.
Today, I manage 3 servers here and am sitting on 0.5 PB of backup
data. There is no point in having all that data (most of which is
inactive) at the DR site (we do have an offsite vault, though). At the DR
site we want to keep preconfigured turn-key ERP and HR servers, a
preconfigured TSM server with its database, and SAN- or NAS-attached disk
that holds the 7-day history. I have yet to work out how and by what means
my 140GB database will get to the DR site on a daily basis. Maybe we will
use dedupe, or maybe we will open a separate TSM instance just for these
few servers so that the DB we have to copy to the DR site will be as small
as possible. Also, the smaller the DB, the better in a DR situation.

 Unless most of the data changes every day, the difference between 7 days
 and 180 days worth of copypool is remarkably small.

It can be big. The ERP server backs up over 100G nightly. I guess it
dedupes pretty well, though.


 If you have no copypool at all, the whole thing is a farce.
 If they're wanting fast portable full restores of a subset of the total
 nodes, how about backupsets?  Make a nodegroup containing all the nodes

Backup sets are fine as long as they are relatively small and you don't
have to generate them on a daily basis. Imagine your ERP is about 400GB
worth of active data and you have to generate a backup set that big on a
daily basis. I don't even know yet what kind of bandwidth I'll have to
our DR location. Assuming I get the backup set generated in 4-5 hours, how
many hours will be required to send it off?  Also, what happens if
management then decides they want a few more machines to join the first
one at the DR location? This solution sounds like a nice idea TSM-wise, but
imho it's not very scalable otherwise. As it looks to me, the best
approach is to back up locally, dedupe, and send it off deduped.


 they want daily fulls of, and make a backupset of that nodegroup every
 day.  Give the backupset a 7 day retention, and keep track of the
 volumes that are in the list of backupset volumes from one day that
 disappear the next (simple to script).  That same script can note tapes
 that show up in the list of backupset volumes that weren't there the day
 before, and check them out of the library and throw your operations team
 an email listing every tape to be sent offsite and to be recalled.  I
 find that I can generate FILE-class backup sets (with TOC) at about 27MB/s
 - 8.5 hours to do an 814GB node, single-threaded.
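
For the volume-tracking part, a sketch of the query such a script could
capture and diff from one day to the next (assuming a 5.x server, where
backup set volumes appear in the volume history):

   select date_time, volume_name from volhistory -
     where type='BACKUPSET' order by volume_name

Volumes that drop out of today's list are the ones to recall; new ones are
the ones to send offsite.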



Client Data Lost after client re-installation

2009-03-18 Thread Botelho, Tiago (External)
 

Hello all,

 

After a client node system disk failure, the system administrator changed
the failed disk, installed the OS, and made a manual backup of the client
(same node name and IP as before).

 

Now, when I search for old data (from before the disk failure), no data is
found.

 

An expire inventory was run after the backup ... I think that the
inactive files should still be shown. The backup copy group settings are:
Versions Data Deleted = 3; Retain Only Version = 360 days.

 

I tried the client option to show active and inactive files, and the
point-in-time options, but no old data was found (only the data from the
manual backup made after the node disk failure).

 

Any idea what the problem could be? The TSM node ID?

 

Any suggestions on how to solve this problem and get the old files back?

 

 

Thank you for your help

 

Best Regards

Tiago

 

 


Re: Client Data Lost after client re-installation

2009-03-18 Thread Richard Sims

When posting, provide basic information such as platform type.
If this was Windows, perhaps the disk letter changed.
Is your client options file exactly the same as before?
Perform 'dsmc query filespace' as your starting point toward checking
what's in TSM server storage.

   Richard Sims


FILE devices the 8340/511/514 message triplet

2009-03-18 Thread Nick Laflamme
I have my copypools set up to deliver most data to FILE device storage
pools. I used to use DISK, but FILE seems to replace DISK in many ways, so
I'm using that for a new server.

Alas, during backups, my consoles get overrun with ANR8340I, 0511I, and
0514I messages:

03/18/2009 09:50:37 ANR0514I Session 2581 closed volume
/tsmstg/part02/01E0.BFS. (SESSION: 2581)
03/18/2009 09:50:37 ANR8340I FILE volume /tsmstg/part02/01E0.BFS
mounted. (SESSION: 2581)
03/18/2009 09:50:37 ANR0511I Session 2581 opened output volume
/tsmstg/part02/01E0.BFS. (SESSION: 2581)
03/18/2009 09:50:38 ANR0514I Session 2581 closed volume
/tsmstg/part02/01E0.BFS. (SESSION: 2581)

FILE devices don't have Mount Retention settings; I can't ask TSM to leave a
volume open for 1 minute in case the client comes back again for it.

For the sake of my activity log and my console sessions, is there a way to
tell TSM not to be so aggressive about dismounting and/or closing these
volumes every time a session hesitates for a moment?

Or should I just tell myself I'm abusing FILE devices and ought to use DISK
storage pools to receive incoming data?
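
The only tuning I can see is at the device class level: larger volumes and
more mount points mean fewer open/close transitions. A sketch, with
illustrative values (the device class name here is made up):

   define devclass BIGFILE devtype=file directory=/tsmstg/part02 -
     maxcapacity=50G mountlimit=32

This reduces how often a session crosses a volume boundary, though it does
not suppress the mount/dismount messages themselves.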

Thanks,
Nick


Re: Client Data Lost after client re-installation

2009-03-18 Thread Botelho, Tiago (External)
Hello,

 

Regarding the client platform: it runs IRIX 6.5, and the TSM server is an AIX 5.2.2 
system running TSM 5.2.3.5.

 

Additional information regarding the client:

 

  Node Name: #
     Platform: IRIX
     Client OS Level: 6.5
     Client Version: Version 5, Release 1, Level 5.14
     Policy Domain Name: CAX_SERVERS
     Last Access Date/Time: 03/18/09 14:20:23
     Days Since Last Access: 1
     Password Set Date/Time: 07/10/07 08:37:59
     Days Since Password Set: 617
     Invalid Sign-on Count: 0
     Locked?: No
     Contact: Tivoli Administrators
     Compression: Client
     Archive Delete Allowed?: Yes
     Backup Delete Allowed?: No
     Registration Date/Time: 03/07/05 11:53:38
     Registering Administrator:
     Last Communication Method Used: Tcp/Ip
     Bytes Received Last Session: 5,580
     Bytes Sent Last Session: 1.10 M
     Duration of Last Session: 585.95
     Pct. Idle Wait Last Session: 25.99
     Pct. Comm. Wait Last Session: 54.16
     Pct. Media Wait Last Session: 4.81
     Optionset: IRIX
     URL: http://client.host.name:1581
     Node Type: Client
     Password Expiration Period:
     Keep Mount Point?: No
     Maximum Mount Points Allowed: 2
     Auto Filespace Rename: No
     Validate Protocol: No
     TCP/IP Name: ##
     TCP/IP Address: ##
     Globally Unique ID:
     Transaction Group Max: 0
     Data Write Path: ANY
     Data Read Path: ANY
     Session Initiation: ClientOrServer
     High-level Address:
     Low-level Address:

 

 

 

 

Session established with server TSM01: AIX-RS/6000
  Server Version 5, Release 2, Level 3.5
  Server date/time: 03/18/09 14:49:46  Last access: 03/18/09 14:43:54

tsm: TSM01> q file f=d

 

  Node Name:
     Filespace Name: /
     Hexadecimal Filespace Name:
     FSID: 1
     Platform: IRIX
     Filespace Type: XFS
     Is Filespace Unicode?: No
     Capacity (MB): 13,262.1
     Pct Util: 34.1
     Last Backup Start Date/Time: 03/18/09 09:26:02
     Days Since Last Backup Started: 1
     Last Backup Completion Date/Time: 03/17/09 03:00:17
     Days Since Last Backup Completed: 1
     Last Full NAS Image Backup Completion Date/Time:
     Days Since Last Full NAS Image Backup Completed:

  Node Name:
     Filespace Name: /usr2
     Hexadecimal Filespace Name:
     FSID: 2
     Platform: IRIX
     Filespace Type: XFS
     Is Filespace Unicode?: No
     Capacity (MB): 17,359.9
     Pct Util: 68.3
     Last Backup Start Date/Time: 03/15/09 03:09:08
     Days Since Last Backup Started: 3
     Last Backup Completion Date/Time: 03/15/09 03:10:56
     Days Since Last Backup Completed: 3
     Last Full NAS Image Backup Completion Date/Time:
     Days Since Last Full NAS Image Backup Completed:

  Node Name: ##
     Filespace Name: /usr3
     Hexadecimal Filespace Name:
     FSID: 3
     Platform: IRIX
     Filespace Type: XFS
     Is Filespace Unicode?: No
     Capacity (MB): 139,995.4
     Pct Util: 76.7
     Last Backup Start Date/Time: 03/15/09 03:05:39
     Days Since Last Backup Started: 3
     Last Backup Completion Date/Time: 03/15/09 03:09:08
     Days Since Last Backup Completed: 3
     Last Full NAS Image Backup Completion Date/Time:
     Days Since Last Full NAS Image Backup Completed:

more...   (ENTER to continue, 'C' to cancel)

 

 

tsm: TSM01

tsm: TSM01> q file ###

Node Name   Filespace   FSID   Platform   Filespace   Is Files-   Capacity    Pct
            Name                          Type        pace            (MB)   Util
                                                      Unicode?
---------   ---------   ----   --------   ---------   ---------   --------   ----

Re: Client Data Lost after client re-installation

2009-03-18 Thread Richard Sims

Your filespace queries show that you definitely have filespaces in
TSM server storage.  If you now perform 'dsmc query backup -
subdir=yes -inactive /usr3/' AS ROOT, that will report all the files
that exist in TSM server storage for that file system.  (I would
advise not using the GUI for assured verification queries like
this.)  You might additionally go to the TSM server and perform Query
OCCupancy to further check inventory numbers, and look for
anomalies involving the node in the Activity Log (such as ANE backup
summary messages reflecting outrageous objects-expired numbers, or
unusually high Expire Inventory result numbers).

   Richard Sims
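
For example (substituting the real node name for THENODE), on the client as
root:

   dsmc query backup -subdir=yes -inactive /usr3/

and on the server:

   query occupancy THENODE
   query actlog begindate=-7 search=THENODE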


is TSM Client-encrypted data still compressable on the 3592 Drives ?

2009-03-18 Thread Rainer Wolf

Hi All,

we normally recommend 'not using TSM compression' because the
fantastic 3592 drives do the compression very well and fast.

If users want to encrypt their data with the TSM client, I tend to
recommend also using compression, because the data would first be compressed
and then encrypted (on the client).
This should help save some space on the tapes, but it is only an assumption,
and possibly compression is not essential.

My question is:
if I have a 10GB file and it would appear (without client compression and
without client encryption) as 6GB on the tape (after the hardware compression) ...

... is it possible to say what happens if I set up TSM encryption (AES128)
and send this file again, now encrypted?
Will this data appear
   closer to 6GB, closer to 10GB, or somewhere in between?
Or is it something completely unpredictable?
Statistics would also be interesting.

If it is closer to 10GB, it makes sense to use TSM client compression just to
save space.
Given the recently discussed problems with restoring TSM-compressed data that
is already compressed by other software, should the compressalways option also
be avoided there, to prevent problems at restore time?
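
For reference, the client options involved would be something like the
following (a sketch of a dsm.sys/dsm.opt stanza; the include pattern is
made up):

   compression     yes
   compressalways  no
   encryptiontype  aes128
   include.encrypt /data/.../*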

thanks in advance for any hints
Rainer


--

Rainer Wolf  eMail:   rainer.w...@uni-ulm.de
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de


Re: volumes in a collocation group?

2009-03-18 Thread Botelho, Tiago (External)
Hi Larry,

I think my TSM server does not have this DB table (version 5.2), but I
will try to help you or give you some ideas.

Try something like this:

select volume_name from volumeusage where stgpool_name='STORAGEPOOL' group by
volume_name

Try to find where the collocation group records are (maybe in the NODEDATA table).

To see all the tables in the DB from the TSM server command line:

h select
select * from tables
select * from columns
...


Useful scripts can be built against the TSM DB.


Best regards 

Tiago



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Larry 
Clark
Sent: sábado, 14 de Março de 2009 22:13
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] volumes in a collocation group?

Hi,
Would someone have a query that returns all the volumes in a particular
collocation group (within a copypool)?


Lost admin password

2009-03-18 Thread Mario Behring
Hi list,

The admin password (for CLI administration access) was lost. How can I change 
it without knowing the current password?

Mario


Re: volumes in a collocation group?

2009-03-18 Thread Fred Johanson
Q noded col=name-of-collocpool stg=name-of-stgpool

Fred Johanson
TSM Administrator
University of Chicago

773-702-8464


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Botelho, Tiago (External)
Sent: Wednesday, March 18, 2009 11:27 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] volumes in a collocation group?

Hi Larry,

I think my TSM server does not have this DB table (version 5.2), but I
will try to help you or give you some ideas.

Try something like this:

select volume_name from volumeusage where stgpool_name='STORAGEPOOL' group by
volume_name

Try to find where the collocation group records are (maybe in the NODEDATA table).

To see all the tables in the DB from the TSM server command line:

h select
select * from tables
select * from columns
...


Useful scripts can be built against the TSM DB.


Best regards 

Tiago



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Larry 
Clark
Sent: sábado, 14 de Março de 2009 22:13
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] volumes in a collocation group?

Hi,
Would someone have a query that returns all the volumes in a particular
collocation group (within a copypool)?


Re: Lost admin password

2009-03-18 Thread km
On 18/03, Mario Behring wrote:
 Hi list,

 The admin password (for CLI administration access) was lost. How can I change 
 it without knowing the current password?

 Mario

Start dsmserv in console mode. Shut TSM down and start it with
'dsmserv' from the shell or a command prompt. You now have a console session.

Change the password, halt the server and start it normally again.
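
In commands, that is roughly (the administrator name and new password are
placeholders):

   (stop the TSM service/daemon)
   dsmserv                            (start in the foreground console)
   update admin ADMINNAME NewPassw0rd
   halt
   (then start the server normally again)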

-km


Re: volumes in a collocation group?

2009-03-18 Thread Larry Clark
Thanks Fred. That was mentioned previously, but it does not show whether
servers outside the colloc group are also stored on those volumes.

Larry Clark
(518) 712-5138 Home Office


- Original Message - 
From: Fred Johanson f...@uchicago.edu

To: ADSM-L@VM.MARIST.EDU
Sent: Wednesday, March 18, 2009 12:34 PM
Subject: Re: [ADSM-L] volumes in a collocation group?


Q noded col=name-of-collocpool stg=name-of-stgpool

Fred Johanson
TSM Administrator
University of Chicago

773-702-8464


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Botelho, Tiago (External)

Sent: Wednesday, March 18, 2009 11:27 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] volumes in a collocation group?

Hi Larry,

I think my TSM server does not have this DB table (version 5.2), but I
will try to help you or give you some ideas.

Try something like this:

select volume_name from volumeusage where stgpool_name='STORAGEPOOL' group by
volume_name

Try to find where the collocation group records are (maybe in the NODEDATA table).

To see all the tables in the DB from the TSM server command line:

h select
select * from tables
select * from columns
...


Useful scripts can be built against the TSM DB.


Best regards

Tiago



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Larry Clark

Sent: sábado, 14 de Março de 2009 22:13
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] volumes in a collocation group?

Hi,
Would someone have a query that returns all the volumes in a particular
collocation group (within a copypool)?


Re: volumes in a collocation group?

2009-03-18 Thread Bob Levad
NOTES: 

These may run for a while.  Watch for wrapped lines to fix.

In the first query, watch for Number of Groups greater than 1 when the
storage pool is supposed to be collocated by group.

In the second query, you will see a line item for each group on each tape.

All of our nodes belong to a group, even if there is only one node in it.


*


The following query will show:
Pool name, Access, Volume name, # Groups, # Nodes, # FSpaces, % Util, % Recl


select -
  cast(vu.stgpool_name as char(9)) as Pool, -
  (select access from volumes where volume_name=vu.volume_name) as Access, -
  cast(vu.volume_name as char(9)) as Volume, -
  cast(count(distinct nd.collocgroup_name) as decimal(4,0)) as Groups, -
  cast(count(distinct vu.node_name) as decimal(3,0)) as Nodes, -
  cast(count(distinct vu.filespace_name) as decimal(5,0)) as FSpaces, -
  (select pct_utilized from volumes where volume_name=vu.volume_name) as Util, -
  (select pct_reclaim from volumes where volume_name=vu.volume_name) as Recl -
from -
  volumeusage vu, nodes nd -
where -
  vu.node_name=nd.node_name -
group by -
  vu.stgpool_name, vu.volume_name -
order by -
  Groups desc, Nodes desc, FSpaces desc







This query will show:

Pool, Access, Volume name, Group name, # Nodes, # FSpaces, % Util, % Recl



select -
  cast(vu.stgpool_name as char(9)) as Pool, -
  (select access from volumes where volume_name=vu.volume_name) as Access, -
  cast(vu.volume_name as char(9)) as Volume, -
  cast(nd.collocgroup_name as char(7)) as Groups, -
  cast(count(distinct vu.node_name) as decimal(3,0)) as Nodes, -
  cast(count(distinct vu.filespace_name) as decimal(5,0)) as FSpaces, -
  (select pct_utilized from volumes where volume_name=vu.volume_name) as Util, -
  (select pct_reclaim from volumes where volume_name=vu.volume_name) as Recl -
from -
  volumeusage vu, nodes nd -
where -
  vu.node_name=nd.node_name -
group by -
  vu.stgpool_name, vu.volume_name, nd.collocgroup_name -
order by -
  Access, Pool, Volume, Groups, Nodes desc, FSpaces desc



***

Bob


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Larry Clark
Sent: Wednesday, March 18, 2009 12:28 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] volumes in a collocation group?

Thanks Fred. That was mentioned previously, but it does not show whether
servers outside the colloc group are also stored on those volumes.
Larry Clark
(518) 712-5138 Home Office


- Original Message - 
From: Fred Johanson f...@uchicago.edu
To: ADSM-L@VM.MARIST.EDU
Sent: Wednesday, March 18, 2009 12:34 PM
Subject: Re: [ADSM-L] volumes in a collocation group?


Q noded col=name-of-collocpool stg=name-of-stgpool

Fred Johanson
TSM Administrator
University of Chicago

773-702-8464


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Botelho, Tiago (External)
Sent: Wednesday, March 18, 2009 11:27 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] volumes in a collocation group?

Hi Larry,

I think my TSM server does not have this DB table (version 5.2), but I
will try to help you or give you some ideas.

Try something like this:

select volume_name from volumeusage where stgpool_name='STORAGEPOOL' group by
volume_name

Try to find where the collocation group records are (maybe in the NODEDATA table).

To see all the tables in the DB from the TSM server command line:

h select
select * from tables
select * from columns
...


Useful scripts can be built against the TSM DB.


Best regards

Tiago



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Larry Clark
Sent: sábado, 14 de Março de 2009 22:13
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] volumes in a collocation group?

Hi,
Would someone have a query that returns all the volumes in a particular
collocation group (within a copypool)?



Re: is TSM Client-encrypted data still compressable on the 3592 Drives ?

2009-03-18 Thread Clark, Robert A
Is it still true that client-side compression requires paging in the
whole object before it can be compressed?

Tying up 10GB of RAM and/or paging/swap space (or more, in the case of
larger objects) used to be a big enough concern to recommend against the
practice,

especially on Windows, where memory was so constrained and using up
available RAM tended to cause the app to die.

[RC]

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Rainer Wolf
Sent: Wednesday, March 18, 2009 8:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] is TSM Client-encrypted data still compressable on the
3592 Drives ?

Hi All,

we normally recommend 'not using TSM compression' because the fantastic
3592 drives do the compression very well and fast.

If users want to encrypt their data with the TSM client, I tend to
recommend also using compression, because the data would first be
compressed and then encrypted (on the client).
This should help save some space on the tapes, but it is only an
assumption, and possibly compression is not essential.

My question is:
if I have a 10GB file and it would appear (without client compression
and without client encryption) as 6GB on the tape (after the hardware
compression) ...

... is it possible to say what happens if I set up TSM
encryption (AES128) and send this file again, now encrypted?
Will this data appear
   closer to 6GB, closer to 10GB, or somewhere in between?
Or is it something completely unpredictable?
Statistics would also be interesting.

If it is closer to 10GB, it makes sense to use TSM client compression
just to save space.
Given the recently discussed problems with restoring TSM-compressed
data that is already compressed by other software, should the
compressalways option also be avoided there, to prevent problems at
restore time?

thanks in advance for any hints
Rainer


--

Rainer Wolf  eMail:   rainer.w...@uni-ulm.de
kiz - Abt. Infrastruktur   Tel/Fax:  ++49 731 50-22482/22471
Universitaet Ulm wwweb:http://kiz.uni-ulm.de




Re: Client hangs when TSM Client Acceptor starts TSM Client Scheduler.

2009-03-18 Thread Clark, Robert A
I looked into everything I could, and didn't find anything useful.

Dsmcad is no longer managing the scheduler, and that seems to make the
problem go away.

This one goes on my list of known issues that I hope to resolve and
document in the future.

Thanks, [RC] 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Richard Sims
Sent: Tuesday, March 10, 2009 12:42 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Client hangs when TSM Client Acceptor starts TSM
Client Scheduler.

Things I would first check:

That client and server are syncing to a time standard, for overall
sanity.
Check for any errors in dsmerror.log.
On the server side, do 'Query SEssion F=D' and see what's going on with
that node, and inspect the Activity Log for issues regarding the
session.  Backups are relatively low priority, so long waits for tape
drives are possible.
In Windows, look for any questionable Event Log entries, and see if the
dsmc scheduler process is using any CPU, if the server indicates it is
healthy.

Richard Sims




New TSM set up

2009-03-18 Thread ashish sharma
Hello all,

I have received a new assignment to set up backup infrastructure at one of
our sites. The details provided by the site's local admin are:

Below are two symmetrical platforms.
Each platform is made of:
- 4 HP-UX servers (BL870c and BL860c): one campus cluster (HP-OVO), two metro
cluster (Oracle DB), two stand-alone servers (no cluster needed)
- about 15 x86 servers (BL460c) mainly running Win2003: one is running an
Efficient IP appliance for DNS, one other is running RedHat for... TSM
- Mirapoint mail (+ its own backup) + LDAP (no need to back up)
- BlueCoat proxy
- F5 load balancer
- NetApp NAS (metro cluster)
- EMC DMX950 SAN (metro cluster with SRDF + BCV)
All HP components are installed in 3 c7000 blade enclosures per platform.

There is a library with 4 drives in it and room for 3 more for expansion.
The TSM server is to be installed on the Linux platform.

Can the experts suggest the best setup for this environment, as I am not an
implementation expert? Please let me know if more information is needed, as
I am dependent on the local admin for this.



--
Best Regards
Ashish Sharma


Re: Calculating Change in Backup Environment

2009-03-18 Thread Steven Harris
Dennis,

FWIW, I don't like a N days/N versions strategy.  It can have unexpected
side effects.

N Days/unlimited versions has more predictable behaviour and also is more
efficient at expiry time.

Regards

Steve.

Steven Harris
TSM Admin, Sydney Australia




From: Dennis, Melburn (IT Solutions US) <melburn.den...@siemens.com>
Sent by: ADSM: Dist Stor Manager <ads...@vm.marist.edu>
To: ADSM-L@VM.MARIST.EDU
Date: 18/03/2009 11:46 PM
Subject: [ADSM-L] Calculating Change in Backup Environment
Please respond to: ADSM: Dist Stor Manager <ads...@vm.marist.edu>

Currently, our backup environment employs a 90-day / 7-revision backup
policy.  My customer has come to me to find out how much our backup data
would grow or shrink if we went to a 21-day / 21-revision backup policy.
Have any of you out there experienced requests like this before, and if
so, how were you able to get a reasonable guesstimate of this?

Mel Dennis
Systems Engineer II

Siemens IT Solutions and Services, Inc.
Energy Data Center Operations
4400 Alafaya Trail
MC Q1-108
Orlando, FL 32826
Tel.: 407-736-2360
Mob: 321-356-9366
Fax: 407-243-0260
mailto:melburn.den...@siemens.com
www.usa.siemens.com/it-solutions


Backup up a NetApp Filer without using NDMP

2009-03-18 Thread Brian G. Kunst
Hello all,

We have a customer with a NetApp Filer they want to back up to our TSM system.  
For various reasons, we can't support using NDMP in our environment.  Does 
anyone out there currently do backups of a Filer without using NDMP?  If so, 
what method did you employ?

Thank you,

--
Brian Kunst
Storage Administrator
University of Washington - Technology Services


Re: Two different retention policies for the same node

2009-03-18 Thread Steven Harris
Michael,

I have two ideas about your problem.

Idea 1.  Create another domain for your high-priority servers, with the
7-day retention period.  Move the nodes into this domain. Create new nodes
for these machines in the old domain with different names.  For machines
performing a normal incremental, run two backups every day, one to each
node name.  For machines with too many files for that, run a weekly
incremental to the longer-retention domain.  For databases/TDP nodes, run a
weekly extra backup to the longer-retention domain.  Explain carefully to
management that the coverage of the longer-retention data now has
holes in it.

Idea 2.  The 7-day retention for offsite sounds like a simple-minded
notion of what might be nice to have.  What they really want is an
active-data pool, but they aren't comfortable with it.  Create an
active-data pool daily for the high-priority data.  Send it offsite to
disk.  Set the DB expiration to not less than 7 days.  Set the active-data
pool's pending delay to 7 days. Continue to send your normal copypool tapes
offsite.  Have a small library offsite too.

If you have a disaster, all your active data is instantly available.  If
you need day -n, restore the DB for that day and the active-data pool
contents for that day will be available. Alternatively, use your small
library and restore day -n data from the usual copypool tapes.
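
In commands, idea 2 might look like the following (a sketch; the pool,
domain, and device class names are made up):

   define stgpool DR_ACTIVE DRFILE pooltype=activedata maxscratch=200 -
     reusedelay=7
   update domain PROD_HIGH activedestination=DR_ACTIVE
   copy activedata DISKPOOL DR_ACTIVE wait=yes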

One loose end: if your ERP is SAP, the backups are actually TSM
archives.  I'm not sure how active-data pools work with archives and don't
have the time to look it up now.

Regards

Steve.


Steven Harris
TSM Admin, Sydney Australia



   
From: Michael Green <mishagr...@gmail.com>
Sent by: ADSM: Dist Stor Manager <ads...@vm.marist.edu>
To: ADSM-L@VM.MARIST.EDU
Date: 19/03/2009 12:27 AM
Subject: Re: [ADSM-L] Two different retention policies for the same node
Please respond to: ADSM: Dist Stor Manager <ads...@vm.marist.edu>




On Tue, Mar 17, 2009 at 11:18 PM, Conway, Timothy
timothy.con...@jbssa.com wrote:
 Is this to avoid having two copypools?  That's a reasonable goal.   I
 have only one copypool, which is my DR offsite pool.  Just make your
 onsite copypool an offsite pool, and you can give them 25 times better
 than they're asking for.

No, the idea is to keep a 7-day history offsite for a very few of the most
important servers (ERP, HR) on disk.  I don't much care whether that will
be a primary pool or a copy pool. As long as I can get my data back off it,
it's fine.
Today, I manage 3 servers here and am sitting on 0.5 PB of backup
data. There is no point in having all that data (most of which is
inactive) at the DR site (we do have an offsite vault, though). At the DR
site we want to keep preconfigured turn-key ERP and HR servers, a
preconfigured TSM server with its database, and SAN- or NAS-attached disk
that holds the 7-day history. I have yet to work out how and by what means
my 140GB database will get to the DR site on a daily basis. Maybe we will
use dedupe, or maybe we will open a separate TSM instance just for these
few servers so that the DB we have to copy to the DR site will be as small
as possible. Also, the smaller the DB, the better in a DR situation.

 Unless most of the data changes every day, the difference between 7 days
 and 180 days worth of copypool is remarkably small.

It can be big. The ERP server backs up over 100G nightly. I guess it
dedupes pretty well, though.


 If you have no copypool at all, the whole thing is a farce.
 If they're wanting fast portable full restores of a subset of the total
 nodes, how about backupsets?  Make a nodegroup containing all the nodes

Backup sets are fine as long as they are relatively small and you don't
have to generate them on a daily basis. Imagine your ERP is about 400GB
worth of active data 

Re: Backup up a NetApp Filer without using NDMP

2009-03-18 Thread Shawn Drew
Depending on your various reasons, you might still want to go with
NDMP.  If you don't want to use it because of a lack of SAN/tape drives,
you can still back up over IP to the normal storage pool hierarchy.
In Chapter 7 of the Administrator's Guide, look at the section
Performing NDMP Filer to Tivoli Storage Manager Server Backups for
details.

Otherwise, the other option is to mount the file systems and back them up
with a normal B/A client (preferably from the TSM server itself).
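
For the second option, a sketch (the filer, volume, and mount point names
are made up):

   mount filer1:/vol/vol1 /mnt/vol1
   dsmc incremental /mnt/vol1

Expect this to walk millions of files over NFS, so it is slow compared to
NDMP, but the data lands in normal storage pools and restores with the
ordinary B/A client.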

Regards,
Shawn

Shawn Drew





From: bku...@u.washington.edu
Sent by: ADSM-L@VM.MARIST.EDU
Date: 03/18/2009 05:28 PM
To: ADSM-L
Subject: [ADSM-L] Backup up a NetApp Filer without using NDMP
Please respond to: ADSM-L@VM.MARIST.EDU





Hello all,

We have a customer with a NetApp Filer they want to back up to our TSM
system.  For various reasons, we can't support using NDMP in our
environment.  Does anyone out there currently do backups of a Filer
without using NDMP?  If so, what method did you employ?

Thank you,

--
Brian Kunst
Storage Administrator
University of Washington - Technology Services




Re: Calculating Change in Backup Environment

2009-03-18 Thread Kelly Lipp
First, you aren't really doing what you think you're doing: providing the 
ability to restore data back to any point in the previous 90 days.  You get 
seven days for a file that changes every day.

Steve is right: you should probably do something like verexists=unlimited and 
retextra=21.  That way, if you do a backup more than once a day you can still 
get 21 days' worth of restores.
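
In copy group terms, that would be something like the following (the domain
and policy set names are hypothetical; note the server spells unlimited as
NOLIMIT):

   update copygroup PRODDOM STANDARD STANDARD type=backup -
     verexists=nolimit verdeleted=3 retextra=21 retonly=60
   validate policyset PRODDOM STANDARD
   activate policyset PRODDOM STANDARD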

This boils down to how many files really change 21 times in 21 days.  Most 
files won't do that.  Large things, like SQL and Exchange databases will.  But 
you wouldn't really want to restore your SQL database back 21 days in any 
event.  How would you get back to scratch?  You couldn't.  So keeping these 
types of things for shorter periods makes more sense.

Anecdotally, my experience has been that increasing/reducing the numbers does 
not generally result in a ton more storage consumed/saved as most files are 
created and then never changed (or deleted).  They get backed up the one time 
and then live until they are deleted and retonly is reached.
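
If you want to ground the estimate in your own numbers, one approach
(assuming your server's SUMMARY table retains enough history) is to measure
how much each node actually backs up over a 21-day window:

   select entity as node_name, sum(bytes)/1024/1024/1024 as gb_21days -
     from summary -
     where activity='BACKUP' and start_time>current_timestamp - 21 days -
     group by entity

Only files that really change 21 times in 21 days cost you extra versions,
so this is an upper bound on the growth.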

I would definitely get to the 21/21 soon, though, as your user's expectations 
are probably different from what you're providing!

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Steven 
Harris
Sent: Wednesday, March 18, 2009 3:24 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Calculating Change in Backup Environment

Dennis,

FWIW, I don't like a N days/N versions strategy.  It can have unexpected
side effects.

N Days/unlimited versions has more predictable behaviour and also is more
efficient at expiry time.

Regards

Steve.

Steven Harris
TSM Admin, Sydney Australia




From: Dennis, Melburn (IT Solutions US) <melburn.den...@siemens.com>
Sent by: ADSM: Dist Stor Manager <ads...@vm.marist.edu>
To: ADSM-L@VM.MARIST.EDU
Date: 18/03/2009 11:46 PM
Subject: [ADSM-L] Calculating Change in Backup Environment
Please respond to: ADSM: Dist Stor Manager <ads...@vm.marist.edu>

Currently, our backup environment employs a 90-day / 7-revision backup
policy.  My customer has come to me to find out how much our backup data
would grow or shrink if we went to a 21-day / 21-revision backup policy.
Have any of you out there experienced requests like this before, and if
so, how were you able to get a reasonable guesstimate of this?

Mel Dennis
Systems Engineer II

Siemens IT Solutions and Services, Inc.
Energy Data Center Operations
4400 Alafaya Trail
MC Q1-108
Orlando, FL 32826
Tel.: 407-736-2360
Mob: 321-356-9366
Fax: 407-243-0260
mailto:melburn.den...@siemens.com
www.usa.siemens.com/it-solutions


Using NDMP backups to migrate from one Celerra to another

2009-03-18 Thread Schneider, John
Greetings,
Our Storage Team is migrating all filesystems from one Celerra NAS
to another.  Today we back up the old one via NDMP, and are beginning to
back up filesystems on the new one via NDMP as they come online.
The Storage Team would like to use the NDMP backups as a way to
migrate the large amounts of data to the new NAS filesystems.  They
have decided that they don't like using utilities like Robocopy or
Rainfinity: they experimented with both, and because of the
millions of small files involved, both would take a long time and
adversely affect the performance of the NAS filesystems involved.
The strategy using the NDMP backups would be:
 
1) Take a backup for a filesystem on OLDNAS, and restore it to a
different filesystem name on NEWNAS. 
2) Once the restore is completed, use Robocopy or such utility to keep
the two filesystems in sync with changes that are still happening on the
filesystem on OLDNAS.
3) On the day of the cutover, stop all traffic to the filesystem, use
Robocopy to sync one last time
4) Switch the NAS mappings over to the new filesystem on NEWNAS.
5) Start doing NDMP backups to new filesystem on NEWNAS.
6) Repeat for every filesystem that needs to be migrated.
 
Sounds easy enough, but I can't figure out any practical way to restore
an NDMP backup from OLDNAS to NEWNAS.  The Restore Node command doesn't
provide a way to redirect the backup data to another NAS.
Theoretically, I could update the TSM NDMP definition for OLDNAS to
refer to NEWNAS; then, when I do the Restore Node, it would restore the
data to NEWNAS.  I would also have to update the tape path definitions
so they are correct for NEWNAS. (I have not actually tried this.)  But
in practice that means that for the duration of the restore I can't do
any NDMP backups or restores for OLDNAS.  If it were a small filesystem
I could be sure of restoring in a few hours, that would be OK.  But the
whole reason we are doing this is some multiple-TB filesystems that will
not restore in a few hours.  Today one of them takes 18 hours to back up,
and we have never tried restoring it.  That could take a couple of days,
and we can't go that long without backups.  Or at least we don't want to.
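
In commands, that theoretical redirection would be something like the
following (untested, as noted; the addresses, library, and device names are
placeholders):

   update datamover OLDNAS hladdress=<NEWNAS-address>
   update path OLDNAS DRIVE1 srctype=datamover desttype=drive -
     library=NASLIB device=<NEWNAS-tape-device>
   restore node OLDNAS /old_fs /old_fs_copy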
 
Any brilliant ideas on how to accomplish this?  Is there a way to
restore NDMP backups created on one Celerra to another without
disrupting everything? Or should I not even be trying?  :-)
 

Best Regards,

John D. Schneider 
Lead Systems Administrator - Storage 
Sisters of Mercy Health Systems 
3637 South Geyer Road 
St. Louis, MO  63127 
Phone: 314-364-3150 
Cell: 314-750-8721 
Email:  john.schnei...@mercy.net

 


Re: Using NDMP backups to migrate from one Celerra to another

2009-03-18 Thread Arthur Poon
I have a question about the aggregate data transfer rate. Why is it always
a lot slower than the network data transfer rate?

For example:
03/18/09 15:37:37  ANE4961I (Session: 18137, Node: MFDCITRIX)  Total number of bytes transferred: 13.61 GB (SESSION: 18137)
03/18/09 15:37:37  ANE4963I (Session: 18137, Node: MFDCITRIX)  Data transfer time:  914.17 sec (SESSION: 18137)
03/18/09 15:37:37  ANE4966I (Session: 18137, Node: MFDCITRIX)  Network data transfer rate:  15,618.11 KB/sec (SESSION: 18137)
03/18/09 15:37:37  ANE4967I (Session: 18137, Node: MFDCITRIX)  Aggregate data transfer rate:  2,220.91 KB/sec (SESSION: 18137)
03/18/09 15:37:37  ANE4968I (Session: 18137, Node: MFDCITRIX)  Objects compressed by:  14% (SESSION: 18137)
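
For what it's worth, the two messages divide the same byte count by
different denominators: the network rate (ANE4966I) uses only the data
transfer time, while the aggregate rate (ANE4967I) uses the total elapsed
session time, including idle, comm, and media waits.  With the numbers
above:

   13.61 GB / 914.17 sec       = ~15,618 KB/sec  (network rate)
   13.61 GB / 2,220.91 KB/sec  = ~6,426 sec of total session time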


AP   


LTO3 Problem

2009-03-18 Thread Yudi Darmadi

Hi,

I had a TSM Environment like this:
TSM Server V5.4.3.0 on Windows 2003 SP2
TSM Client V5.4.3.0 on Windows 2003 SP2
TSM Client V5.4.0.0 on Linux RH
TS3200 LTO3 Tape Drive.

The problem is that the LTO3 tape cartridges only fill to about 400GB:
Volume Name   Storage  Device  EstimatedPct   Volume
 Pool NameClass Name   Capacity   Util   Status
  ---  --  -  -  
M7L3  DAILY_POOL   LTOCLASS  442.2 G  100.0Full
M8L3  DAILY_POOL   LTOCLASS  428.3 G  100.0Full
M9L3  DAILY_POOL   LTOCLASS  423.3 G  100.0Full
M00010L3  DAILY_POOL   LTOCLASS  420.9 G  100.0Full
M00011L3  DAILY_POOL   LTOCLASS  431.7 G   99.4Full
M00012L3  DAILY_POOL   LTOCLASS  411.3 G  100.0Full
M00054L3  DAILY_POOL   LTOCLASS  762.9 G   53.9  Filling

I have tried relabeling all the volumes.  On first use, a volume's estimated
capacity is 762.9 GB, but when it reaches about 400GB it is suddenly marked
full, and the estimated capacity changes to about 400GB.
Can anyone help me?

Best Regards,


Yudi Darmadi
PT Niagaprima Paramitra
Jl. KH Ahmad Dahlan No.25  Kebayoran Baru, Jakarta Selatan 12130
Phone: 021-72799949; Fax: 021-72799950; Mobile: 081905530830
http://www.niagaprima.com


Re: LTO3 Problem

2009-03-18 Thread David Longo
You are probably writing data that is already compressed, or
data that just doesn't compress much.

One other thing: run 'query devclass ltoclass f=d'
and see if the format is ULTRIUM3C.  If there is no C on the end,
the drive is not compressing, and that is the problem.

David Longo
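
If the format does turn out to be missing the C, the fix would be along
these lines (assuming the device class really is named LTOCLASS, as shown):

   query devclass LTOCLASS f=d
   update devclass LTOCLASS format=ULTRIUM3C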

 Yudi Darmadi y...@niagaprima.com 3/18/2009 11:03 PM 
Hi,

I had a TSM Environment like this:
TSM Server V5.4.3.0 on Windows 2003 SP2
TSM Client V5.4.3.0 on Windows 2003 SP2
TSM Client V5.4.0.0 on Linux RH
TS3200 LTO3 Tape Drive.

The problem is that the LTO3 tape cartridges only fill to about 400GB:
Volume Name   Storage  Device  EstimatedPct   Volume
  Pool NameClass Name   Capacity   Util   Status
  ---  --  -  -  
M7L3  DAILY_POOL   LTOCLASS  442.2 G  100.0Full
M8L3  DAILY_POOL   LTOCLASS  428.3 G  100.0Full
M9L3  DAILY_POOL   LTOCLASS  423.3 G  100.0Full
M00010L3  DAILY_POOL   LTOCLASS  420.9 G  100.0Full
M00011L3  DAILY_POOL   LTOCLASS  431.7 G   99.4Full
M00012L3  DAILY_POOL   LTOCLASS  411.3 G  100.0Full
M00054L3  DAILY_POOL   LTOCLASS  762.9 G   53.9  Filling

I have tried relabeling all the volumes.  On first use, a volume's estimated
capacity is 762.9 GB, but when it reaches about 400GB it is suddenly marked
full, and the estimated capacity changes to about 400GB.
Can anyone help me?

Best Regards,


Yudi Darmadi
PT Niagaprima Paramitra
Jl. KH Ahmad Dahlan No.25  Kebayoran Baru, Jakarta Selatan 12130
Phone: 021-72799949; Fax: 021-72799950; Mobile: 081905530830
http://www.niagaprima.com


