Feature in TSM6.1...

2009-05-06 Thread Tandon K
Hi,

Does TSM 6.1 have any feature for implementing Policies in a Policy Domain
based on Data Type in Backup and Archive Copy Groups?

Regards,
Tandon.


Help with select statement

2009-05-06 Thread Moyer, Joni M
Hey Everyone,

I've been trying to get the proper output of archived data for a particular 
node, but I keep getting hung up on the archive_date parameter and was 
wondering if anyone could help out?

I am trying to get information on all archived data for the following:

node_name=fjsu101
owner=appb2b
filespace_id=11
Between the dates of 05/01/2009 & 05/02/2009

I was trying:  select * from archives where node_name='FJSU101' and 
owner='appb2b' and filespace_id=11 and date(archive_date) > 04/31/2009 as a 
starting point, but I'm still having no luck.  Any suggestions are greatly 
appreciated.

Joni Moyer
Storage Administrator III
(717)302-9966
joni.mo...@highmark.com





Re: Feature in TSM6.1...

2009-05-06 Thread Remco Post

Even with TSM 5.5 you can select the management class of your data from
the include list. Now, the hard part is determining what type of data
you are dealing with.
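For example, management class bindings in the client include/exclude list can
key off file patterns (a sketch; the management class names and extensions
here are made up):

   include /.../*        STANDARDMC
   include /.../*.log    LOGFILEMC
   include /.../*.dbf    DBFILEMC

The client processes the list bottom-up, so the more specific patterns placed
later win over the catch-all.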

On 6 May 2009, at 09:38, Tandon K wrote:


Hi,

Does TSM 6.1 have any feature for implementing Policies in a Policy
Domain based on Data Type in Backup and Archive Copy Groups?

Regards,
Tandon.


--
Met vriendelijke groeten/Kind regards,

Remco Post
r.p...@plcs.nl


TSM-OR 5.5.1.0 - 'Error -- Server Information'

2009-05-06 Thread Rainer Holzinger
Hi all,



TSM-OR 5.5.1.0 on Windows Server 2003 Standard Edition SP1

TSM server 5.5.2.0 on RedHat Enterprise Linux version 5.2



I have installed two new TSM servers, version 5.5.2.0 on RedHat
Enterprise Linux version 5.2, and added both to TSM-OR version
5.5.1.0 on a Windows Server 2003 Standard Edition SP1.



The TSM-OR standard daily report for one of the new servers is fine.

For the second one, the TSM-OR daily report is returned with the
error message 'Error -- Server Information'.

I have checked the implementation more than once to find any differences
between the two new TSM servers. Unfortunately I haven't found any.



Has anybody run into the same problem, and if so, how did you
solve it?



Thank you for any feedback.

Best regards,

Rainer



RHo-Consulting

Rainer Holzinger

Alter Bahnhof 13

D-93093 Donaustauf

phone:   +49 9403 969174

mobile:  +49 162 2807 888

email: 
rainer.holzin...@t-online.de





Re: LTO for long term archiving

2009-05-06 Thread Kauffman, Tom
Soapbox time.

The media is not important - any sane retention policy will require that the 
information be copied at least annually, with enough copies for redundancy.

BUT -

to be able to PROCESS the data 25 years from now imposes additional 
requirements.

First and foremost -- the data will NOT be in any format except unloaded 
flat-file: ASCII, UTF-8, or UTF-16 encoding. You will NOT be able to process 
proprietary data formats 25 years from now.

Look at the Domesday Book - the version written on parchment or vellum in 1086 
is still readable. The BBC digitized it in 1986 and found it almost impossible 
to find systems to read the digitized version in 2002 - 16 years later. You're 
trying for 25 years.

Tom Kauffman
NIBCO, Inc

> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf
> Of Evans, Bill
> Sent: Tuesday, May 05, 2009 5:34 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: LTO for long term archiving
>
> I truly doubt that archiving drives, servers and tapes for 25 years,
> each time the technology updates, will let you read the tapes, because
> the drive and server will probably not even boot up and run.
>
> You will have to update the data every two LTO cycles or so.  LTO will
> read two generations back and 25 years from now we will be on LTO14 or
> 15, so LTO4 is toast.   Or, more probably, we will be storing into some
> kind of flash drive at Petabyte capacities.
>
> I think that Blu-Ray DVD will meet your 25-year mark without having to
> retrieve and update to new media.  I know that my 1986 CDs (those not
> seriously scratched or warped from lying on the dash) still work on
> today's systems.  Properly stored DVDs would need to have players
> stored also, but these are mechanically simpler than LTO drives and
> servers and would most likely still run.  They are also much cheaper,
> so having a new DVD in storage every 5 years is no big expense.
>
> The bigger issue is where and how do you keep track of all of this?  I
> think TSM's HSM is probably capable, however, I'm not real comfortable
> recommending it.  We have had several years of problems running HSM on
> Solaris and have finally turned it off.
>
> What is needed is a good archiving tool that can keep an updated DB of
> content and storage location that users can browse.
>
> We recently restored a PowerPoint file written by Office version (?)
> on
> an OS 9 Mac.  This could NOT be read by Office XP, 2003, 2007 (PC) or
> Office 2004 or 2008 (Mac).  We had to find an old Mac OS/9 that still
> had a copy of Office 2000, read it, write it back to the 2000 version
> .ppt file before any 2004-2009 software could read it.  If it had been
> 18 years instead of 9 years, then we never would have been able to read
> it at all, that old OS 9 Mac would never have been saved.
>
> This will happen more and more as our programs become more complex and
> require significant changes in the file formats.  So the real problem
> is
> not just how to archive the data for 25 years, it's how to archive the
> applications for 25 years so we can access that data!
>
> Actually, stone tablets are, so far, the best archive media...
>
>
> Thanks,
>
> Bill Evans
> Research Computing Support
> FRED HUTCHINSON CANCER RESEARCH CENTER
> 206.667.4194  ~  bev...@fhcrc.org
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf
> Of
> Kelly Lipp
> Sent: Tuesday, May 05, 2009 1:42 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] LTO for long term archiving
>
> I like the implication, but I'm pretty sure somebody actually thought
> being able to read the information would have been a good idea.
>
> Kelly Lipp
> CTO
> STORServer, Inc.
> 485-B Elkton Drive
> Colorado Springs, CO 80907
> 719-266-8777 x7105
> www.storserver.com
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf
> Of
> Remco Post
> Sent: Tuesday, May 05, 2009 2:39 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] LTO for long term archiving
>
> I do agree, having the tapedrives around _could_ be important. I know
> of at least one environment that was able to produce the media that
> stores the data, but no drives. But then again, they only had to
> retain the data, not the infra to access it.
>
> On May 5, 2009, at 22:35 , Kelly Lipp wrote:
>
> > To me the problem is having the drives around and more importantly,
> > the interfaces to the drives.  I think that probably the best bet is
> > to plan on "archiving" a TSM server with a drive along with the
> > media periodically.  Snap off the last database backup, restore it
> > on the to be archived server (a good test in itself), and store the
> > whole kit together.  If one needs to retrieve an archive, fire up
> > the archived server, query the database to determine what tape is
> > required, get it, retrieve the data and put the whole mess away.
> >
> > The other way to do this would be to migr

TSM and Image Backups

2009-05-06 Thread Tim Brown
 I have been testing image backups with TSM, a feature that we never
 took advantage of. I can see how it can be beneficial, especially for
 recovery; it takes too long to recover whenever I have had to. We also
 should look at collocation.

 Some questions about image backups

  1. Should it be image for all nodes or just a subset of "Critical" nodes?

  2. How often does one image backup a node (weekly, monthly, ...)?

  3. Does one image all drives (OS only, apps only, both)?

  4. How much additional space is required on node drives to support imaging?
 Is there a ballpark figure based on drive utilization?

  5. Any other issues/factors would be appreciated.

Tim Brown
Systems Specialist - Project Leader
Central Hudson Gas & Electric
284 South Ave
Poughkeepsie, NY 12601
Email: tbr...@cenhud.com 
Phone: 845-486-5643
Fax: 845-486-5921
Cell: 845-235-4255




Re: Failed Admin Center upgrade on AIX 5300 - 08-05-846

2009-05-06 Thread James Choate
For those of you interested:

If you upgrade your ISC/AC, it is important to remember the username/password 
with which the ISC/AC was installed.  If you forget your username/password 
combo [usually (iscadmin/iscpass)], you could run into problems where the 
upgrade does not complete successfully.

If by some chance you change your password combination, there is a proper way 
to do this:
[IC55394: FULL STEPS FOR CHANGING THE ISC INSTALLATION ID PASSWORD ARE NOT 
CLEAR AND CAN CAUSE ISC/ADMIN CENTER UPGRADE FAILURES]

http://www-01.ibm.com/support/docview.wss?uid=swg1IC55394

In the end my upgrade from AC 5.4.0.0 (ISC already at 6.0.1.1) was unsuccessful.

The end solution was to uninstall the ISC and perform a fresh install of the 
ISC and the AC.

In case you wanted to know.


Re: LTO for long term archiving

2009-05-06 Thread Daniel Sparrman
Like you've correctly stated below, hardware is the least of the problems
in this case.

Making the data readable after 25 years is the issue that needs to be
resolved.

There are several ways of going at this, but TSM on its own is not the
solution for retention over decades, since TSM itself has no way of storing
and converting the information so that it is readable several decades into
the future.

We have a few customers that have had these issues, and in their cases
we're talking 50-100 years (depending on information and regulations; some
state agencies are required to keep information for well over 25 years, and
so are insurance companies).

IBM has several tools to handle this. You either go the packaged way with
IBM DIAS (http://www-05.ibm.com/nl/dias/), which uses a Virtual PC
technology amongst other technologies to make sure the information is
going to be readable way into the future. DIAS is, like option number 2
below, based on Content Manager but comes in a pre-packaged configuration.

Or, you go the non-packaged way and base it on Content Manager. In this
case, you will need to build the routines for viewing and converting the
information yourself.

I'm sure you could also do all of this by yourself, but a long time ago I
learned that in these cases, building the app all by yourself is not the
way to go. I'd bet my last cent that 25 years from now, when someone tries
to read a document out of that home-built app, they're not going to be able
to read it, and there is probably not going to be anyone around that was
there when the app was written ;)

Best Regards

Daniel Sparrman



From:
"Kauffman, Tom" 
To:
ADSM-L@VM.MARIST.EDU
Date:
2009-05-06 15:37
Subject:
Re: LTO for long term archiving
Sent by:
"ADSM: Dist Stor Manager" 



Soapbox time.

The media is not important - any sane retention policy will require that
the information be copied at least annually, with enough copies for
redundancy.

[...]

Re: Help with select statement

2009-05-06 Thread Thomas Denier
-Joni Moyer wrote: -

>I was trying:  select * from archives where node_name='FJSU101' and
>owner='appb2b' and filespace_id=11 and date(archive_date) >
>04/31/2009 as a starting point, but I'm still having no luck.  Any
>suggestions are greatly appreciated

I would expect the output from the 'date' function to look like
'2009-04-31'. When I compare a database field to a fixed time
stamp, I always enclose the time stamp in single quotation marks.
I would expect 04/31/2009 without enclosing quotation marks to
be interpreted as dividing 4 by 31 and dividing the result by
2009.
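
Putting those fixes together, a corrected statement might look like this (an
untested sketch; it assumes the intent was archives created on May 1-2, 2009,
since 04/31/2009 is not a valid calendar date):

select * from archives where node_name='FJSU101' and owner='appb2b' and
filespace_id=11 and date(archive_date) between '2009-05-01' and '2009-05-02'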


Re: Tough questions for Netapp representatives

2009-05-06 Thread Shawn Drew
One of their backup products is "Snapvault"

Frankly, NDMP is horrible. It is such a pain to perform restores. Also,
since you have to do periodic fulls and differential incrementals, it
generates so much waste data that we can't store all the tapes in the
library, so we have to run around for tapes.   It really seems that it
is targeted towards the longer-term archive side of the house.
This is really the fault of NDMP, not TSM.  Maybe if the TOC was stored in
the DB, it might be marginally better, but probably not worth it.

On the other hand, Snapvault is made for the daily, short-term backup. If
you can afford it, lie, cheat and steal to get this!   It basically lets
you do the normal type of netapp snapshots, but lets you store them on a
separate file server.  (i.e. cheaper, lower-performance SATA disk)
Although we haven't enabled it yet, they also let you do their form of
deduplication on the snapvault side while leaving the source untouched.
The problem with snapvault is that it doesn't really work with tape.  You
can hack up some scripts to manage this sort of thing, but it's ugly.
(Basically a snapmirror to tape)
I haven't tried their VTLs or any of that.  Just an opinion on Snapvault.

I think they also have a "Vseries" product that lets you use your own SAN
storage, but use the Netapp front end.

Regards,
Shawn

Shawn Drew





Internet
mishagr...@gmail.com

Sent by: ADSM-L@VM.MARIST.EDU
05/06/2009 02:35 AM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
[ADSM-L] Tough questions for Netapp representatives






I've just been notified that Netapp sales people will pay us a visit
in about a week from now and try to talk us into buying their backup
solutions. What tricky questions should I be asking them? I don't
exactly know what products they will  be talking about, but I guess it
will be about their VTL and/or FAS systems.

Please email me in private. I promise to post a summary of all
interesting questions/ideas that I will receive after my meeting.
 --
Warm regards,
Michael Green





TSM v6 How to SELECT output sequence of actlog (summary) in true date time sequence?

2009-05-06 Thread Cowen, Richard
I don't see an exposed unique key on actlog/summary/events.  I would
like to get it in the same order TSM v3/4/5 did by default - in the
sequence the rows were posted.  In particular, I would like to see the
start session at the top and the end session at the bottom.  I realize
the "events" can be posted in slightly different sequence, due to
multi-threading/multi-tasking.  Maybe a unique key column, or use that
.00 at the end of the date_time to actually mean something.  Or
maybe I am missing something in the manuals.

Thanks for any pointers.

Select w/o order by:
dsmadmc -id=rcowen -pa=pxx -comma select * from actlog where
DATE_TIME between '05/05/2009 07:00:00' and '05/06/2009 07:00:00'  >
f:\actlog.csv

F:\>find "2650" actlog.csv

-- ACTLOG.CSV
2009-05-05 13:48:33.00,4958,I,ANE4958I Total number of objects
updated:  0 (SESSION:
2650),CLIENT,LAB-1750-151,,ARCHIVE_HOURLY,STANDARD,2650,LAB-1750-151,265
0,
2009-05-05 13:48:33.00,4960,I,ANE4960I Total number of objects
rebound:  0 (SESSION:
2650),CLIENT,LAB-1750-151,,ARCHIVE_HOURLY,STANDARD,2650,LAB-1750-151,265
0,
2009-05-05 13:48:33.00,4957,I,ANE4957I Total number of objects
deleted:  0 (SESSION:
2650),CLIENT,LAB-1750-151,,ARCHIVE_HOURLY,STANDARD,2650,LAB-1750-151,265
0,
2009-05-05 13:48:33.00,4970,I,ANE4970I Total number of objects
expired:  0 (SESSION:
2650),CLIENT,LAB-1750-151,,ARCHIVE_HOURLY,STANDARD,2650,LAB-1750-151,265
0,
2009-05-05 13:48:33.00,403,I,ANR0403I Session 2650 ended for node
LAB-1750-151 (WinNT). (SESSION: 2650),SERVER,,,2650,
2009-05-05 13:48:33.00,4953,I,ANE4953I Total number of objects
archived:81 (SESSION:
2650),CLIENT,LAB-1750-151,,ARCHIVE_HOURLY,STANDARD,2650,LAB-1750-151,265
0,
2009-05-05 13:48:29.00,406,I,ANR0406I Session 2650 started for node
LAB-1750-151 (WinNT) (Tcp/Ip LAB-1750-151(4007)). (SESSION:
2650),SERVER,,,2650,
2009-05-05 13:48:33.00,4952,I,ANE4952I Total number of objects
inspected:   81 (SESSION:
2650),CLIENT,LAB-1750-151,,ARCHIVE_HOURLY,STANDARD,2650,LAB-1750-151,265
0,
2009-05-05 13:48:33.00,4959,I,ANE4959I Total number of objects
failed:   0 (SESSION:
2650),CLIENT,LAB-1750-151,,ARCHIVE_HOURLY,STANDARD,2650,LAB-1750-151,265
0,
2009-05-05 13:48:33.00,4961,I,ANE4961I Total number of bytes
transferred: 17.50 MB (SESSION:
2650),CLIENT,LAB-1750-151,,ARCHIVE_HOURLY,STANDARD,2650,LAB-1750-151,265
0,
2009-05-05 13:48:33.00,4963,I,ANE4963I Data transfer time:
0.92 sec (SESSION:
2650),CLIENT,LAB-1750-151,,ARCHIVE_HOURLY,STANDARD,2650,LAB-1750-151,265
0,
2009-05-05 13:48:33.00,4966,I,"ANE4966I Network data transfer rate:
19,419.35 KB/sec (SESSION:
2650)",CLIENT,LAB-1750-151,,ARCHIVE_HOURLY,STANDARD,2650,LAB-1750-151,26
50,
2009-05-05 13:48:33.00,4967,I,"ANE4967I Aggregate data transfer
rate:  5,911.63 KB/sec (SESSION:
2650)",CLIENT,LAB-1750-151,,ARCHIVE_HOURLY,STANDARD,2650,LAB-1750-151,26
50,
2009-05-05 13:48:33.00,4968,I,ANE4968I Objects compressed by:
0% (SESSION:
2650),CLIENT,LAB-1750-151,,ARCHIVE_HOURLY,STANDARD,2650,LAB-1750-151,265
0,
2009-05-05 13:48:33.00,4964,I,ANE4964I Elapsed processing time:
00:00:03 (SESSION:
2650),CLIENT,LAB-1750-151,,ARCHIVE_HOURLY,STANDARD,2650,LAB-1750-151,265
0,

2009-05-05 13:48:33.00,2507,I,ANR2507I Schedule ARCHIVE_HOURLY for
domain STANDARD started at 05/05/2009 13:48:18 for node LAB-1750-151
completed successfully at 05/05/2009 13:48:33. (SESSION:
2650),SERVER,,,2650,


Select with order by:

dsmadmc -id=rcowen -pa=xx -comma select * from actlog where
DATE_TIME between '05/05/2009 07:00:00' and '05/06/2009 07:00:00' order
by date_time > f:\actlog.csv

F:\>find "2650" actlog.csv

-- ACTLOG.CSV
2009-05-05 13:48:29.00,406,I,ANR0406I Session 2650 started for node
LAB-1750-151 (WinNT) (Tcp/Ip LAB-1750-151(4007)). (SESSION:
2650),SERVER,,,2650,
2009-05-05 13:48:33.00,4958,I,ANE4958I Total number of objects
updated:  0 (SESSION:
2650),CLIENT,LAB-1750-151,,ARCHIVE_HOURLY,STANDARD,2650,LAB-1750-151,265
0,
2009-05-05 13:48:33.00,4960,I,ANE4960I Total number of objects
rebound:  0 (SESSION:
2650),CLIENT,LAB-1750-151,,ARCHIVE_HOURLY,STANDARD,2650,LAB-1750-151,265
0,
2009-05-05 13:48:33.00,4957,I,ANE4957I Total number of objects
deleted:  0 (SESSION:
2650),CLIENT,LAB-1750-151,,ARCHIVE_HOURLY,STANDARD,2650,LAB-1750-151,265
0,
2009-05-05 13:48:33.00,4970,I,ANE4970I Total number of objects
expired:  0 (SESSION:
2650),CLIENT,LAB-1750-151,,ARCHIVE_HOURLY,STANDARD,2650,LAB-1750-151,265
0,
2009-05-05 13:48:33.00,403,I,ANR0403I Session 2650 ended for node
LAB-1750-151 (WinNT). (SESSION: 2650),SERVER,,,2650,
2009-05-05 13:48:33.00,4953,I,ANE4953I Total number of objects
archived:81 (SESSION:
2650),CLIENT,LAB-1750-151,,ARCHIVE_HOURLY,STANDARD,2650,LAB-1750-151,265
0,
2009-05-05 13:48:33.00,4952,I,ANE4952I Total number of objects
inspected:   81 (SESSION:
2650),CLIENT

Re: how to view the content of a database (db)

2009-05-06 Thread Zoltan Forray/AC/VCU
TSMManager uses the following select to get the list of DBBackup volumes:

select
date_time,volume_name,type,backup_series,volume_seq,devclass,location from
volhistory where backup_operation<>99

and results in:

11:41:59 AM   WIND : select
date_time,volume_name,type,backup_series,volume_seq,devclass,location from
volhistory where backup_operation<>99

DATE_TIME: 2009-05-05 11:20:51.00
  VOLUME_NAME: 085433
 TYPE: BACKUPFULL
BACKUP_SERIES: 21
   VOLUME_SEQ: 1
 DEVCLASS: 3592-E05
 LOCATION:

DATE_TIME: 2009-05-06 11:00:17.00
  VOLUME_NAME: 085244
 TYPE: BACKUPFULL
BACKUP_SERIES: 22
   VOLUME_SEQ: 1
 DEVCLASS: 3592-E05
 LOCATION:



From:
Joerg Pohlmann 
To:
ADSM-L@VM.MARIST.EDU
Date:
05/05/2009 06:24 PM
Subject:
Re: [ADSM-L] how to view the content of a database (db)
Sent by:
"ADSM: Dist Stor Manager" 



I assume you meant a database backup volume. For TSM 6.1, I do not know of
any way to display the contents of a database backup volume as has existed
in pre-TSM 6.1 (see the TSM 5.5 Admin Ref book under "Server Utilities").
Hence the comments in the documentation that the volume history is crucial
for any database restore. A reminder - a prepare command saves the volume
history in the disaster recovery plan file.
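
For instance (a sketch; PREPARE belongs to the Disaster Recovery Manager
feature, and the plan file prefix here is a made-up path):

prepare planprefix=/drm/plans/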

If you issue

dsmserv display dbb devc=sata vol=41127511.dbv

TSM 6.1 gives

ANR4855I Command DISPLAY DBBACKUPVOLUMES is no longer supported and there
is no direct replacement for this capability.

Joerg Pohlmann
250-245-9863


Re: Tough questions for Netapp representatives

2009-05-06 Thread Paul Zarnowski

Are you referring to the older version of NDMP, or the "Filer-to-Server"
capability that was added with TSM 5.4?  This allows you (I believe) to
store the NDMP data in normal TSM storage pools, rather than sending
directly to a NAS-attached tape library.

At 11:38 AM 5/6/2009, Shawn Drew wrote:

Also,
since you have to do periodic fulls and differential incrementals, it
generates so much waste data that we can't store all the tapes in the
library, so we have to run around for tapes.



--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


Re: Tough questions for Netapp representatives

2009-05-06 Thread Remco Post

On 6 May 2009, at 08:35, Michael Green wrote:


I've just been notified that Netapp sales people will pay us a visit
in about a week from now and try to talk us into buying their backup
solutions. What tricky questions should I be asking them? I don't
exactly know what products they will  be talking about, but I guess it
will be about their VTL and/or FAS systems.



Well, NetApp is a software company. The hardware is not the most
expensive part of the solution. Each and every feature that you need
is licensed. The only feature that is free is iSCSI. So the trick
question is: how much will each feature cost? And remember, the
features are more expensive as the filer becomes bigger. Also, for
some features, you need to buy some other features as well.

SnapMirror is a great solution, but disks are still disks, so they eat
power even when you don't need the data. Tape is much greener.

I do believe that there could be a place for NetApp equipment in a
well designed environment. But start thinking of what you want to
achieve, and then find the best solution for your situation. NetApp
might provide a suitable solution, but they might just as well not...


Please email me in private. I promise to post a summary of all
interesting questions/ideas that I will receive after my meeting.
--
Warm regards,
Michael Green


--
Met vriendelijke groeten/Kind regards,

Remco Post
r.p...@plcs.nl


Register Node or create client schedule fails using ISC

2009-05-06 Thread James Choate
I am running TSM 5.5.2.1 on AIX 5.3 TL8

When I  register a new node, or create a client schedule using the ISC logged 
in as iscadmin, I get the following error from the ISC.
Create Client Node

  Exception Type PortletException
Message com.tivoli.dsm.snapin.DsmLaunchedPortlet:base.handlePSTrigger
Exception java.lang.NullPointerException
at 
com.tivoli.dsm.adminapi.api.DsmAdminAPI.SearchOpGroup(DsmAdminAPI.java:1855)
at 
com.tivoli.dsm.adminapi.api.DsmAdminAPI.queryOperation(DsmAdminAPI.java:778)
at 
com.tivoli.dsm.snapin.ps.DsmGenericValidator.GetExecuteOpBean(DsmGenericValidator.java:1630)
at 
com.tivoli.dsm.snapin.ps.DsmGenericValidator.ParseNameToken(DsmGenericValidator.java:1135)
at 
com.tivoli.dsm.snapin.ps.DsmGenericValidator.validate(DsmGenericValidator.java:545)
at com.ibm.psw.wcl.core.form.WForm.performFormValidation(Unknown Source)
at com.ibm.psw.wcl.core.form.WForm.handleValidate(Unknown Source)
at com.ibm.psw.wcl.components.wizard.WWizardStep.validate(Unknown 
Source)
at 
com.ibm.psw.wcl.components.wizard.WWizard$EWizardLayout.commandPerformed(Unknown
 Source)
at com.ibm.psw.wcl.core.CommandHandler.handleCommand(Unknown Source)
at 
com.ibm.psw.wcl.core.form.AWInputComponent$EInputComponentCommandListener.commandPerformed(Unknown
 Source)
at com.ibm.psw.wcl.core.form.WForm.callInputCommandListeners(Unknown 
Source)
at com.ibm.psw.wcl.core.form.WForm.handleCommand(Unknown Source)
at com.ibm.psw.wcl.core.form.WForm$EFormCallback.handleTrigger(Unknown 
Source)
at com.ibm.psw.wcl.core.trigger.Trigger.process(Unknown Source)
at com.ibm.psw.wcl.core.trigger.TriggerManager.processTrigger(Unknown 
Source)
at 
com.ibm.psw.wcl.portlet.legacy.WclPortletTriggerManager.handleRequest(Unknown 
Source)
at 
com.ibm.psw.wcl.portlet.legacy.WclPortletFacade.handleRequest(Unknown Source)
at 
com.ibm.psw.wcl.portlet.legacy.WclPortletFacade.handleRequest(Unknown Source)
at 
com.tivoli.dsm.snapin.DsmBasePortlet.handlePSTrigger(DsmBasePortlet.java:791)
at 
com.tivoli.dsm.snapin.DsmLaunchedPortlet.actionPerformedImpl(DsmLaunchedPortlet.java:270)
at 
com.tivoli.dsm.snapin.DsmBasePortlet.actionPerformed(DsmBasePortlet.java:503)
at 
com.ibm.wps.pe.pc.legacy.SPIPortletInterceptorImpl.handleEvents(SPIPortletInterceptorImpl.java(Compiled
 Code))
at 
com.ibm.wps.pe.pc.legacy.invoker.impl.PortletDelegateImpl._dispatch(PortletDelegateImpl.java(Compiled
 Code))
at 
com.ibm.wps.pe.pc.legacy.invoker.impl.PortletDelegateImpl.dispatch(PortletDelegateImpl.java(Compiled
 Code))
at org.apache.jetspeed.portlet.Portlet.doPost(Portlet.java:512)
at javax.servlet.http.HttpServlet.service(HttpServlet.java(Compiled 
Code))
at 
com.ibm.wps.pe.pc.legacy.cache.CacheablePortlet.service(CacheablePortlet.java(Compiled
 Code))
at javax.servlet.http.HttpServlet.service(HttpServlet.java(Compiled 
Code))
at org.apache.jetspeed.portlet.Portlet.service(Portlet.java(Compiled 
Code))
at 
com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java(Compiled
 Code))
at 
com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java(Compiled
 Code))
at 
com.ibm.ws.webcontainer.webapp.WebAppRequestDispatcher.include(WebAppRequestDispatcher.java(Compiled
 Code))
at 
com.ibm.wps.pe.pc.legacy.invoker.impl.PortletInvokerImpl.callMethod(PortletInvokerImpl.java(Compiled
 Code))
at 
com.ibm.wps.pe.pc.legacy.invoker.impl.PortletInvokerImpl.action(PortletInvokerImpl.java(Compiled
 Code))
at 
com.ibm.wps.pe.pc.legacy.PortletContainerImpl.callPortletMethod(PortletContainerImpl.java(Compiled
 Code))
at 
com.ibm.wps.pe.pc.legacy.EventEnvironmentImpl.includePortlet(EventEnvironmentImpl.java:177)
at 
com.ibm.wps.pe.pc.legacy.event.ActionEventImpl.prepare(ActionEventImpl.java:201)
at 
com.ibm.wps.pe.pc.legacy.event.EventQueueManager.processEventLoop(EventQueueManager.java:89)
at 
com.ibm.wps.pe.pc.legacy.PortletContainerImpl.performEvents(PortletContainerImpl.java:221)
at 
com.ibm.wps.pe.pc.PortletContainerImpl.performEvents(PortletContainerImpl.java:227)
at 
com.ibm.wps.engine.phases.WPActionPhase.processPortlets(WPActionPhase.java:947)
at 
com.ibm.wps.engine.phases.WPActionPhase.execute(WPActionPhase.java:489)
at 
com.ibm.wps.state.phases.AbstractActionPhase.next(AbstractActionPhase.java:130)
at 
com.ibm.wps.engine.phases.config.CfgPhase$Action.next(CfgPhase.java:303)
at com.ibm.wps.engine.Servlet.callPortal(Servlet.java:710)
at com.ibm.wps.engine.Servlet.doGet(Servlet.java:567)
at com.ibm.wps.engine.Servlet.doPost(Servlet.java:736)
at javax.servlet.http.HttpServlet.service(HttpServlet.java(Compiled 
Code))
at javax.servlet.http.HttpServlet.service(H

Re: Tough questions for Netapp representatives

2009-05-06 Thread Shawn Drew
NDMP in general.  It's not the storage pool situation that bothers me.  I
actually prefer the native, LAN-free arrangement.  It is much faster than
going over IP.

It's the fact that you have to do the old Unix-style level-0, level-1
backups compared to the typical TSM incremental-forever.
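
For reference, the server-side NDMP operations described here are driven with
BACKUP NODE (a sketch; the node and file system names are made up):

backup node nas1 /vol/vol1 mode=full toc=yes
backup node nas1 /vol/vol1 mode=differential toc=yes

There is no incremental-forever mode; each differential is taken against the
last full.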

Regards,
Shawn

Shawn Drew





Internet
p...@cornell.edu

Sent by: ADSM-L@VM.MARIST.EDU
05/06/2009 11:58 AM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
Re: [ADSM-L] Tough questions for Netapp representatives






Are you referring to the older version of NDMP, or the "Filer-to-Server"
capability that was added with TSM 5.4?  This allows you (I believe) to
store the NDMP data in normal TSM storage pools, rather than sending
directly to a NAS-attached tape library.

At 11:38 AM 5/6/2009, Shawn Drew wrote:
>Also,
>since you have to do periodic fulls and differential incrementals, it
>generates so much waste data that we can't store all the tapes in the
>library, so we have to run around for tapes.


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu





Re: how to view the content of a database (db)

2009-05-06 Thread Joerg Pohlmann
Zoltan, the select statement gives you the entries from the volume history
table, so the TSM server is operational. Your sample output is from a
pre-TSM 6.1 server. For a TSM 6.1 server, here is an example of a database
backup volume history entry:

tsm: VISTA1>select * from volhistory where type='BACKUPFULL'

 DATE_TIME: 2009-04-30 14:38:29.00
UNIQUE: 0
  TYPE: BACKUPFULL
 BACKUP_SERIES: 11
  BACKUP_OPERATION: 0
VOLUME_SEQ: 1
  DEVCLASS: SATA
   VOLUME_NAME: C:\TSMINST1\TSMDATA\41127511.DBV
  LOCATION: VAULT
   COMMAND:
 DB2_OBJID: 3083
   DB2_HOMEPOS: 0
   DB2_HLA: \NODE\
   DB2_LLA: FULL_BACKUP.20090430143829.1.
DB2_TOTALDATABYTES: 402755595
 DB2_TOTALLOGBYTES: 33583115
   DB2_LOGBLOCKNUM: -1


However, the original question related to the contents of a database
volume. In pre-TSM 6.1, if you lost the volume history and had only a
set of tapes without knowing which tape to start restoring the database
from, you had the ability to determine which database backup (series)
was the most recent one by running dsmserv display dbb against all tapes.
Admittedly a tedious process, but you could determine the most current
database backup volume from the resulting output (see the Admin Ref book
for TSM 5.5 for sample output). With TSM 6.1 you cannot do that, and you
have lost the entire TSM server's data if you do not have the volume
history and do not know which tape contains a database backup, let alone
the most recent one. So, for TSM 6.1 you might want to specify more than
one location (different file systems on different RAID arrays) for the
volume history in dsmserv.opt, and treat the Recovery Plan File like
"gold", that is, keep perhaps a couple of copies at secure locations, or
better on remote file systems (NFS/CIFS mounted/mapped), and (even better)
another RPF on a virtual volume on another, remote TSM server.
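
For illustration, a dsmserv.opt fragment along those lines might look like
this (a sketch; the file names are made up, and both options can be specified
more than once so the server maintains multiple copies):

VOLUMEHISTORY  /tsmconfig/volhist.out
VOLUMEHISTORY  /othermount/tsmconfig/volhist.out
DEVCONFIG      /tsmconfig/devconf.out
DEVCONFIG      /othermount/tsmconfig/devconf.out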

Joerg Pohlmann
250-245-9863






Re: how to view the content of a database (db)

2009-05-06 Thread Zoltan Forray/AC/VCU
While I understand your desired results, I disagree about this server's
pedigree. This is from my sole 6.1.1.0 server - not a 5.x server.  Here
are my log entries.

11:41:59 AM   WIND : select
date_time,volume_name,type,backup_series,volume_seq,devclass,location from
volhistory where backup_operation<>99

DATE_TIME: 2009-05-05 11:20:51.00
  VOLUME_NAME: 085433
 TYPE: BACKUPFULL
BACKUP_SERIES: 21
   VOLUME_SEQ: 1
 DEVCLASS: 3592-E05
 LOCATION:

DATE_TIME: 2009-05-06 11:00:17.00
  VOLUME_NAME: 085244
 TYPE: BACKUPFULL
BACKUP_SERIES: 22
   VOLUME_SEQ: 1
 DEVCLASS: 3592-E05
 LOCATION:

1:42:55 PM   WIND : q status
Storage Management Server for Linux/x86_64 - Version 6, Release 1, Level
1.0


   Server Name: WIND
Server host name or IP address:
 Server TCP/IP port number: 1500
   Crossdefine: Off
   Server Password Set: Yes
 Server Installation Date/Time: 03/25/2009 10:40:02
  Server Restart Date/Time: 05/06/2009 10:24:03
Authentication: On
Password Expiration Period: 90 Day(s)
 Invalid Sign-on Attempt Limit: 0
   Minimum Password Length: 0
  Registration: Closed
Subfile Backup: No
  Availability: Enabled
Accounting: Off
Activity Log Retention: 90 Day(s)
Activity Log Number of Records: 183566
 Activity Log Size: 5 M
 Activity Summary Retention Period: 90 Day(s)
  License Audit Period: 30 Day(s)
Last License Audit: 04/25/2009 09:16:21
 Server License Compliance: Valid
 Central Scheduler: Active
  Maximum Sessions: 25
Maximum Scheduled Sessions: 12
 Event Record Retention Period: 90 Day(s)
Client Action Duration: 5 Day(s)
 Schedule Randomization Percentage: 25
 Query Schedule Period: Client
   Maximum Command Retries: Client
  Retry Period: Client
  Scheduling Modes: Any
  Active Receivers: CONSOLE ACTLOG
Configuration manager?: Off
  Refresh interval: 60
Last refresh date/time:
 Context Messaging: Off
Table of Contents (TOC) Load Retention: 120 Minute(s)
Machine Globally Unique ID:
96.3c.1f.a0.19.4c.11.de.a1.e6.00.15.17-
 .a5.63.ea
  Archive Retention Protection: Off
   Database Reporting Mode: Partial
  Database Directories: /tsmdb
 Total Size of File System(MB): 399,746.57
Space Used on File System (MB): 42,825.29
 Free Space Available (MB): 356,857.28
   Encryption Strength: AES

Your query produces different results (from the same server):

1:46:05 PM   WIND : select * from volhistory where type='BACKUPFULL'

 DATE_TIME: 2009-05-05 11:20:51.00
UNIQUE: 0
  TYPE: BACKUPFULL
 BACKUP_SERIES: 21
  BACKUP_OPERATION: 0
VOLUME_SEQ: 1
  DEVCLASS: 3592-E05
   VOLUME_NAME: 085433
  LOCATION:
   COMMAND:
 DB2_OBJID: 369665
   DB2_HOMEPOS: 4
   DB2_HLA: /NODE/
   DB2_LLA: FULL_BACKUP.20090505112051.1
DB2_TOTALDATABYTES: 1040445451
 DB2_TOTALLOGBYTES: 402771979
   DB2_LOGBLOCKNUM: 3974

 DATE_TIME: 2009-05-06 11:00:17.00
UNIQUE: 0
  TYPE: BACKUPFULL
 BACKUP_SERIES: 22
  BACKUP_OPERATION: 0
VOLUME_SEQ: 1
  DEVCLASS: 3592-E05
   VOLUME_NAME: 085244
  LOCATION:
   COMMAND:
 DB2_OBJID: 379135
   DB2_HOMEPOS: 4
   DB2_HLA: /NODE/
   DB2_LLA: FULL_BACKUP.20090506110017.1
DB2_TOTALDATABYTES: 1040445451
 DB2_TOTALLOGBYTES: 167833611
   DB2_LOGBLOCKNUM: 3974



From:
Joerg Pohlmann 
To:
ADSM-L@VM.MARIST.EDU
Date:
05/06/2009 01:42 PM
Subject:
Re: [ADSM-L] how to view the content of a database (db)
Sent by:
"ADSM: Dist Stor Manager" 



Zoltan, the select statement gives you the entries from the volume history
table, so the TSM server is operational. Your sample output is from a
pre-TSM 6.1 server. For a TSM 6.1 server, here is an example of a database
backup volume history entry:

tsm: VISTA1>select * from volhistory where type='BACKUPFULL'

 DATE_TIME: 2009-04-30 14:38:29.00
UNIQUE: 0
  TYPE: BACKUPFULL
 BACKUP_SERIES: 11
  BACKUP_OPERATION: 0
VOLUME_SEQ: 1
  DEVCLASS: SATA
   VOLUME_NAME: C:\TSMINST1\TSMDATA\41127511.DBV
  LOCATION: VAULT
   COMMAND:
 DB2_OBJID: 3083
   DB2_HOMEPOS: 0
   DB2_HLA: \NODE\
   DB2_LLA: FULL_BACKUP.20090430143829.1.
DB2

Re: how to view the content of a database (db)

2009-05-06 Thread Desalegne Guangul
Hi Guys,

I can only run the dsmserv command from the Windows command line, as you
see below.  Here is the output of the command ... Does anybody know
what this means?
By the way, I'm running TSM V5.3.3.1 on a Windows 2000 server.


c:\Program Files\tivoli\tsm\server>dsmserv.exe display dbbackupvolume
devc=file
vol=d8820567.dbv
ANR0900I Processing options file c:\program
files\tivoli\tsm\server1\dsmserv.o-
pt.
ANR7800I DSMSERV generated at 10:02:56 on May 10 2006.

Tivoli Storage Manager for Windows
Version 5, Release 3, Level 3.1

Licensed Materials - Property of IBM

(C) Copyright IBM Corporation 1990, 2006.
All rights reserved.
U.S. Government Users Restricted Rights - Use, duplication or disclosure
restricted by GSA ADP Schedule Contract with IBM Corporation.

ANR4726I The ICC support module has been loaded.
ANR8208W TCP/IP driver unable to initialize due to error in using Port
1500,
reason code 10048.
ANR8340I FILE volume H:\TSMDATA\SERVER1\BACKUP\D8820567.DBV mounted.
ANR1363I Input volume H:\TSMDATA\SERVER1\BACKUP\D8820567.DBV opened
(sequence
number 0).
Entering exception handler.
Leaving exception handler.

Thanks,

Desalegne Guangul
Regional Occupational Health
1800 Harrison St. 21st Floor
510-625-3038 or tieline 8/428-3038











Remco Post 
Sent by: "ADSM: Dist Stor Manager" 
05/05/09 03:38 PM
Please respond to
"ADSM: Dist Stor Manager" 


To
ADSM-L@VM.MARIST.EDU
cc

Subject
Re: how to view the content of a database (db)






In TSM 6.1, the volhist and devconf files are mandatory in database
restore actions; in TSM 5 you indeed had access to dsmserv display
dbb. Even better, at least on Linux it's impossible to make a database
backup if you haven't configured devconf and volhist backup files.


On May 6, 2009, at 0:23 , Joerg Pohlmann wrote:

> I assume you meant a database backup volume. For TSM 6.1, I do not
> know of
> any way to display the contents of a database backup volume as has
> existed
> in pre-TSM 6.1 (see the TSM 5.5 Admin Ref book under "Server
> Utilities").
> Hence the comments in the documentation that the volume history is
> crucial
> for any database restore. A reminder - a prepare command saves the
> volume
> history in the disaster recovery plan file.
>
> If you issue
>
> dsmserv display dbb devc=sata vol=41127511.dbv
>
> TSM 6.1 gives
>
> ANR4855I Command DISPLAY DBBACKUPVOLUMES is no longer supported and
> there
> is no direct replacement for this capability.
>
> Joerg Pohlmann
> 250-245-9863

--
Met vriendelijke groeten/Kind regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622


Re: LTO for long term archiving

2009-05-06 Thread Kelly Lipp
Hear, hear.  I'll reiterate: the data must be stored in the lowest common 
denominator format.  No databases, no strange and wondrous binary formats.  
Simple text is the best.  That's why microfiche is still used!

I would be very concerned about medical data: it is in so many different 
formats, and one has to wonder if our elected officials and their bureaucratic 
minions in the various regulatory agencies are aware of these facts and the 
problem they are going to have in the future.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Kauffman, Tom
Sent: Wednesday, May 06, 2009 7:38 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] LTO for long term archiving

Soapbox time.

The media is not important - any sane retention policy will require that the 
information be copied at least annually, with enough copies for redundancy.

[...]

Re: how to view the content of a database (db)

2009-05-06 Thread Joerg Pohlmann
Hi Zoltan. I stand corrected on the release level of your TSM server. That
does not change my comments about the importance of the volume history/RPF,
and the inability to rebuild the TSM server if you do not have the volume
history. Even if you have knowledge (for example paper and pencil) of which
tape contains the latest database backup, you cannot restore the database.

Joerg Pohlmann
250-245-9863

Re: Tough questions for Netapp representatives

2009-05-06 Thread Richard Rhodes
We are in the process of implementing a SnapMirror/SnapVault setup on IBM
N series (i.e. NetApp) for one of our applications.  It is using SnapMirror
for DR (async replication) and SnapVault for backups.

The interesting thing about SnapVault is that it's like TSM - incremental
forever.  It makes a full copy the first time, then only does incrementals
after that.

Rick

> On the other hand, Snapvault is made for the daily, short term backup. If
> you can afford it, lie, cheat and steal to get this!   It basically let's
> you do the normal type of netapp snapshots, but let's you store it on a
> separate file server.  (I.E. cheaper, lower performance SATA disk)
> Although we haven't enabled it yet, they also let you do their form of
> deduplication on the snapvault side while leaving the source.  The
problem
> with snapvault is that it doesn't really work with tape.  You can hack up
> some scripts to manage this sort of this, but it's ugly. (Basically a
> snapmirror to tape)
>




Re: how to view the content of a database (db)

2009-05-06 Thread Richard Sims

On May 6, 2009, at 1:50 PM, Desalegne Guangul wrote:


ANR8208W TCP/IP driver unable to initialize due to error in using Port
1500,
reason code 10048.


The 10xxx error number series is the Windows set.
10048 is WSAEADDRINUSE, Address already in use.
Port 1500 is the standard TSM server port.
You attempted to run 'dsmserv' when there was already one running.
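
If the goal was to run the offline display utility, the running server
instance would have to be halted first (a sketch; the admin ID and password
are placeholders, and note that HALT stops the server for all clients):

dsmadmc -id=admin -password=secret halt
dsmserv.exe display dbbackupvolume devc=file vol=d8820567.dbv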

   Richard Sims


NDMP and TSM 6.1 TOC support

2009-05-06 Thread john bremer

All,

One of our NDMP backup customers recently sent us an announcement from
NetApp detailing the integration of NetApp SnapMirror to Tape and
Snapshot technologies with IBM TSM 6.  The enhancement will greatly
speed up what used to be cumulative differential backups with NDMP to TSM.

Later, the following was sent to my customer regarding the same
announcement:

==
"Faster full image backup with new Network Data Management Protocol
(NDMP) integration SnapMirror to Tape integration enables backup of
large filesystems to secondary media for disaster recovery.

Extremely fast incremental backup
Note in TSM6.1 TOCs are not supported with NDMP
If using TOC - Don't upgrade"



Has anyone seen similar information about these changes?  Are TOCs no
longer supported in the TSM 6.1 server?

Thanks.

John Bremer


Re: NDMP and TSM 6.1 TOC support

2009-05-06 Thread Remco Post

On May 6, 2009, at 20:18 , john bremer wrote:


All,

One of our NDMP backup customers recently sent us an announcement from
NetApp detailing the integration of NetApp SnapMirror to Tape and
Snapshot technologies with IBM TSM 6.  The enhancement will greatly
speed up what used to be cumulative differential backups with NDMP to
TSM.



Indeed, with TSM server version 6.1 you can SnapMirror to tape. With
client 6.1 you can SnapDiff to TSM using either a Windows or AIX
proxy.


Later, the following was sent to my customer regarding the same
announcement:

==
"Faster full image backup with new Network Data Management Protocol
(NDMP) integration SnapMirror to Tape integration enables backup of
large filesystems to secondary media for disaster recovery.

Extremely fast incremental backup
Note in TSM6.1 TOCs are not supported with NDMP
   If using TOC - Don't upgrade"



Has anyone seen similar information about these changes?  Are TOCs no
longer supported in the TSM 6.1 server?



Indeed, in 6.1.0.0 there is no support for TOCs; if you need them,
wait for 6.1.2.0 (or so I've heard).


Thanks.

John Bremer


--
Met vriendelijke groeten/Kind regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622


Re: NDMP and TSM 6.1 TOC support

2009-05-06 Thread Colin Dawson
Howdy,

The TSM support for NDMP backups using TOCs was disabled at the GA of
6.1.0.0 due to issues that were identified with performance.  Those issues
are being worked on and are expected to be resolved in a fix-pack for TSM
V6.  At the current time, this fix is expected in the 6.1.2.0 fix-pack,
which is tentatively scheduled for delivery at the end of June.  Please
note that the schedule and target fix-pack are our current projections and
may be subject to change at IBM's discretion.

The other server function that utilizes TOCs is BACKUPSETS.  The
support for BACKUPSETS was also disabled at TSM 6.1.0.0 GA and is also
being worked on.  This is also expected to be fixed in the same fix-pack as
mentioned above.

Thanks,
Colin

-
Colin Dawson
TSM Server Development
col...@us.ibm.com



  From:   john bremer 

  To: ADSM-L@VM.MARIST.EDU

  Date:   05/06/2009 11:45 AM

  Subject:[ADSM-L] NDMP and TSM 6.1 TOC support






[...]


Re: Register Node or create client schedule fails using ISC

2009-05-06 Thread Carol Trible
The missing dsmcmd.xml file will cause the error from the Administration
Center on ISC. That file should have been placed in
the /usr/tivoli/tsm/server/bin directory when the server code was
installed. I believe the fileset is tivoli.tsm.server.webcon.
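
On AIX, a quick sanity check for both (suggested commands, not from the
original post):

   lslpp -l tivoli.tsm.server.webcon
   ls -l /usr/tivoli/tsm/server/bin/dsmcmd.xml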


Carol Trible
 IBM Tivoli Storage Manager Development





I am running TSM 5.5.2.1 on AIX 5.3 TL8.

When I register a new node or create a client schedule using the ISC,
logged in as iscadmin, I get the following error from the ISC.
Create Client Node

  Exception Type PortletException
Message com.tivoli.dsm.snapin.DsmLaunchedPortlet:base.handlePSTrigger
Exception java.lang.NullPointerException
at com.tivoli.dsm.adminapi.api.DsmAdminAPI.SearchOpGroup
(DsmAdminAPI.java:1855)
at com.tivoli.dsm.adminapi.api.DsmAdminAPI.queryOperation
(DsmAdminAPI.java:778)
at com.tivoli.dsm.snapin.ps.DsmGenericValidator.GetExecuteOpBean
(DsmGenericValidator.java:1630)
at com.tivoli.dsm.snapin.ps.DsmGenericValidator.ParseNameToken
(DsmGenericValidator.java:1135)
at com.tivoli.dsm.snapin.ps.DsmGenericValidator.validate
(DsmGenericValidator.java:545)
at com.ibm.psw.wcl.core.form.WForm.performFormValidation(Unknown
Source)
at com.ibm.psw.wcl.core.form.WForm.handleValidate(Unknown Source)
at com.ibm.psw.wcl.components.wizard.WWizardStep.validate(Unknown
Source)
at com.ibm.psw.wcl.components.wizard.WWizard
$EWizardLayout.commandPerformed(Unknown Source)
at com.ibm.psw.wcl.core.CommandHandler.handleCommand(Unknown
Source)
at com.ibm.psw.wcl.core.form.AWInputComponent
$EInputComponentCommandListener.commandPerformed(Unknown Source)
at com.ibm.psw.wcl.core.form.WForm.callInputCommandListeners
(Unknown Source)
at com.ibm.psw.wcl.core.form.WForm.handleCommand(Unknown Source)
at com.ibm.psw.wcl.core.form.WForm$EFormCallback.handleTrigger
(Unknown Source)
at com.ibm.psw.wcl.core.trigger.Trigger.process(Unknown Source)
at com.ibm.psw.wcl.core.trigger.TriggerManager.processTrigger
(Unknown Source)
at
com.ibm.psw.wcl.portlet.legacy.WclPortletTriggerManager.handleRequest
(Unknown Source)
at com.ibm.psw.wcl.portlet.legacy.WclPortletFacade.handleRequest
(Unknown Source)
at com.ibm.psw.wcl.portlet.legacy.WclPortletFacade.handleRequest
(Unknown Source)
at com.tivoli.dsm.snapin.DsmBasePortlet.handlePSTrigger
(DsmBasePortlet.java:791)
at com.tivoli.dsm.snapin.DsmLaunchedPortlet.actionPerformedImpl
(DsmLaunchedPortlet.java:270)
at com.tivoli.dsm.snapin.DsmBasePortlet.actionPerformed
(DsmBasePortlet.java:503)
at com.ibm.wps.pe.pc.legacy.SPIPortletInterceptorImpl.handleEvents
(SPIPortletInterceptorImpl.java(Compiled Code))
at
com.ibm.wps.pe.pc.legacy.invoker.impl.PortletDelegateImpl._dispatch
(PortletDelegateImpl.java(Compiled Code))
at
com.ibm.wps.pe.pc.legacy.invoker.impl.PortletDelegateImpl.dispatch
(PortletDelegateImpl.java(Compiled Code))
at org.apache.jetspeed.portlet.Portlet.doPost(Portlet.java:512)
at javax.servlet.http.HttpServlet.service(HttpServlet.java(Compiled
Code))
at com.ibm.wps.pe.pc.legacy.cache.CacheablePortlet.service
(CacheablePortlet.java(Compiled Code))
at javax.servlet.http.HttpServlet.service(HttpServlet.java(Compiled
Code))
at org.apache.jetspeed.portlet.Portlet.service(Portlet.java
(Compiled Code))
at com.ibm.ws.webcontainer.servlet.ServletWrapper.service
(ServletWrapper.java(Compiled Code))
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest
(ServletWrapper.java(Compiled Code))
at com.ibm.ws.webcontainer.webapp.WebAppRequestDispatcher.include
(WebAppRequestDispatcher.java(Compiled Code))
at
com.ibm.wps.pe.pc.legacy.invoker.impl.PortletInvokerImpl.callMethod
(PortletInvokerImpl.java(Compiled Code))
at com.ibm.wps.pe.pc.legacy.invoker.impl.PortletInvokerImpl.action
(PortletInvokerImpl.java(Compiled Code))
at com.ibm.wps.pe.pc.legacy.PortletContainerImpl.callPortletMethod
(PortletContainerImpl.java(Compiled Code))
at com.ibm.wps.pe.pc.legacy.EventEnvironmentImpl.includePortlet
(EventEnvironmentImpl.java:177)
at com.ibm.wps.pe.pc.legacy.event.ActionEventImpl.prepare
(ActionEventImpl.java:201)
at
com.ibm.wps.pe.pc.legacy.event.EventQueueManager.processEventLoop
(EventQueueManager.java:89)
at com.ibm.wps.pe.pc.legacy.PortletContainerImpl.performEvents
(PortletContainerImpl.java:221)
at com.ibm.wps.pe.pc.PortletContainerImpl.performEvents
(PortletContainerImpl.java:227)
at com.ibm.wps.engine.phases.WPActionPhase.processPortlets
(WPActionPhase.java:947)
at com.ibm.wps.engine.phases.WPActionPhase.execute
(WPActionPhase.java:489)
at com.ibm.wps.state.phases.AbstractActionPhase.next
(AbstractActionPhase.java:130)
at com.ibm.wps.engine.phases.config.CfgPhase$Action.next
(CfgPhase.java:303)
at com.ibm.wps.engine.Se

Re: NDMP and TSM 6.1 TOC support

2009-05-06 Thread john bremer

Colin and Remco,

Thank you for your responses.

Were the "issues that were identified with performance" the excruciating
length of time to load a TOC (even from disk) to the TSM database?

I hope we will be given a mechanism to retain a TOC in DB2, once it is
created, for as long as we need it.

Thanks again.

John Bremer



Howdy,

The TSM support for NDMP backups using TOCs was disabled at the GA for
6.1.0.0 due to issues that were identified with performance.  Those issues are
being worked on and are expected to be resolved in a fix-pack for TSM V6.
At the current time, this fix is expected in the 6.1.2.0 fix-pack, which is
tentatively scheduled for delivery at the end of June.  Please note that the
schedule and target fix-pack are our current projections and may be subject
to change at IBM's discretion.

The other server function that utilizes TOCs is BACKUPSETS.  The
support for BACKUPSETS was also disabled at TSM 6.1.0.0 GA and is also
being worked on.  This is also expected to be fixed in the same fix-pack
mentioned above.

Thanks,
Colin

-
Colin Dawson
TSM Server Development
col...@us.ibm.com



  From:   john bremer 

  To: ADSM-L@VM.MARIST.EDU

  Date:   05/06/2009 11:45 AM

  Subject:[ADSM-L] NDMP and TSM 6.1 TOC support






All,

One of our NDMP backup customers recently sent us an announcement from
NetApp detailing the integration of NetApp SnapMirror to Tape and
Snapshot technologies with IBM TSM 6.  The enhancement will greatly
speed up what used to be cumulative differential backups with NDMP to TSM.

Later, the following was sent to my customer regarding the same
announcement:

==
"Faster full image backup with new Network Data Management Protocol
(NDMP) integration SnapMirror to Tape integration enables backup of
large filesystems to secondary media for disaster recovery.

Extremely fast incremental backup
Note in TSM6.1 TOCs are not supported with NDMP
 If using TOC - Don't upgrade"



Has anyone seen similar information about these changes?  Are TOCs no
longer supported in the TSM 6.1 server?

Thanks.

John Bremer



Re: Register Node or create client schedule fails using ISC

2009-05-06 Thread James Choate
Thanks,
That was it.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Carol 
Trible
Sent: Wednesday, May 06, 2009 1:02 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Register Node or create client schedule fails using ISC

The missing dsmcmd.xml file will cause the error from the Administration
Center on ISC. That file should have been placed in
the /usr/tivoli/tsm/server/bin directory when the server code was
installed. I believe the fileset is tivoli.tsm.server.webcon.


Carol Trible
 IBM Tivoli Storage Manager Development





[quoted original problem report and stack trace snipped; see the earlier post in this thread]

Re: Tough questions for Netapp representatives

2009-05-06 Thread Clark, Robert A
Unfortunately, this turns the "Can I get it done in time without running
out of scratch?" problem into "Can I get it done in more time, across a
constrained network, without running out of disk storage pool?".

Not always a win. Also, 5.4 solved the "How do I make an offsite copy of
NDMP data?" problem, even if you are going NDMP direct to tape.

[RC]

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Paul Zarnowski
Sent: Wednesday, May 06, 2009 8:58 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Tough questions for Netapp representatives

Are you referring to the older version of NDMP, or the "Filer-to-Server"
capability that was added with TSM 5.4?  This allows you (I believe) to
store the NDMP data in normal TSM storage pools, rather than sending
directly to a NAS-attached tape library.
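
A minimal sketch of that filer-to-server setup (host name, credentials,
and volume are hypothetical):

   define datamover nas1 type=nas hladdress=filer.example.com lladdress=10000 userid=ndmpadmin password=secret dataformat=netappdump
   backup node nas1 /vol/vol1 mode=full toc=yes

with the management class copy destination pointing at a normal
(NATIVE-format) storage pool rather than a NAS-attached library pool.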

At 11:38 AM 5/6/2009, Shawn Drew wrote:
>Also,
>since you have to do periodic fulls and differential incrementals, it 
>generates so much waste data that we can't store all the tapes in the 
>library, so we have to run around for tapes.


--
Paul Zarnowski                            Ph: 607-255-4757
Manager, Storage Services                 Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801    Em: p...@cornell.edu


DISCLAIMER:
This message is intended for the sole use of the addressee, and may contain 
information that is privileged, confidential and exempt from disclosure under 
applicable law. If you are not the addressee you are hereby notified that you 
may not use, copy, disclose, or distribute to anyone the message or any 
information contained in the message. If you have received this message in 
error, please immediately advise the sender by reply email and delete this 
message.


Re: LTO for long term archiving

2009-05-06 Thread Len Boyle
Here is a URL for a company selling 300 GB disks with a 50-year media archive
life. I wonder if TSM will work with this media.

http://www.inphase-technologies.com/products/default.asp?tnn=3

Here are the URLs for GE's story. Of course, this is not a product yet.

http://hardware.slashdot.org/article.pl?sid=09/04/27/179205&art_pos=2

http://www.nytimes.com/2009/04/27/technology/business-computing/27disk.html?_r=1





-Original Message-
From: Len Boyle 
Sent: Wednesday, May 06, 2009 4:49 PM
To: ADSM: Dist Stor Manager
Subject: RE: LTO for long term archiving

Over ten years ago I was at a presentation by an MD who was proposing the
creation of an XML standard for medical records to help with this. His personal
stake was in making it easier to use historical medical records for medical
research.

Also along these lines, I remember seeing a small article in a trade magazine
that a company, I think Rockwell, was going to make a system for recording
holograms on microfiche. There would be a machine that could read the
microfiche and display it remotely. In theory the microfiche can be used for over
100 years, better than existing computer optical and magnetic media.
Also, because it was a hologram, if the media was scratched or otherwise damaged,
the data could still be retrieved. As far as I know, this was never released to
the public.

Also along these lines, I saw an article this past week that GE was working on
new optical media, which I believe was on the order of 500 GB. This would be a
large increase over existing DVDs but still smaller than existing tapes and
disks.


len

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Kelly 
Lipp
Sent: Wednesday, May 06, 2009 2:04 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] LTO for long term archiving

Hear, hear.  I'll reiterate: the data must be stored in the lowest common
denominator format.  No databases, no strange and wondrous binary formats.
Simple text is the best.  That's why microfiche is still used!

I would be very concerned about medical data: it is in so many different
formats, and one has to wonder if our elected officials and their bureaucratic
minions in the various regulatory agencies are aware of these facts and the
problem they are going to have in the future.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Kauffman, Tom
Sent: Wednesday, May 06, 2009 7:38 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] LTO for long term archiving

Soapbox time.

The media is not important - any sane retention policy will require that the 
information be copied at least annually, with enough copies for redundancy.

BUT -

to be able to PROCESS the data 25 years from now imposes additional 
requirements.

First and foremost -- the data will NOT be in any format except unloaded
flat file in ASCII, UTF-8, or UTF-16 encoding. You will NOT be able to process
proprietary data formats 25 years from now.

Look at the Domesday Book - the version written on parchment or vellum in 1086 
is still readable. The BBC digitized it in 1986 and found it almost impossible 
to find systems to read the digitized version in 2002 - 16 years later. You're 
trying for 25 years.

Tom Kauffman
NIBCO, Inc

> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf
> Of Evans, Bill
> Sent: Tuesday, May 05, 2009 5:34 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: LTO for long term archiving
>
> I truly doubt that archiving drives, servers, and tapes for 25 years
> each time the technology updates will let you read the tapes, because
> the drive and server will probably not even boot up and run.
>
> You will have to update the data every two LTO cycles or so.  LTO will
> read two generations back and 25 years from now we will be on LTO14 or
> 15, so LTO4 is toast.   Or, more probably, we will be storing into some
> kind of flash drive at Petabyte capacities.
>
> I think that Blu-ray DVD will meet your 25-year mark without having to
> retrieve and update to new media.  I know that my 1986 CDs (those not
> seriously scratched or warped from lying on the dash) still work on
> today's systems.  Properly stored DVD's would need to have players
> stored also, but, these are mechanically simpler than LTO drives and
> servers and would most likely still run.  They are also much cheaper,
> so
> having a new DVD in storage every 5 years is no big expense.
>
> The bigger issue is where and how do you keep track of all of this?  I
> think TSM's HSM is probably capable, however, I'm not real comfortable
> recommending it.  We have had several years of problems running HSM on
> Solaris and have finally turned it off.
>
> What is needed is a good archiving tool that can keep an updated DB of
> content and storage location that users can browse.

Re: LTO for long term archiving

2009-05-06 Thread Len Boyle
Over ten years ago I was at a presentation by an MD who was proposing the
creation of an XML standard for medical records to help with this. His personal
stake was in making it easier to use historical medical records for medical
research.

Also along these lines, I remember seeing a small article in a trade magazine
that a company, I think Rockwell, was going to make a system for recording
holograms on microfiche. There would be a machine that could read the
microfiche and display it remotely. In theory the microfiche can be used for over
100 years, better than existing computer optical and magnetic media.
Also, because it was a hologram, if the media was scratched or otherwise damaged,
the data could still be retrieved. As far as I know, this was never released to
the public.

Also along these lines, I saw an article this past week that GE was working on
new optical media, which I believe was on the order of 500 GB. This would be a
large increase over existing DVDs but still smaller than existing tapes and
disks.


len

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Kelly 
Lipp
Sent: Wednesday, May 06, 2009 2:04 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] LTO for long term archiving

Hear, hear.  I'll reiterate: the data must be stored in the lowest common
denominator format.  No databases, no strange and wondrous binary formats.
Simple text is the best.  That's why microfiche is still used!

I would be very concerned about medical data: it is in so many different
formats, and one has to wonder if our elected officials and their bureaucratic
minions in the various regulatory agencies are aware of these facts and the
problem they are going to have in the future.

Kelly Lipp
CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777 x7105
www.storserver.com


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of 
Kauffman, Tom
Sent: Wednesday, May 06, 2009 7:38 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] LTO for long term archiving

Soapbox time.

The media is not important - any sane retention policy will require that the 
information be copied at least annually, with enough copies for redundancy.

BUT -

to be able to PROCESS the data 25 years from now imposes additional 
requirements.

First and foremost -- the data will NOT be in any format except unloaded
flat file in ASCII, UTF-8, or UTF-16 encoding. You will NOT be able to process
proprietary data formats 25 years from now.

Look at the Domesday Book - the version written on parchment or vellum in 1086 
is still readable. The BBC digitized it in 1986 and found it almost impossible 
to find systems to read the digitized version in 2002 - 16 years later. You're 
trying for 25 years.

Tom Kauffman
NIBCO, Inc

> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf
> Of Evans, Bill
> Sent: Tuesday, May 05, 2009 5:34 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: LTO for long term archiving
>
> I truly doubt that archiving drives, servers, and tapes for 25 years
> each time the technology updates will let you read the tapes, because
> the drive and server will probably not even boot up and run.
>
> You will have to update the data every two LTO cycles or so.  LTO will
> read two generations back and 25 years from now we will be on LTO14 or
> 15, so LTO4 is toast.   Or, more probably, we will be storing into some
> kind of flash drive at Petabyte capacities.
>
> I think that Blu-ray DVD will meet your 25-year mark without having to
> retrieve and update to new media.  I know that my 1986 CDs (those not
> seriously scratched or warped from lying on the dash) still work on
> today's systems.  Properly stored DVD's would need to have players
> stored also, but, these are mechanically simpler than LTO drives and
> servers and would most likely still run.  They are also much cheaper,
> so
> having a new DVD in storage every 5 years is no big expense.
>
> The bigger issue is where and how do you keep track of all of this?  I
> think TSM's HSM is probably capable, however, I'm not real comfortable
> recommending it.  We have had several years of problems running HSM on
> Solaris and have finally turned it off.
>
> What is needed is a good archiving tool that can keep an updated DB of
> content and storage location that users can browse.
>
> We recently restored a PowerPoint file written by Office version (?)
> on
> an OS 9 Mac.  This could NOT be read by Office XP, 2003, 2007 (PC) or
> Office 2004 or 2008 (Mac).  We had to find an old Mac OS/9 that still
> had a copy of Office 2000, read it, write it back to the 2000 version
> .ppt file before any 2004-2009 software could read it.  If it had been
> 18 years instead of 9 years, then we never would have been able to read
> it at all, that old OS 9 Mac would never have been saved.
>
> This will happen more and more as our progr

Re: Feature in TSM6.1...

2009-05-06 Thread Kiran
Hi,

Actually, I want to expire only a particular type of data from my archive copy
group. Is that possible?

Regards,
Kiran.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of
Remco Post
Sent: Wednesday, May 06, 2009 6:40 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Feature in TSM6.1...

Even with TSM 5.5 you can select the management class for your data from
the include file. Now, the hard part is determining what type of data
you are dealing with.
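
As a concrete illustration of that binding (management class name and
paths here are hypothetical), backups can be steered from the
include-exclude list, and archives via the -archmc option:

   include /data/logs/.../*.log LOG180D_MC
   dsmc archive "/data/logs/*" -archmc=LOG180D_MC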

On 6 mei 2009, at 09:38, Tandon K wrote:

> Hi,
>
> Does TSM 6.1 have any feature of implementing Policies in Policy
> Domain based on Data Type in Backup and Archive Copy Groups?
>
> Regards,
> Tandon.

--
Met vriendelijke groeten/Kind regards,

Remco Post
r.p...@plcs.nl

