Re: [ADSM-L] dsmserv.exe 6.3.1 command line troubleshooting

2012-07-24 Thread Josh Davis
Sven,
"Run as administrator" did it.  Thanks.  I knew I was missing something
2008 specific.

Ullrich,
Sorry, I should have put my command lines in the original post.
I already use -k and -u flags.

Also, as shown in the OP, I was already in the instance directory.
Habit, and easier than messing with DSMSERVDIR if that still works.
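For anyone who lands here from a search, the working sequence from this thread,
in one place (the path and instance name are examples only):

```
rem From a cmd.exe started via right-click > "Run as Administrator" (UAC),
rem change to the directory holding this instance's dsmserv.opt, then start
rem the server in the foreground.  D:\tsminst1 and server1 are examples.
cd /d D:\tsminst1
dsmserv -k server1
```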

With friendly Regards,
Josh-Daniel S. Davis

-- Original message --
From: Sven Seefeld 
Date: Tue, Jul 24, 2012 at 12:48 AM

> Hi Josh,
>
> that may be a windows thing (UAC):
> Even though your instance owner is admin, you have to explicitly
> request windows to run the cmd-prompt in (100%) admin-mode: right
> click cmd and choose "Run as Administrator", same thing with "dsmadmc
> -console" on my windows server :(
>
>
> Regards,
>
>Sven
>

-- Forwarded message --
From: Ullrich Mänz 
Date: Tue, Jul 24, 2012 at 3:09 AM

> Hello Josh,
>
> in a Windows environment you need to start dsmserv in foreground using the
> parameter "-k"
> Try to run from commandline (so called DOS box):
>
> dsmserv.exe -k server1
>
> (server1 is usually the first TSM instance name)
>
> You need to cd to the instance directory where dsmserv.opt resides, first.
> I've also found problems starting dsmserv in the foreground if "C:\program
> files\tivoli\tsm\server" is part of the program path (there might be
> another dsmserv.opt in this path which is located/read before your
> instance option file).
>
> best regards
>
> Ullrich Mänz
> Data Center Services


dsmserv.exe 6.3.1 command line troubleshooting

2012-07-23 Thread Josh Davis
Windows 2008R2 x64 Datacenter
TSM Server 6.3.0.0 for Windows (12:47:17 on Oct 10 2011)

My current server hangs on start, and I can't figure out why.

*wheeze* back in the day, you could run dsmserv.exe from a command prompt
on Windows, and get stdout.

So, I log in as the instance user, run cmd.exe, go to my instance
directory, and run dsmserv.exe (it's in my path).

A separate console window pops up, and immediately closes.
Redirecting doesn't put anything in the file.
If I look closely, I see some output in the fraction of a second that the
new console window is up.

Is there not a way to keep dsmserv from spawning a NEW window anymore?

Am I really going to have to resort to creating a batch file with a pause
at the end?

I don't know when this became the case.  I don't recall it in 6.2, but I
don't have to use CLI all that often.

I'm hoping this is just some 2k8 thing and being a unixy guy, I'm just too
stupid to figure it out.

With friendly Regards,
Josh-Daniel S. Davis


Re: Deduplication with TSM.

2012-04-28 Thread Josh Davis
The size of storage is not enough information to size a system.
The number of sessions determines system size.

If you have four clients at 1 gig per night, you could run 8GB RAM and a 2GHz
Core 2 and be okay.

Realistically, 32GB per instance is good.  db2sysc will use about 20GB per
instance if it's available.
This handles a couple hundred clients, deduplication, etc.

As for processors, one processor for every 2 DB directories, plus one
processor for TSM internals is minimal.
If you will have high I/O, then one hardware thread for every IDENTIFY
DUPLICATES process is good.
If you will use client-side dedupe most of the time, then you only need
IDENTIFY DUPLICATES when you move data into the pool server-side.

Higher GHz matters for the identify processes, though branch prediction is
still important (POWER5 or POWER7 are better than POWER6)
Higher hardware thread counts matter for client session responsiveness
(POWER7 uses fewer cores than POWER6/POWER5)
Higher I/O backplane matters for amount of raw data coming in (Low end
POWER wins over low-end Intel)
Higher IOPS for the DB volumes are necessary to keep clients from slowing
down.  (hash compares)
Lower network latency matters for client performance (hash compares)
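To make the rules of thumb above concrete, here's a toy calculator. The
constants come from this post's guidance (32GB RAM per instance; one processor
per 2 DB directories plus one for TSM internals; a hardware thread per
IDENTIFY DUPLICATES process); the function itself is purely illustrative, not
an IBM sizing formula:

```python
# Back-of-the-envelope TSM server sizing from the rules of thumb above.
# Illustrative only: the constants are this post's guidance, and the
# function name and shape are made up for the sketch.
import math

def size_tsm_server(instances, db_dirs_per_instance, identify_processes=0):
    ram_gb = 32 * instances                      # ~32GB per instance
    min_cpus = instances * (math.ceil(db_dirs_per_instance / 2) + 1)
    return {"ram_gb": ram_gb, "min_cpus": min_cpus,
            "identify_threads": identify_processes}

# One instance, 4 DB directories, 2 identify processes:
print(size_tsm_server(1, 4, identify_processes=2))
# → {'ram_gb': 32, 'min_cpus': 3, 'identify_threads': 2}
```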

With friendly Regards,
Josh-Daniel S. Davis
OmniTech Industries




On Thu, Apr 26, 2012 at 8:43 AM, Francisco Molero  wrote:

> Hi colleagues,
>
>
> I am going to implement a very big disk pool with dedup, around 100 TB, as a
> TSM disk storage pool (neither VTL nor DataDomain).  Does anybody know what
> TSM server I need (RAM and CPU), or what dedup ratio I can hope for?  I am
> thinking about source dedup...
>
>
> Any experiences?
>
>
> Thanks..
>


BACKUP DB DB08C000 lvmread.c(1245): Memory allocation failed: object Resync read page buffer, size 4096

2011-04-11 Thread Josh Davis
I ran into an odd issue, and it took me a while to figure out the
cause.  I'm posting this because I found very few hits about
lvmread.c, and none matched.  Most gave info about memory consumption,
which is no factor here.

If you have to replace the OS on your TSM server, and you're not using
AD, then be sure to re-take ownership of the storage volumes.

I didn't notice problems with the FILE class stgpool volumes, but the
DBB/DBS volumes, BACKUP DEVCONFIG, PREPARE, etc did fail.

This kind of problem is more likely if you lose your user DB, because
Windows uses machine IDs to generate user IDs, and you don't really get
to make your own UIDs.  So, if you use AD and lose the user DB, or if
you're not using AD and you replace the host but keep the storage drives,
then this will be an issue.
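A hypothetical repair sequence for the ownership problem (the account name
tsminst1 is an assumption, the path is taken from the log sample; check the
exact flags with takeown /? and icacls /? before running anything):

```
rem Re-take ownership of the TSM volume tree after reinstalling the OS
rem without AD, then re-grant the instance account full control.
takeown /f H:\TIVOLI\TSM\SERVER1 /r /d y
icacls H:\TIVOLI\TSM\SERVER1 /grant tsminst1:(OI)(CI)F /t
```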

Here's a sample of the output, with the thread context reports omitted:

2011-04-11 06:00:30 ANR1360I Output volume
H:\TIVOLI\TSM\SERVER1\02519629.DBS opened (sequence number 1).
(SESSION: 2390, PROCESS: 234)
2011-04-11 06:00:30 ANR0132E lvmread.c(1245): Memory allocation
failed: object Resync read page buffer, size 4096. (SESSION: 2390,
PROCESS: 234)
2011-04-11 06:00:30 ANRD_1112316882 (iccopy.c:1625)
Thread<26>: Unable to read from db volume. (SESSION: 2390, PROCESS:
234)
2011-04-11 06:00:30 ANRD Thread<26> issued message  from:
(SESSION: 2390, PROCESS: 234)
2011-04-11 06:00:30 ANRD_2749162976 (icback.c:406) Thread<26>:
Backup rc=6. (SESSION: 2390, PROCESS: 234)
2011-04-11 06:00:30 ANRD Thread<26> issued message  from:
(SESSION: 2390, PROCESS: 234)
2011-04-11 06:00:30 ANR4581W Database backup/restore terminated -
internal server error detected. (SESSION: 2390, PROCESS: 234)
2011-04-11 06:00:30 ANRD Thread<26> issued message 4581 from:
(SESSION: 2390, PROCESS: 234)
2011-04-11 06:00:30 ANRD Thread<26>  DB08C000 Unknown
(SESSION: 2390, PROCESS: 234)

...

-JD


Re: Windows servers with a kazillion files and Win2K8...

2011-03-08 Thread Josh Davis
The issue is that with 75M files, it's paramount to keep the metadata
in something faster than spinning disk.

Windows 2003 32-bit uses 36-bit memory addressing.  You can use 32G of
RAM.  That might help, but what you WANT in cache never seems to STAY
in cache.  Bulk data pushes your MFT out of cache.

What I'd recommend is using something like IBM's Easy Tier, though
most SAN manufacturers have something similar.

Basically, you have a bunch of spinning disk for your bulk data, and a
small number of enterprise grade SSDs installed.  The array code will
figure out where your IOPS bottlenecks are, and will move just the
offending blocks into the faster media.  This amounts to having your
MFT and a few small hotspots on flash.
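As a toy illustration of the idea (real Easy Tier works on extents with much
smarter heuristics; the block numbers and I/O trace here are invented):

```python
# Toy tiering sketch: track I/O counts per block and "promote" the hottest
# blocks to fast media.  MFT-like metadata blocks get hit on every scan,
# while bulk data blocks are touched once, so the metadata wins the SSD slots.
from collections import Counter

def hottest_blocks(io_trace, ssd_blocks):
    """Return the blocks worth placing on SSD, by access count."""
    counts = Counter(io_trace)
    return [blk for blk, _ in counts.most_common(ssd_blocks)]

# Blocks 0-3 (metadata) hit on every scan; blocks 1000+ touched once each.
trace = [0, 1, 2, 3] * 100 + list(range(1000, 1100))
print(hottest_blocks(trace, 4))  # the metadata hot set ends up on flash
```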

It would probably take one or two filesystem scans for optimal
performance to be reached.

While Enterprise SSD is not as fast as being in RAM, it's much faster
than being on spinning disk.  Don't confuse this with consumer-grade SSD,
which just won't have the IOPS performance for what you're doing.

For IBM, the lowest cost way to get into EasyTier is the "StorWize
V7000".  It's a combination storage drawer and SVC.  I'm not sure
about EMC, Hitachi and NetApp's comparable products.

With friendly Regards,
Josh-Daniel S. Davis

On Mon, Feb 28, 2011 at 9:51 AM, Strand, Neil B.  wrote:
> Wanda,
>   If it is a 32 bit system, the most memory that can be addressed is
> 4G.
> 2^32 = 4,294,967,296 bytes
> 4,294,967,296 / (1,024 x 1,024) = 4,096 MB = 4GB
>
> Moving to a 64bit system would allow additional memory to be fed to the
> beast.
>
> If the windows servers are not running the application but simply
> providing filespace to the application that is running on another
> server, see if the following is possible:
> - Implement DFS and provide a virtual tree that is composed of multiple
> physical data repositories. Each repository could be backed up using a
> proxy - recovery may be a bit convoluted, but possible.  Identify the
> problem not as a backup problem but a data management problem that
> requires some level of granularity to be introduced to the environment.
>
> Cheers,
> Neil Strand
> Storage Engineer - Legg Mason
> Baltimore, MD.
> (410) 580-7491
> Whatever you can do or believe you can, begin it.
> Boldness has genius, power and magic.
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Prather, Wanda
> Sent: Friday, February 25, 2011 2:35 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Windows servers with a kazillion files and
> Win2K8...
>
> Thanks for the reply and the reference; I'll read that.
> It's a 32 bit system.
> Do you think adding RAM will help with the issues navigating the file
> tree?
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Storer, Raymond
> Sent: Friday, February 25, 2011 2:22 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Windows servers with a kazillion files and
> Win2K8...
>
> Wanda, is this a 32 or 64 bit system? An NTFS file system will support
> about 4 Billion files on a single volume
> http://technet.microsoft.com/en-us/library/cc781134(WS.10).aspx . If you
> are having performance issues with this and you can switch it to a 64bit
> platform and add loads of RAM, I would do it.
>
> Ray
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Prather, Wanda
> Sent: Friday, February 25, 2011 2:03 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] Windows servers with a kazillion files and Win2K8...
>
> I have a site with an application that generates kazillions of tiny
> files that are stored forever.
> I've already yelled about it, but it's a purchased, customer-facing
> black-box app that they really can't change.
> (Naturally, when it was bought umpty years ago, nobody thought about the
> problem reaching this size or what the ramifications would be.)  Every
> day the app creates more files.
>
> They have multiple Win2K3 servers that already have multiple luns
> containing over 35M files each, one is over 75M files.
>
> We are using journaling to back them up successfully (most days).
> But it's a struggle just to expand the file tree with Windows explorer,
> and there are exposures on the days when the journal gets overrun (takes
> 72 hours for TSM to scan the filesystem and revalidate the journal).
>
> Looking for anything that might help save our bacon.
>
> Has anybody had experience with this issue and Win2K8?
> Does Win2K8 do any better than Win2K3 at handling huge numbers of files
> in 1 NTFS directory?
> Upgrading the OS is something application-independent we might be able
> to do.
>
> Thanks for any insight!
> W
>
>
> Wanda Prather  |  Senior Technical Specialist  |
> wprat...@icfi.com  |
> www.jasi.com ICF Jacob & Sundstrom  | 401 E. Pratt St,
> Suite 2214, Baltimore, MD 21202 | 410.539.1135
>
>

Re: Tsm backing up mysql databases

2011-03-08 Thread Josh Davis
I find that periodically, through system updates, I have to recompile
adsmpipe anyway.

I've got the original, and several builds for Linux, binary and source at:
http://omnitech.net/images/linkto/adsmpipe/adsmpipe.7z

With friendly Regards,
Josh-Daniel S. Davis
OmniTech Industries


On Thu, Mar 3, 2011 at 3:18 PM, Remco Post  wrote:
> Rick,
>
> I could mail you my copy of adsmpipe for linux64. BTW, it's not exactly 
> rocket science to compile
>
> On 3 Mar 2011, at 16:45, Richard van Denzel wrote:
>
>> You could always do a mysqldump and then just backup the dump.
>> We've got this running on several machines and it also never fails.
>>
>> Richard.
>>
>> 2011/3/1 Richard Rhodes 
>>
>>> At times I've wanted to play around with adsmpipe, but I could never find
>>> precompiled versions.  Are there precompiled versions (windows, aix)
>>> anywhere for downloading?
>>>
>>> Thanks
>>>
>>> Rick
>>>
>>>
>>>
>>> From:   Remco Post 
>>> To:     ADSM-L@VM.MARIST.EDU
>>> Date:   02/28/2011 10:15 AM
>>> Subject:        Re: Tsm backing up mysql databases
>>> Sent by:        "ADSM: Dist Stor Manager" 
>>>
>>>
>>>
>>> I've used adsmpipe for both postgresql and mysql with great success.
>>> Google for the redpaper redp-3980-00 for supporting scripts.
>>>
>>> --
>>>
>>> Gr., Remco
>>>
>>> On 24 Feb 2011 at 14:51, "Lee, Gary D."  wrote the
>>> following:
>>>
>>>> We now have a request to back up a server running some library app that
>>> uses mysql for its databases.
>>>>
>>>> The only guidance I have seen so far searching the internet is to use
>>> adsmpipe.
>>>>
>>>> Are any of you doing mysql backups, if so how?
>>>>
>>>>
>>>>
>>>> Gary Lee
>>>> Senior System Programmer
>>>> Ball State University
>>>> phone: 765-285-1310
>>>>
>>>
>>>
>>>
>>>
>>> -
>>> The information contained in this message is intended only for the
>>> personal and confidential use of the recipient(s) named above. If
>>> the reader of this message is not the intended recipient or an
>>> agent responsible for delivering it to the intended recipient, you
>>> are hereby notified that you have received this document in error
>>> and that any review, dissemination, distribution, or copying of
>>> this message is strictly prohibited. If you have received this
>>> communication in error, please notify us immediately, and delete
>>> the original message.
>>>
>


Re: EXPORT TOSERVER

2011-03-08 Thread Josh Davis
Last time I checked, the formal upgrade instructions say TSM 5.3 and
up are supported for migration, and that migration can be with DB
upgrade or via export.

That would imply to me that a TSM 5.3 server should be able to export
into a TSM 6.2 server; however, I wouldn't expect the inverse.
Usually, no more than 2 releases back.

With friendly Regards,
Josh-Daniel S. Davis


On Fri, Mar 4, 2011 at 3:31 PM, Bill Boyer  wrote:
> What are the TSM server version restrictions when doing an EXPORT NODE to
> another server? I have a client that wants to install a TSM 6.2 server and
> export some data from their existing 5.3 TSM server. I've been searching and
> must not have the right combination on keywords, but I'm not finding any
> requirements or restrictions on this. It would be difficult if we had to use
> media for this and they only want a few nodes from their existing server.
> And they really don't want to have to put any more effort into the old
> box... like upgrades.
>
>
> "Life is not about waiting for the storms to pass...
> It's about learning how to dance in the rain." - ??
>
> Bill Boyer
>


Re: 5 out of 9 aint bad

2011-03-08 Thread Josh Davis
Just to be sure, can you verify you have the EE license applied?

With friendly Regards,
Josh-Daniel S. Davis


On Fri, Mar 4, 2011 at 9:32 AM, Laks, Brian  wrote:
> I have 9 LTO-4 drives connected to our TSM server; only 5 of them work at
> any time after upgrading to a new fiber card.
>
> We have tried two different fiber cards (qlogic and emulex) and still only 5 
> drives.
>
> We have confirmed the latest drivers and firmware with IBM support.  The old
> 2Gb fiber card still works with all 9 drives, but is considerably slower than
> 5 drives on a 4Gb card.
>
> The new cards are both multi port cards, but only one port is being used.  
> Multipath drivers are not being used.  9 drives are zoned to one port.
>
> IBM support believes it to be hardware since the old card works, so we 
> purchased a second card of different manufacture.  Now the problem exists on 
> two fiber cards of different manufacture, so I'm real reluctant to think it's
> a hardware problem any more.
>
> Interestingly, the 5 good drives vary.  I can unload the drivers and reload 
> everything and drives that were previously unavailable work while drives that 
> were working are then unavailable.  It seems kind of random.  All the drives
> show up in the OS, and TSMDLST shows them all as well.  I uninstalled old
> drivers and reinstalled them exactly as per IBM support instructions, and 
> rebuilt the drive and library paths in tsm during each reload as per IBM 
> support guidelines.
>
> Has anyone seen anything like this?  I'm absolutely baffled.  I'm thinking we 
> are going to have to zone 5 drives to one port and 4 drives to the other, but 
> the SAN admin type is reluctant since all 9 drives work with the original 
> card.  My guess is that somewhere in the drivers it's smart enough to know
> that 9 LTO-4's to a single 4g port is silly in the first place.
>
> Maybe someone wants 4 LTO-4's so this problem just goes away :)  The 5 drives 
> work with fewer problems and better throughput than the 9 drives on the old 
> card.
>
> Thanks for reading, Have a great day.
>
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> CONFIDENTIALITY NOTICE: If you have received this email in error, please 
> immediately notify the sender by e-mail at the address shown.This email 
> transmission may contain confidential information.This information is 
> intended only for the use of the individual(s) or entity to whom it is 
> intended even if addressed incorrectly. Please delete it from your files if 
> you are not the intended recipient. Thank you for your compliance.
>


Re: Frustrated by slowness in TSM 6.2

2010-11-18 Thread Josh Davis
Maybe something simple like verifying TCPWIN on the receiving side is 2x
TCPBUF on the sender.
Set COMPRESS=NO to make sure you're not misreading retransmits.
Check topas during your local backup to itself.
Check nmon's disk stats during the backup to see if you've got a hot LUN.
Check the same from any disk perf monitoring.
Check errpt
Check your db2 logs for any sort of errors
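A quick sketch of checking that first rule of thumb. TCPWINDOWSIZE and
TCPBUFFSIZE are the real TSM option names, but the parsing and the sample
values here are illustrative only; verify actual values with QUERY OPTION:

```python
# Check the rule of thumb above: the receiver's TCPWINDOWSIZE should be
# at least twice the sender's TCPBUFFSIZE.  File contents are made up.

def get_option(opt_text, name):
    """Return the integer value of a dsm.opt-style option, or None."""
    for line in opt_text.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0].upper() == name:
            return int(parts[1])
    return None

sender = "COMPRESSION NO\nTCPBUFFSIZE 32\n"    # sending side (KB)
receiver = "TCPWINDOWSIZE 63\n"                # receiving side (KB)

buf = get_option(sender, "TCPBUFFSIZE")
win = get_option(receiver, "TCPWINDOWSIZE")
print("ok" if win >= 2 * buf else "receive window smaller than 2x send buffer")
```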

With the XIV, streaming thruput should be fine.  It's only the IOPS that
will be weak.  Your physical limit would be around 16k IOPS, though you have
on-disk cache and write combining, as well as the 120GB of cache (8*15).
You could run into some back-side 10GE saturation if your LUN pathing isn't
well balanced.

VIO servers also have some limitations.  If you're using VIO MPIO, are you
set for round robin at every stage?  By default, you'll be active/passive
between the two vscsi adapters, and then whatever you're doing for load
balance on the VIO servers.

Also, the VIO servers will need CPU to drive IOPS.  Check topas on the VIO
servers during your tests.

NPIV is preferred for lower latency through the VIO server, plus you can run
4-path multipath with load balancing on the client rather than having the
VIO server(s) muddle through.

---
Sincerely,
Josh-Daniel S. Davis

On Fri, Oct 8, 2010 at 11:27, Andrew Carlson  wrote:

> Hi all
>
> I am running TSM 6.2.1.1 on AIX V615 in an LPAR on a P770.  The LPAR
> has 6 shared CPUs, 12 virtual CPUs, and 64GB of memory.  There are 2
> VIO servers with 4 fiber channel connections to XIV storage for the DB
> and LOG, and 2 10Gbit Ethernet in each VIO in an Etherchannel
> configuration.  The storage pool is on Data Domain DD880's, 2 per AIX,
> 1 per instance.
>
> I am seeing consistently poor performance from this setup.  I have
> tested network from VIO to cloud, and LPAR to VIO, which seems fine.
> I tested LPAR to Data Domain, and things seem fine.  But, when backups
> are running (and I only have a few nodes there yet, this is a new
> setup), TSM doesn't seem to want to go over 20 to 30MB/s throughput.
> I tried backing up the TSM server over lo, and that was a little
> better at 50MB/s, but not screaming.  I tried using chunk of SAN as a
> disk pool ahead of the Data Domain, no change.  I am at my wit's end.
>
> If anyone has any ideas, please let me know.  Thanks.
>
> --
> Andy Carlson
> ---
> Gamecube:$150,PSO:$50,Broadband Adapter: $35, Hunters License: $8.95/month,
> The feeling of seeing the red box with the item you want in it:Priceless.


Re: De-dup ratio's

2010-11-12 Thread Josh Davis
I vote +1 for low dedupe ratios being due to precompressed data:
* Even MS Office files are actually ZIP files now.
* Windows keeps gigs of installers, which are mostly precompressed cabinets
* Many application data dumps are precompressed
* All practical media files are precompressed
* Many file servers contain a large amount of the above datatypes, plus tgz
or zip or rar or 7z or whatever as snapshots
* Many TSM environments enable client side compression, and a few enable
client-side encryption.
* TSM already does basic deduplication by using incremental strategy on a
file level.

If all of your OS images are clones of a golden image, then it helps a
little, even with noncompressible data.
Using gzip's option --rsyncable or any other content or dedupe aware options
can sometimes help a little
If using TSM client side compression (for bandwidth reasons), then TSM
client-side dedupe can see through that.
As the others have already stated, the best option is to separate out your
non-compressible data.

Dedupe is just compression with a very large dictionary.  Recompressing
doesn't work very well most of the time.  Even deduplicating multiple
versions of a document is tough with compressed XML formats.  You change the
file, recompress it, and the dictionary changes.  Because of that, the end
payload is vastly different.  For example, make a Word .docx that's a couple
of megs, modify it in several places, and re-save it.  Then try to zip the
two together; you won't get a 45% savings.
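You can see the effect with a toy fixed-size chunk hasher (a big
simplification of what real dedupe engines do; the data is invented): two
versions of a plain file share almost all their chunks, but once each version
is zlib-compressed they share essentially none.

```python
# Why recompressed data dedupes poorly: fixed-size chunk hashing finds
# nearly every duplicate chunk in the plain versions, but almost none
# once each version is compressed, because the whole stream shifts.
import hashlib
import zlib

def chunk_hashes(data, size=4096):
    return {hashlib.sha1(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)}

doc_v1 = b"".join(b"record %08d payload\n" % i for i in range(20000))
doc_v2 = doc_v1.replace(b"record 00000005", b"edited 00000005")  # tiny edit

plain = len(chunk_hashes(doc_v1) & chunk_hashes(doc_v2))
packed = len(chunk_hashes(zlib.compress(doc_v1)) & chunk_hashes(zlib.compress(doc_v2)))
print(plain, packed)  # most plaintext chunks match; almost no compressed ones do
```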


Re: SSL CPU

2010-09-27 Thread Josh Davis
Paul,
Did you find out a definitive answer on this?

Initial searching shows that the crypto cards work on AIX, and are accessible
through a standardized API that banks use.  The card itself seems to be a dual
PPC405e on card with a Linux service processor and DMA based communication back
to the OS.

However, I could not find anything indicating that TSM could make use of this.
A FITS/DCR through your account rep for TSM to support SSL acceleration through
the Crypto Coprocessor might be a good thing too.  The Crypto cards state they
support SSL acceleration, among other things.

The alternatives (stunnel, client side encryption) are less than desirable
compromises.
 With friendly regards,
Josh-Daniel S. Davis





From: Paul Zarnowski 
To: ADSM-L@VM.MARIST.EDU
Sent: Wed, September 8, 2010 9:49:09 AM
Subject: [ADSM-L] SSL CPU

I'm looking for recommendations & experiences on TSM SSL.

There is interest from our security group here in enabling SSL for TSM
sessions.  Naturally, the easiest plan for the security folks would be to just
enable it for everything.  There is guidance in the IBM documentation to only
use it where it is needed, and to consider adding server resources if you use
it.  I'm looking for something a little more quantifiable.  Are there any rules
of thumb out there that would be helpful?

Also, does anyone know if encryption chips are available on p-Series servers
that TSM SSL can make use of?

Thanks in advance.
..Paul



--
Paul Zarnowski                          Ph: 607-255-4757
Manager, Storage Services               Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801  Em: p...@cornell.edu


segfault: dsmserv 6.1.3.4 RHEL 5.5 x86_64

2010-06-14 Thread Josh Davis
I'm at 1h, 40m waiting on callback, but I thought I'd post this for people 
searching.

I found the issue because DBB to tape would crash, but BA STG to tape did not.  
Neither did BA DB T=F DEVC=FILECLASS.

I found out later that the customer loaded some tapes and checked them in, but 
it didn't click that they weren't labelled.

The gdb backtrace shows that it's crashing in ScsiAutoLabelVolume, and just 
after that, I hit a crash during BA STG writing to a new tape.

This is easy enough to work around (don't let autolabel run), but it's an
offering, and it shouldn't crash.

Here's my writeup:

ENV: dsmserv 6.1.3.4, RHEL 5.5 x86_64
PROBLEM: dsmserv crashes when autolabelling a tape
* Tapes processed with LABEL LIBVOL are fine.
* dsmserv drops core in the instance directory.
* No actlog and no stderr/stdout at crash
* Only a segfault indication in /var/log/messages but no details
* db2diag shows rc -50 from the dsm library and has a minicore
* DB2 stays running.
* Before and after, no more than 450mb of swap used.
* System has 16G of RAM & two 4-core Intel Xeon E5530 2.4GHz procs
* gdb backtrace on the core file shows:
#0  ScsiAutoLabelVolume (driveP=0x2aaab0108908, newLabelP=0x445b4eb0 "42L3",
readLabelP=0x445b3e80 "", isScratch=True, isBlank=True, createWorm=False)
at mmsscsi.c:19973
#1  0x009aa531 in ScsiMountVolume (volNameP=0x21640c50 "SCRTCH",
poolNameP=, mntDescP=,
callbackArgP=0x1f9d4ae8) at mmsscsi.c:19464
#2  0x0095580e in MmsMountVolume (volNameP=0x21640c50 "SCRTCH",
poolNameP=0x21641d10 "TAPEPOOL", mntDescP=0x445b5fa0,
callbackArgP=0x1f9d4ae8) at mms.c:1392
#3  0x00ce33aa in LtoOpen (args=0x21640c48) at pvrlto.c:288
#4  0x0091d675 in AgentThread (argP=0x21640bf8) at pvr.c:12986
#5  0x00c807f4 in StartThread (startInfoP=0x1e5c7a08) at pkthread.c:3325
#6  0x0032bcc0673d in start_thread () from /lib64/libpthread.so.0
#7  0x0032bc0d3d1d in clone () from /lib64/libc.so.6

ACTION TAKEN: isolated issue as above, created PMR with IBM RC.

ACTION PLAN: Waiting on callback.
* autolabel should not drop core.

TESTCASE: DB2 cores, dsmserv cores, db2diag, actlog



 With friendly regards,
Josh-Daniel S. Davis


Re: why create a 12TB LUN

2010-05-30 Thread Josh Davis
I scanned through, but maybe I missed it: was it ever determined what's
actually going onto this 12TB LUN?  Aside from our longings for kilobytes vs
terabytes, and concerns about 75M files in a filesystem, I didn't see it said
for sure.

An example:
I have a customer with 14TB on one sys.  It's something like 64 LUNs, though 
huge is huge.

Anyway, the 14TB was used for Oracle.  Their backup strategy was to copy it to 
an NFS mounted DataDomain box via 2 gigabit ports.

Last I checked, their backup times were on the order of 7 days, but only enough 
disk space for 3 days of logs.

I think they were moving to a 10gbit DD box and were hoping that would solve 
their issues.

Earlier recommendation was RMAN multiple channels into TSM, either direct fibre 
or through virtual ethernet.

They were a recent conversion from HP and were pretty established on OmniBack.  
TSM wasn't really an option (but neither was Omniback?)

Also, resource constraints were abound (Spec'd for 5 systems and a bunch of 
4-port HBAs, deployed as 4-systems with a bunch of single and a couple of dual 
port HBAs).



If journaling isn't an option, and it's multi-millions of files, then run many
parallel backups: either virtual mount points or separate filesystems.  A
separate scheduler for each filesystem, RESOURCEUTILIZATION 10 on each one,
and decent CPU and I/O bandwidth to handle the scans.
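A sketch of that setup as a dsm.sys fragment; VIRTUALMOUNTPOINT and
RESOURCEUTILIZATION are real client options, but the paths are invented:

```
* dsm.sys fragment (illustrative): carve one huge filesystem into several
* virtual mount points so a scheduler per "filesystem" can scan in parallel.
VIRTUALMOUNTPOINT /bigfs/part1
VIRTUALMOUNTPOINT /bigfs/part2
VIRTUALMOUNTPOINT /bigfs/part3
RESOURCEUTILIZATION 10
```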

I found that on the above customer's DMX, we could get about 96k IOPS
array-wide.  From the DB system, we could get 64k IOPS, but since 40-50 other
LPARs were deployed onto it, plus VMware, etc., the usable IOPS was around 28k.
A stat for a file is an I/O, so scanning a large number of files needs large
IOPS support.

If it's a *NIX, it might be worth using some sort of cluster filesystem.  Then 
you could put the metadata on decent SSDs for higher IOPS on scans...

Lots of options.
 With friendly regards,
Josh-Daniel S. Davis



- Original Message 
From: "Gill, Geoffrey L." 
To: ADSM-L@VM.MARIST.EDU
Sent: Wed, May 26, 2010 6:03:38 PM
Subject: [ADSM-L] why create a 12TB LUN

I'm guessing many of you will find this quite odd, I know I did, but I
had someone come to me and say they were going to ask for a 12TB LUN and
wanted to back it up. Without even mentioning the product they want to
use, obviously not TSM though, and I'm not even sure it would make
difference, how would you manage to get a 12TB LUN backed up daily. I
would expect it to be at least 75% full if not more, and even without
knowing what percentage of data changes on it, it would seem to me the
request seems strange. They're thinking of getting a VTL and backing up
through fiber direct, not across the network, but no idea which one or
what sort of throughput to expect.



Have any of you been approached with this sort of request and if so what
was your response? I'm sort of dumbfounded at this point since I've not
heard or seen this anywhere.

Thanks,



Geoff Gill
TSM/PeopleSoft Administrator

SAIC M/S-B1P

4224 Campus Pt. Ct.

San Diego, CA  92121
(858)826-4062 (office)

(858)412-9883 (blackberry)


Re: how to reuse a tape that holds exports

2010-04-15 Thread Josh Davis
You cannot append to export tapes.
You can export multiple nodes at once.

OPTION 1: Make a new export tape on a scratch volume
OPTION 2: DELETE VOLHIST and make a new export.
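For OPTION 2, the command would look something like this (syntax from memory;
check HELP DELETE VOLHISTORY first — this drops the server's record of all
export volumes written up to today, after which the tape can go back to
scratch):

```
delete volhistory type=export todate=today
```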

-JD


- Original Message 
From: yoda woya 
To: ADSM-L@VM.MARIST.EDU
Sent: Thu, April 15, 2010 9:58:26 AM
Subject: Re: [ADSM-L] how to reuse a tape that holds exports

Any assistance on this will be greatly appreciated.

There is still space on 000117 to append another export; why is the system
not letting me?




04/15/2010 10:52:57  ANR2017I Administrator YODA issued command: EXPORT
  NODE EPSILON-DOM-M fsid=5 filedata=all scratch=no
  preview=no dev=lto volumenames=000117  (SESSION: 1600)
04/15/2010 10:52:57  ANR0984I Process 70 for EXPORT NODE started in the
  BACKGROUND at 10:52:57. (SESSION: 1600, PROCESS: 70)
04/15/2010 10:52:57  ANR0609I EXPORT NODE started as process 70. (SESSION:
  1600, PROCESS: 70)

04/15/2010 10:52:57  ANR0402I Session 1630 started for administrator YODA
  (Server) (Memory IPC). (SESSION: 1600)
04/15/2010 10:52:57  ANR1409W Volume 000117 already in use - skipped.
(SESSION:
  1600, PROCESS: 70)
04/15/2010 10:52:57  ANR0692E EXPORT NODE: Out of space on sequential media,
  scratch media could not be mounted. (SESSION: 1600,
  PROCESS: 70)
04/15/2010 10:52:57  ANR0985I Process 70 for EXPORT NODE running in the
  BACKGROUND completed with completion state FAILURE at
  10:52:57. (SESSION: 1600, PROCESS: 70)

On Tue, Apr 13, 2010 at 5:10 PM, Shawn Drew <
shawn.d...@americas.bnpparibas.com> wrote:

> You need to delete it from the volume history   "help del volhist"
>
> Regards,
> Shawn
> 
> Shawn Drew
>
>
>
>
>
> Internet
> yodaw...@gmail.com
>
> Sent by: ADSM-L@VM.MARIST.EDU
> 04/13/2010 02:34 PM
> Please respond to
> ADSM-L@VM.MARIST.EDU
>
>
> To
> ADSM-L
> cc
>
> Subject
> [ADSM-L] how to reuse a tape that holds exports
>
>
>
>
>
>
> I do not want the exports but would like to reuse the tape as scratch..
> what
> is the procedure I must undergo
>
>
>
> This message and any attachments (the "message") is intended solely for
> the addressees and is confidential. If you receive this message in error,
> please delete it and immediately notify the sender. Any use not in accord
> with its purpose, any dissemination or disclosure, either whole or partial,
> is prohibited except formal approval. The internet can not guarantee the
> integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
> not therefore be liable for the message if modified. Please note that
> certain
> functions and services for BNP Paribas may be performed by BNP Paribas RCC,
> Inc.
>


Re: BMR for Exchange server - Fastback

2010-02-01 Thread Josh Davis
If TBMR is actually Fastback, that's interesting. Fastback is FilesX, a product 
and company that IBM purchased April 21, 2008.

FilesX is a block level filesystem incremental backup product for MS Windows.  
They can use VSS for 2003 OS and 2005 Exchange/SQL.  For older Exchange/SQL it 
will initiate a quiesce and will be filesystem aware and will manage the 
backups properly.

Individual Exchange mailbox restore works by mounting the backup over the network
as a drive.  The Exchange agent can then access the Exchange DB on disk, pull out
individual messages, and save them to the running Exchange server.

For normal files, you can just copy them out of the mounted drive (looks like 
local, RW, NTFS but changes are lost on unmount).

Full restore overmounts the filesystem with the FastBack server copy.  At
that point it's similar to a mirror, with one copy on the FB server and one
on the local disk.  You can mount the filesystem and start your apps as soon as you
initiate the restore.  Any data overwritten locally will not be restored over, and
any data requested is moved to the front of the restore queue.  No dismount is required
at the end of the restore.

Bare Metal Recovery uses a PE-mode boot CD to initiate the recovery.

FastBack DR is done through replication to a DR hub via FTP.  Repository
files are sequential-access files on disk.

To integrate this with TSM, you use TSM to back up the Fastback repository, 
whether it's a local repository, or the repository of a DR hub.  Fastback comes 
with scripts to integrate with TSM and a couple of other vendors' enterprise 
products.

With friendly regards,
Josh-Daniel S. Davis





From: Johnny Lea 
To: ADSM-L@VM.MARIST.EDU
Sent: Wed, January 27, 2010 1:20:50 PM
Subject: [ADSM-L] BMR for Exchange server

I'm lost trying to come up with a bare metal restore product for our Exchange 
servers.
Cristie's CBMR says it treats Exchange data as plain files with nothing to 
handle the internal structure of Exchange.
Not sure about Cristie's TBMR. (my IBM re-seller tells me he talked to Cristie 
and they told him that their TBMR is IBM Fastback BMR.  That sounds strange.  
Did IBM buy them?)
I haven't found anyone yet at IBM who can tell me how Fastback for Bare Metal 
Restore works with Exchange.
Acronis...I know nothing about.

I'd love something that would work with my TSM environment.
Can anyone suggest anything?

Thanks.
Johnny


Individuals who have received this information in error or are not authorized 
to receive it must promptly return or dispose of the information and notify the 
sender. Those individuals are hereby notified that they are strictly prohibited 
from reviewing, forwarding, printing, copying, distributing or using this 
information in any way.


Re: TSM Clients on Red Hat Enterprise Linux

2010-02-01 Thread Josh Davis
Nothing specific to Linux or Redhat.

General things:
If your TSM server is *NIX, you can increase TCPWINdowsize to 262144 on
the server and use a TCPBUFfersize of 131072.
Older Windows clients will have problems with a TCPWINdowsize over 63.
Enabling jumbo frames also helps with CPU load from network transmission.
TXNBytelimit and TXNGroupmax can be cranked up to improve the transmit 
performance.





From: Werner Korsten 
To: ADSM-L@VM.MARIST.EDU
Sent: Wed, January 27, 2010 10:23:18 AM
Subject: [ADSM-L] TSM Clients on Red Hat Enterprise Linux

Hello all,

The business is in the process of building a number of new Linux clients,
and we will then need to back these up. We are new to Linux, most of the
estate that we backup are Windows, AIX & Netware clients.

I've been able to install the TSM clients on a few test servers and
these have been backing up without many issues presenting themselves. We
currently use TSM Server 5.5.2.1, and TSM client  5.5.1.4 installed on Red
Hat Enterprise Linux Server release 5.3.

Is anyone else running TSM Clients on Red Hat? I'm looking for suggestions
so we can optimise the option files for these new servers. I've had a look
at a performance redbook IBM have previously pointed me to, but it doesn't
contain anything specific for Linux. IBM have advised there doesn't seem
to be too many problems with performance on Linux, though I would still
like to make sure we have it as streamlined as possible.

Any ideas?

Thanks,
Werner

Werner Korsten
IS Graduate Trainee

http://www.standardlife.com




This e-mail is confidential and, if you are not the intended recipient,
please return it to us and do not retain or disclose it. We filter and
monitor e-mails in order to protect our system and the integrity,
confidentiality and availability of e-mails. We cannot guarantee that
e-mails are risk free and are not responsible for any related damage or
unauthorised alteration of e-mails by third parties after sending.

For more information on Standard Life group, visit our website
http://www.standardlife.com/

Standard Life plc (SC286832), Standard Life Assurance Limited* (SC286833)
and Standard Life Employee Services Limited (SC271355) are all registered
in Scotland at Standard Life House, 30 Lothian Road, Edinburgh EH1 2DH.
*Authorised and regulated by the Financial Services Authority. 0131 225
2552. Calls may be recorded/monitored. Standard Life group includes
Standard Life plc and its subsidiaries.

Please consider the environment. Think - before you print.


Re: ?anyone using TSM to backup Panasus PanFS?

2010-02-01 Thread Josh Davis
You could attach the TSM server to the clustered filesystem and allow it to 
perform the backup from its own client.  This might get you better bandwidth if 
it's done properly.

MEMORYEFFICIENT YES will scan one directory at a time per producer
thread.  This keeps it from eating up all system memory, at the expense of
some scan speed.  Memory usage is 400-1200 bytes per file in the list to be
backed up.
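
As a rough worked example of the 400-1200 bytes-per-file figure above (the file
count here is a made-up illustration, not a measurement):

```python
# Estimate client memory consumed by the incremental backup file list,
# using the 400-1200 bytes-per-file range quoted above.
def scan_memory_bytes(num_files, bytes_per_file):
    return num_files * bytes_per_file

files = 10_000_000  # hypothetical 10-million-file filesystem
low = scan_memory_bytes(files, 400)
high = scan_memory_bytes(files, 1200)
print(f"{low / 1e9:.1f} GB to {high / 1e9:.1f} GB")  # 4.0 GB to 12.0 GB
```

So a 10-million-file scan can need 4-12 GB just for the file list, which is
why MEMORYEFFICIENT matters on large filesystems.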

RESOURCEUTIL 10 is the maximum, and gets you only 4-6 producer threads.  If you
have good bandwidth and a large number of execution threads, it may be beneficial
to run parallel backups.

Parallel backups can be used to spread the workload across more threads
and/or more systems.  virtualnodename or asnode can be used if
necessary.

VIRTUALMOUNTPOINT can be used to get around scanning areas which should
not be backed up rather than excluding those areas.

To prevent retransmits, you could perform backups of a replica or a snapshot.

Incremental by Date is valuable for your busy days.  It won't expire
files, but it saves a bunch of time scanning.


Limitations for path/filename depth:

AIX, HP-UX, Solaris:
   File_space_name               1024
   Path_name or directory_name   1023
   File_name                      256

Linux:
   File_space_name               1024
   Path_name or directory_name    768
   File_name                      256

Windows XP/2000/2003:
   File_space_name               1024
   Path_name or directory_name    248
   File_name                      248

With friendly regards,
Josh-D. S. Davis
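
Those limits can be checked mechanically before attempting a backup.  A minimal
sketch (the limit values are transcribed from the table above; the helper name
and platform keys are made up for illustration):

```python
# Check a path against the TSM name-length limits listed above.
LIMITS = {
    "aix":     {"path": 1023, "file": 256},   # also HP-UX, Solaris
    "linux":   {"path": 768,  "file": 256},
    "windows": {"path": 248,  "file": 248},
}

def violations(full_path, platform):
    """Return a list of limit violations for one path (directory part + file name)."""
    limits = LIMITS[platform]
    directory, _, name = full_path.rpartition("/")
    problems = []
    if len(directory) > limits["path"]:
        problems.append("path too long")
    if len(name) > limits["file"]:
        problems.append("file name too long")
    return problems

print(violations("/data/" + "a" * 300, "windows"))  # ['file name too long']
```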





From: James R Owen 
To: ADSM-L@VM.MARIST.EDU
Sent: Tue, January 26, 2010 5:09:53 PM
Subject: [ADSM-L] ?anyone using TSM to backup Panasus PanFS?

Yale uses Panasus PanFS, a massive parallel storage system, to store research 
data generated from HPC clusters.  In considering feasibility to backup PanFS 
using TSM,
we are concerned about whether TSM is appropriate to backup and restore:

1. very large volumes,
2. deep  subdirectory hierarchy  with 100's to 1000's of sublevels,
3. large numbers of files within individual subdirectories,
4. much larger numbers of files within each directory hierarchy.

Are there effective maximum limits for any of the above, beyond which
TSM becomes inappropriate to effectively perform backups and restores?

Please advise about the feasibility and any configuration recommendation(s)
to maximize PanFS backup and restore efficiency using TSM.

Thanks for your help.
--
jim.o...@yale.edu   (w#203.432.6693, c#203.494.9201, h#203.387.3030)


Re: sector-based incremental backup of filesystem

2010-01-23 Thread Josh Davis
For now, the best you could hope for would be an image mode snapshot backup 
going into a deduplicated storage pool.


Alternatively, if your application has a TSM connect agent (such as TDP for 
Databases), then you could use that to get the application data incrementally 
rather than pulling the blocks.

Otherwise, if you have a home-made application, it shouldn't be hard to write 
an incremental image backup tool.

Use OS and/or Array commands to:
* Quiesce the filesystem
* Lock writes
* snapshot
* resume r/w
* present the snapshot devices

Then, using the code from adsmpipe, hash procedures, and standard file I/O:
* Try to pull an existing hash table out of TSM.  Backup 0 would have none.
* Read in the snapshot raw logical volume and generate hashes for every block.
* For any block without a hash, write out in binary format the sector offset, 
number of blocks, and then the raw data
* Once the whole "image" is written, then you can save the hash table as a 
second file.
* If you wanted to be fancy, you could index by hash and gain some manner of 
block-level dedupe.

Finally, you'd destroy the filesystem snapshot if necessary.
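
The hash-and-diff loop above can be sketched roughly as follows.  This is only an
illustration of the algorithm, not working adsmpipe integration; the block size
and function name are arbitrary choices for the sketch:

```python
import hashlib

BLOCK_SIZE = 1024 * 1024  # 1 MiB blocks; an arbitrary choice for this sketch

def incremental_image(device_path, prev_hashes):
    """Hash each block of the snapshot device; collect changed blocks.

    prev_hashes is the hash table from the previous backup ({} for backup 0).
    Returns (changed_blocks, new_hashes) -- in a real tool both would be
    streamed to the server (e.g. via adsmpipe) instead of held in memory.
    """
    changed, new_hashes = [], {}
    with open(device_path, "rb") as dev:
        offset = 0
        while True:
            block = dev.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            new_hashes[offset] = digest
            if prev_hashes.get(offset) != digest:
                changed.append((offset, block))  # offset + raw data, per above
            offset += len(block)
    return changed, new_hashes
```

Backup 0 sends everything (empty hash table); each later backup sends only blocks
whose hash changed, then saves the new hash table as a second file.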

Hash and file I/O code is readily available on the internet, as is adsmpipe.
I would recommend static linking if your UNIX of choice prefers it.  On Linux,
adsmpipe is very sensitive to libc changes.

As for something like FastBack Mounter, you'd need to write a block level 
device driver, which is more complicated.




From: Mehdi Salehi 
To: ADSM-L@VM.MARIST.EDU
Sent: Sat, January 23, 2010 1:34:40 AM
Subject: [ADSM-L] sector-based incremental backup of filesystem

Hi,
Is there any way for sector-based incremental image backup in Unix systems?
B/A client helps incremental-by-date, but it is inefficient for some
applications. What I am looking for is a feature like what FastBack or
UltraBac present for Windows.

Thanks


REMOVE=UNTILEEFULL on 3584 with library sharing

2007-11-01 Thread Josh Davis

In other environments, I've used MOVE DRM REMOVE=untileefull with
scsi/fibre attached 3584 and it works fine.

In this environment, we have a library manager and 4 library clients.
The 3584 has 20x 3592-E05 drives and 64 virtual I/O slots.

MOVE DRM REMOVE=BULK works fine from the library clients and the
library manager.

When trying REMOVE=UNTILEEFULL from the library clients:
A) A request posts on the library manager stating that all I/O slots are
full or inaccessible and it can't move the one specific volser out.

B) This request blocks all other robot activity from the LM and all of the
LCs.

C) CANCEL PROC of any MOVE DRM, CHECKIN or LABEL process will not
cancel.

D) CANCEL REQ on one request will simply pop up for the next tape.  If
MOVE DRM was cancelled, then that LC's process will free up and the next
LC will cause a request to post on the LM.

The Virtual I/O slots look and operate like normal I/O slots to TSM, and
REMOVE=BULK works for CHECKOUT and MOVE DRM on all of the LCs and the LM.


At this point, I'm not sure if the issue is that library sharing can't
handle UNTILEEFULL, or if the virtual I/O slots are a problem.

I was hoping to hear if anyone else was doing something similar, or had
similar problems.

TSM is 5.3.4.2 on AIX all/around.
Atape is 10.6
AIX is 5.3.0.0-TL06


Library manager still shows REMOTE

2007-08-16 Thread Josh Davis

I've seen a few hits where the only fix was to
DELETE VOLHIST TYPE=REMOTE FORCE=YES
but where it was never identified HOW the volumes got stuck.

I found one way at my customer site.

MOVE DRM notifies the library manager that the volume is TYPE=REMOTE.
The LOCATION is filled with the servername.

If you SET SERVERNAME on a library client AFTER it's already moved DRM
volumes offsite, there is no way to refresh the LOCATION on the library
manager.

AUDIT LIBR doesn't work because it's not in the library.
UPD VOLHIST won't update location on a REMOTE volume,
though it will say the command completed.

Preemptively deleting the volume history can be a problem too,
so manual handling is the only way out.

I've got a PMR open to see if UPD VOLHIST can be enhanced with FORCE=YES
also, or otherwise to allow REMOTE volumes to have location updated.

-Josh-Daniel S. Davis
Certified TSM Implementor
CATE AIX/pLPAR


Misleading errors

2007-06-27 Thread Josh Davis

Things I've run into today which took a little bit of tinkering because
searching didn't come up with anything...

##

During DSMSERV RESTORE DB, while using a manual library, if your DEFINE
PATH for the drive is incorrect, TSM will report an error about the
LIBRARY's path rather than the drive's path:

ANR8500E No paths are defined for library MANLIB in device configuration
information file.

##

Also, when restoring, and the source system drives were IBM, but
the target drives are not, and you don't define TSM devices,
you'll get:

ANR8880W Device type of drive DRIVE01 in library MANLIB was
determined to be GENERIC_TAPE.
ANR4619I Snapshot database backup series 70 using device class
LTO.
ANR4622I   Volume 1: 42.
ANR4634I Starting point-in-time database restore to date
06/22/07 07:03:10.
ANR0300I Recovery log format started; assigned capacity 1020
megabytes.
ANR2032E RESTORE DB: Command failed - internal server error
detected.
ANRD ThreadId <0> issued message 2032 from:
ANRD ThreadId <0>  0x000100018368 outMsgf
ANRD ThreadId <0>  0x00010070bbcc AdmRestoreDb
ANRD ThreadId <0>  0x00010021ab8c admRestoreDatabase
ANRD ThreadId <0>  0x00015240 RestoreDb
ANRD ThreadId <0>  0x00012f14 main



This is similar to the error mentioned on the IBM site:

ANRD icrest.c(2079): ThreadId<0> Rc=33 reading header record.
ANR2032E RESTORE DB: Command failed - internal server error detected

http://www-1.ibm.com/support/docview.wss?uid=swg21167860

###

If you're attempting DSMC RESTORE and you receive:

ANS1107E Invalid option/value: '-PITDATE=06/23/2007'

You may have DATEFORMAT set in your dsm.sys (dsm.opt for windows).

Dateformat 2 uses 2 digit year
Dateformat 3 is ISO standard:  2007-06-23
(-MM-DD, dashes, not slashes)


##

Sincerely,
Josh-Daniel S. Davis
TSM/AIX Consultant


Re: inputs for expansion

2007-04-09 Thread Josh Davis

All of the useful information is in the VOLHISTORY and SUMMARY tables.

Most likely, you only keep 30 days of summary, and probably 60 days of
volume history.

A subset of the information would be included in the accounting log if you
have accounting enabled.

As Lawrence said before, external applications can be used to maintain
historical data.

Q STAT will tell you how long you're keeping summary records, and whether
accounting is enabled.

For accounting record format, see:
http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/index.jsp?topic=/com.ibm.itsmfdt.doc/anrai5335.htm

TPC, TSM Manager, Bocada, and TSM Operational Reporting all can generate
reports for you.  Operational Reporting would probably require that you
generate your own queries in order to get useful reports.  I'm not sure if
Admin Center has any useful or configurable queries yet.

-Josh Davis




On Mon, 9 Apr 2007, Avy Wong wrote:


Date: Mon, 9 Apr 2007 16:57:16 -0400
From: Avy Wong <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: inputs for expansion

Lawrence,
 I understand that there are other factors like speed of data growth,
sizing my tape library, # of slots, numbers of servers/nodes, size of data
in TSM's DB2, policies( data retention time), etc., I would like to compare
year 05, 06, 07 . Where would I be able to find these historical data?

Avy Wong
Business Continuity Administrator
Mohegan Sun
1 Mohegan Sun Blvd
Uncasville, CT 06382
(860) 862-8164
cell (860) 961-6976






From: Lawrence Clark <[EMAIL PROTECTED]WAY.STATE.NY.US>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED].EDU>
Date: 04/09/2007 04:15 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] inputs for expansion
Please respond to: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED].EDU>


low cost products like tsm manager keep historical growth.


[EMAIL PROTECTED] 04/09/07 4:31 PM >>>

Hello,
 What factors should I be looking at if I want to see how fast my
TSM
is growing yearly. What data should I collect to justify its yearly
growth?
year to year comparison? Any suggestions ?


Thanks,
Avy Wong
Business Continuity Administrator
Mohegan Sun
1 Mohegan Sun Blvd
Uncasville, CT 06382
(860) 862-8164
cell (860) 961-6976


**

The information contained in this message may be privileged and
confidential and
protected from disclosure. If the reader of this message is not the
intended recipient, or
an employee or agent responsible for delivering this message to the
intended recipient,
you are hereby notified that any dissemination, distribution, or copy
of this
communication is strictly prohibited. If you have received this
communication in error,
please notify us immediately by replying to the message and deleting it
from your
computer.
**



The information contained in this electronic message and any attachments to
this message are intended for the exclusive use of the addressee(s) and may
contain information that is confidential, privileged, and/or otherwise
exempt from disclosure under applicable law.  If this electronic message is
from an attorney or someone in the Legal Department, it may also contain
confidential attorney-client communications which may be privileged and
protected from disclosure.  If you are not the intended recipient, be
advised that you have received this message in error and that any use,
dissemination, forwarding, printing, or copying is strictly prohibited.
Please notify the New York State Thruway Authority immediately by either
responding to this e-mail or calling (518) 436-2700, and destroy all copies
of this message and any attachments.






Re: HSM for Windows: FileAttributesFilter

2007-04-09 Thread Josh Davis

Sorry for the streak posting.
Apparently, I've answered my own question again for this one.

HSM 5.4 automatically recalls the file if you try to compress the
stubfile, and if you migrate a compressed file, it is decompressed first.

Docs should probably reflect this rather than warning about compressing
files.
-Josh


On Mon, 9 Apr 2007, Josh Davis wrote:


Date: Mon, 9 Apr 2007 16:31:38 -0500
From: Josh Davis <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: HSM for Windows: FileAttributesFilter

Has anyone tried setting the FileAttributesFilter to 800, 4800, or 4806?

Will it omit if ANY bit is set, or only if ALL bits are set?
If ALL, then can you set multiple filter lines in registry?

The reason is that this filesystem may contain compressed or encrypted
files and I'd like those to not migrate since there are known issues.

-Josh Davis




Re: TDPO and encryption

2007-04-09 Thread Josh Davis

Experience, no.
Support, maybe.  TSM supports 3592 drive encryption.  It's important to
make sure you have some way, external to TSM, to store the keys so you can
get at the tapes in a recovery scenario.

AES encryption from the API should work fine, though I'm not sure you have
a way of initially encrypting the key.  I think the server can generate
and manage the keys for you, but I've not gotten that to work on baclient,
let alone API.

-Josh


On Thu, 22 Mar 2007, Allen S. Rout wrote:


Date: Thu, 22 Mar 2007 15:14:29 -0400
From: Allen S. Rout <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TDPO and encryption


On Thu, 22 Mar 2007 10:24:32 +0100, Hans Christian Riksheim <[EMAIL PROTECTED]> 
said:



does anyone have any experience with rman/tdpo and encryption?


The last docs I saw about it, when distilled down, said "It might
work.  If it doesn't, then it's not supported.".

Underwhelming, but I understand that much of the API workflow is just
plain different than the B/A client flow.

- Allen S. Rout




Re: Windows TSM CAD errors,

2007-04-09 Thread Josh Davis

I recently ran into this sort of error when I tried to replicate the
registry entries rather than fail over a cluster.

Add/Remove will not clean up your service entries.

You can try dsmcutil, but I ended up having to create a new scheduler,
then search for the scheduler name and save its registry entries.  Then, when
I went to remove, both showed up, so I was able to delete the bad one,
reimport the good one, then delete the good one.

There are some legacy registry entries that can't be removed with regedit
and these seem to be the ones in the way.

-Josh Davis

On Fri, 16 Mar 2007, Timothy Hughes wrote:


Date: Fri, 16 Mar 2007 12:19:57 -0400
From: Timothy Hughes <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Windows TSM CAD errors,

Richard thanks,

I tried to remove the TSM services and install them again but for some
reason
getting the message Service is already installed. I think I am going to
be forced
to use the Add/Remove windows utility to remove the TSM client.

Tim

Richard Sims wrote:


See IBM Technote 1137681 for basic handling, and possibly search for
further circumstances.

   Richard Sims





HSM for Windows: FileAttributesFilter

2007-04-09 Thread Josh Davis

Has anyone tried setting the FileAttributesFilter to 800, 4800, or 4806?

Will it omit if ANY bit is set, or only if ALL bits are set?
If ALL, then can you set multiple filter lines in registry?

The reason is that this filesystem may contain compressed or encrypted
files and I'd like those to not migrate since there are known issues.
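
For reference, those filter values read as hex masks of the Win32 FILE_ATTRIBUTE_*
bits; a small decode (the bit values are the standard Win32 constants, and the
mapping of 800/4800/4806 to attribute names follows from them):

```python
# Decode an HSM FileAttributesFilter hex value as Win32 FILE_ATTRIBUTE_* bits.
ATTRS = {
    0x0002: "HIDDEN",
    0x0004: "SYSTEM",
    0x0800: "COMPRESSED",
    0x4000: "ENCRYPTED",
}

def decode(mask):
    return [name for bit, name in sorted(ATTRS.items()) if mask & bit]

print(decode(0x800))   # ['COMPRESSED']
print(decode(0x4800))  # ['COMPRESSED', 'ENCRYPTED']
print(decode(0x4806))  # ['HIDDEN', 'SYSTEM', 'COMPRESSED', 'ENCRYPTED']
```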

-Josh Davis


HSM for Windows additional notice

2007-04-09 Thread Josh Davis

Notes about recall:
   Right click, properties
   If it's on a directory, it's ok.
   If it's on a migrated file, it forces a recall


Notes about DSMFILEINFO:
   dsmfileinfo doesn't seem to accept wildcards
   dsmfileinfo strips the backslash off of the end of quoted portions of
the filename



Something I have a question about:

If you have HSM back-up before migrate, it uses the BA client with a list
of files.

Once done, HSM logs in to see if the files were backed up. This means that
you cannot use a different node name for HSM without setting DSM_CONFIG.

Am I misunderstanding?  This seems to be a problem, since if you migrate,
you're guaranteeing you can only have one copy of the files.  The next
backup will back up the stub, and eventually, your real version will fall
off the other end.

This is all fine if the HSM config never gets damaged and you use NOLIMIT
for the default archive copygroup, but that seems really limiting.


-Josh-Daniel S. Davis


Re: HSM for Windows additional notice

2007-04-09 Thread Josh Davis

One correction - recall on properties happens for executable file types.

Still need info on backup before migrate seeming to require HSM to use the
same node name as the baclient.

I'm at HSM 5.4.0.3 and TSMC 5.4.0.2


On Mon, 9 Apr 2007, Josh Davis wrote:


Date: Mon, 9 Apr 2007 15:56:43 -0500 (CDT)
From: Josh Davis <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Subject: HSM for Windows additional notice

Notes about recall:
  Right click, properties
  If it's on a directory, it's ok.
  If it's on a migrated file, it forces a recall


Notes about DSMFILEINFO:
  dsmfileinfo doesn't seem to accept wildcards
  dsmfileinfo strips the backslash off of the end of quoted portions of the
filename



Something I have a question about:

If you have HSM back-up before migrate, it uses the BA client with a list of
files.

Once done, HSM logs in to see if the files were backed up. This means that
you cannot use a different node name for HSM without setting DSM_CONFIG.

Am I misunderstanding?  This seems to be a problem, since if you migrate,
you're guaranteeing you can only have one copy of the files.  The next backup
will back up the stub, and eventually, your real version will fall off the
other end.

This is all fine if the HSM config never gets damaged, and you use NOLIMIT
for the default archive copygroup, but that seems really limiting.


-Josh-Daniel S. Davis



Re: Shrinking scratch pools - tips?

2007-04-09 Thread Josh Davis

Chip,
I have seen many options here, but the first, most simple thing to do is
to make sure that TSM actually knows anything at all about your missing
tapes.

First, find out how many tapes you are supposed to have, total, based on
what has been purchased, minus what has been destroyed.

Second, find out what tapes are known by TSM:
SELECT count(*) from LIBVOLUMES where STATUS='Scratch'
SELECT count(*) from VOLHISTORY where TYPE not like 'STG%'
SELECT COUNT(*) from VOLUMES

If you are missing volumes at this point, then volumes are either being
checked out as Scratch, or volume history is being deleted while the
volumes are checked out.  This commonly happens with DELETE VOLHIST T=DBB
or for DBS when you're using DRM.

If everything is fine at this point, then you're simply consuming more
tapes.  You would want to see where tapes are now being used frequently
that were not used in the past.

Creative queries off of VOLHISTORY or simply dumping it and looking for a
growing trend in STGNEW in one pool might help.

You might be able to see something from Q VOL if one stgpool has more
volumes than you recall.

Or as was already stated by others in the list, Q LIBV and look for many
Private volumes with a blank LAST USE.  This would indicate, most likely,
that the volume was not labelled before being checked in.
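
The bookkeeping above amounts to simple arithmetic.  A sketch with made-up counts
(in practice the three "known" numbers come from the SELECT statements earlier in
this post, and the purchased/destroyed totals from your own records):

```python
# Reconcile tape counts: every purchased tape should be accounted for as
# scratch, in volume history (DB backups, exports, ...), or as a storage
# pool volume.  All counts below are hypothetical.
def missing_tapes(purchased, destroyed, scratch, volhist, stgpool):
    known = scratch + volhist + stgpool
    return (purchased - destroyed) - known

print(missing_tapes(purchased=500, destroyed=20,
                    scratch=80, volhist=40, stgpool=340))  # 20
```

A nonzero result means tapes TSM knows nothing about: check for volumes checked
out as scratch, or volume history deleted while volumes were checked out.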


On Fri, 23 Mar 2007, Bell, Charles (Chip) wrote:


Date: Fri, 23 Mar 2007 09:41:06 -0500
From: "Bell, Charles (Chip)" <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Shrinking scratch pools - tips?

Since this a GREAT place for info, etc., I though I would ask for
tips/how-to's on tracking down why my scratch pools are dwindling, for
LTO/LTO2/VTL. My guess is I have a couple of clients that are sending out a
vast amount of data to primary/copy. But without a good reporting tool, how
can I tell? Expiration/reclamation runs fine, and I am going to run a check
against my Iron Mountain inventory to see if there is anything there that
should be here. What else would you guys/gals look at?  :-)  Thanks in
advance!



God bless you!!!

Chip Bell
Network Engineer I
IBM Tivoli Certified Deployment Professional
Baptist Health System
Birmingham, AL







-
Confidentiality Notice:
The information contained in this email message is privileged and

confidential information and intended only for the use of the
individual or entity named in the address. If you are not the
intended recipient, you are hereby notified that any dissemination,
distribution, or copying of this information is strictly
prohibited. If you received this information in error, please
notify the sender and delete this information from your computer
and retain no copies of any of this information.





Re: HSM for Windows Programming Question

2007-04-09 Thread Josh Davis

Kelly,
I didn't see a reply for you on this.

I'm sure there is an HSM API, and I don't think it's documented outside of
IBM.  Evidence of that is in the HSM for Windows which doesn't use the
Space Management API at all.

The type of information you are looking for is very much server-internal.
There's not a client-side way to query for the exact location of your
file. I believe the metadata you're looking to acquire would require the
use of the admin API.

Further, acquiring the data isn't just a simple query.  It's a very
complex set of dependencies.  One example pass-through might be:

You would need to obtain the object ID of the item to be
recalled, and then find out where all copies of that object existed.
Then, you would need to determine the device type.  Then, you would need
to determine whether there was a reservation queued up for that device
class.

If the class were available for access (file, tape), or if it were
not under heavy load (no more than 1-3 I/O operations per disk), then you
could use some table of your own creation to guess at the access latency
for the device type of the device class.

This still wouldn't take into account the underlying structure of the
data.  For example, if it's a VTS, or if the disk volumes are on shared
arrays, or even shared LUNs, then  your access times wouldn't match.  You
could monitor the internals from your own code to get a feeling for the
latencies of the individual system, but that's getting into speculative
code.
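
A sketch of the kind of speculative estimator described above; the latency table,
throughput figures, and device-class names are all invented for illustration:

```python
# Estimate total retrieve time for a batch, given per-device-class guesses
# for first-byte latency (seconds) and throughput (MB/s).  All numbers
# below are illustrative assumptions, not measurements.
LATENCY = {"DISK": (0.05, 100.0), "FILE": (1.0, 80.0), "LTO": (90.0, 40.0)}

def batch_estimate(files):
    """files: list of (device_class, size_mb).  Serial worst case, in seconds."""
    total = 0.0
    for devclass, size_mb in files:
        first_byte, mb_per_sec = LATENCY[devclass]
        total += first_byte + size_mb / mb_per_sec
    return total

est = batch_estimate([("LTO", 400.0), ("DISK", 50.0)])
print(round(est, 2))  # 90 + 400/40 + 0.05 + 50/100 = 100.55
```

Even this ignores the underlying structure (VTS, shared arrays, shared LUNs),
which is exactly the caveat in the paragraph above.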



On Mon, 12 Mar 2007, Kelly Lipp wrote:


Date: Mon, 12 Mar 2007 14:02:18 -0600
From: Kelly Lipp <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: HSM for Windows Programming Question

Gang,

I have a potential customer interested in using HSM for Windows.  They
have an application that manages the file data and want to manage
retrieves from the HSM.  Specifically, they want to be able to determine
where in the TSM storage pool hierarchy the file exists so they can
determine the length of time required to do the retrieve.  In addition,
they are interested in being able to "batch" retrieves and generate
information about how long the "batch" would take and then either do it
in real-time or push it off to some other time and let the end user know
what's going on.

Does anybody know if there is an API to the HSM client and if so, is it
documented?  And if not, will the dsmfileinfo and dsmfind programs yield
some of this information in a programmable way?

Any help, as always, is appreciated!

Thanks,

Kelly J. Lipp
VP Manufacturing & CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777
[EMAIL PROTECTED]





Re: HSM for Windows

2007-04-09 Thread Josh Davis

The API used on the filesystem side really shouldn't have any effect on
the backend implementation.

The reason it's different is because Tivoli/IBM purchased the product from
a separate company.

As such, HSM for Windows is not derived from HSM for AIX.


The Redbook doesn't clarify all of the info, but it does indicate NOLIMIT
is recommended.   The actual Tivoli whitebook clearly states many
limitations and exceptions.

The regular BA client will only back up the stub (NTFS junction) once a file is
migrated, and if the stub's permissions change, it is backed up again.

The BA client doesn't integrate with nor is it aware of the behaviour of
the HSM for Windows client.

Backup before Migrate should also be used.  This will back up the file
via the HSM client.

HSM for Windows clients should have a separate policy domain.
There should be only the default management class.  The copygroup for
backup and archive should both be set to NOLIMIT.

In the future, there may be the ability to delete versions of migrated
files from the HSM GUI.



On Thu, 29 Mar 2007, Helder Garcia wrote:


Date: Thu, 29 Mar 2007 06:08:49 -0300
From: Helder Garcia <[EMAIL PROTECTED]>
Reply-To: "ADSM: Dist Stor Manager" 
To: ADSM-L@VM.MARIST.EDU
Subject: Re: HSM for Windows

I think the majority of tools that implements truly hierarchical space
management must rely on DM API - http://en.wikipedia.org/wiki/DMAPI , which
is not supported by NTFS, that's why HSM for Windows is a "specific"
implementation, a really separate product from space management for Unix.

On 3/28/07, Weeks, Debbie <[EMAIL PROTECTED]> wrote:


Thanks.  We are HIGHLY disappointed with this product.  I have reviewed
the documentation again, and still do not find this little caveat
spelled out clearly anywhere in what we have.  I used to administer
IBM's SMS/HSM on MVS back in the day, so I guess my expectation were
high.  No wonder the license was so cheap!

Does anyone know if there are plans to make this a truly hierarchical
space management tool for Windows clients?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Remco Post
Sent: Wednesday, March 28, 2007 2:16 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: HSM for Windows

Weeks, Debbie wrote:
> I have not sent anything to this list in quite some time, so please be

> gentle if this topic has been thoroughly discussed in this forum
before.
> We are running TSM 5.3.4.0 on AIX, and recently purchased HSM to
> archive from our Windows  2003 file servers.
>
> In following the instructions to implement HSM we found that the
> documentation pointed to settings on the management class for HSM.
> However, once we implemented we found that those settings were not
> being recognized, and data was going directly to tape as indicated by
> the Archive settings.  We contacted support, and their response was
> that we needed to use the Archive settings to point our HSM jobs to a
> storage pool.  This doesn't seem right though, as we want files to
> initially go to disk, then migrate to tape according to a last access
> date, not based on a migration threshold on the storage pool.
>
> Does anyone out there have a hint as to why we cannot use the HSM
> settings?  Any help is much appreciated.

HSM for Windows is designed to use the TSM archive API to store and
recall files. Therefore you must use an archive mgmtclass to manage
those files. Of course this can be a different mgmtclass than the one
used to manage your normal archive activities.

--
Met vriendelijke groeten,

Remco Post

SARA - Reken- en Netwerkdiensten  http://www.sara.nl
High Performance Computing  Tel. +31 20 592 3000Fax. +31 20 668 3167
PGP Key fingerprint = 6367 DFE9 5CBC 0737 7D16  B3F6 048A 02BF DC93 94EC

"I really didn't foresee the Internet. But then, neither did the
computer industry. Not that that tells us very much of course - the
computer industry didn't even foresee that the century was going to
end." -- Douglas Adams





--
Helder Garcia




hidden flags to EXPIRE INV and CLEANUP EXPTABLE

2006-04-20 Thread Josh Davis

UNDOCUMENTED OPTIONS FOR EXPIRE INVENTORY:
There are two undocumented/unsupported options for EXPIRE INV:
BEGINNODEID and ENDNODEID.

These accept the decimal node number of a node and can be used to
expire a specific node's filespaces, or a specific range.



WHY WOULD YOU EVER WANT TO USE THESE?
EXPIRE INV won't check for filespace lock before parsing a filespace.

As such, if you're running expiration, and a node is backing up a
filespace when expire inventory gets to it, expire inventory will wait
indefinitely.

When this happens, CANCEL EXPIRATION or CANCEL PROC will register as
"Cancel Pending" but will hang there until the lock is released.

Officially there's supposed to be a resource timeout, but IBM wasn't
able to give details on how long this is.




HOW TO FIND THE NODE NUMBERS:
Node numbers are sequential, starting at 1, and are in REG_TIME order.
Deletions leave gaps.

The short way would be a SELECT statement.  Supposedly this can be
done, but I couldn't figure out the column name.  IBM doesn't like to
give info regarding undocumented/unsupported options since that might
make them liable to support or defend them in the future.

The long way is to use SHOW commands.  Use SHOW OBJDIR to find the
btree node for the Nodes table.  This SHOULD be 38.

SHOW NODE 38 (hopefully) will show the top level of the tree.
On average, there are about 11 second-level leaf nodes per first-level
leaf node.

If you do SHOW NODE on each subtree, and save these to a file, you'll
have the raw data for the nodes table.

In the data section, field 1 is the node number in hex, and field 2 is
the node name in all-caps ascii.
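Converting those hex node numbers to decimal (for use with BEGINNODEID/
ENDNODEID) is easy to script.  A minimal sketch in Python, assuming the
two-field data layout described above; actual SHOW NODE output varies by
server level, so treat the parsing as illustrative:

```python
def parse_node_line(line):
    """Return (node_id_decimal, node_name) from a dump data line.
    Assumes field 1 is the node number in hex and field 2 is the
    node name, as described in the post."""
    fields = line.split()
    node_id = int(fields[0], 16)   # field 1: node number in hex
    node_name = fields[1]          # field 2: node name, all-caps ascii
    return node_id, node_name

print(parse_node_line("1A FILESERVER01"))  # -> (26, 'FILESERVER01')
```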



OTHER USES FOR THE NODEID:
This can be used with SHOW LOCKS and SHOW THREADS to find out which
node is holding the lock preventing expire inv from continuing.  From
there, you can kill a session so that expire inv can continue or be
cancelled.

This can also be converted to decimal so you can run EXPIRE INV
BEGINNODEID=10 ENDNODEID=20 or similar to operate only on a specific subset
of nodes.  This could be used to avoid nodes which have long-running
transactions, to quick-expire a huge bunch of data that was just
deleted, or to set up scheduled expirations for heavy-expire nodes.
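Since deletions leave gaps in the node-ID sequence, it can help to
collapse the surviving IDs into contiguous ranges before building the
commands.  A hypothetical helper (the command string just mirrors the
option names above; the node IDs are made up):

```python
def id_ranges(node_ids):
    """Collapse node IDs into contiguous (begin, end) ranges,
    skipping the gaps left by deleted nodes."""
    ranges = []
    for nid in sorted(node_ids):
        if ranges and nid == ranges[-1][1] + 1:
            ranges[-1] = (ranges[-1][0], nid)   # extend current range
        else:
            ranges.append((nid, nid))           # start a new range
    return ranges

# One EXPIRE INVENTORY command per contiguous block of node IDs:
for begin, end in id_ranges([1, 2, 3, 7, 8, 12]):
    print(f"EXPIRE INVENTORY BEGINNODEID={begin} ENDNODEID={end}")
```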

These same flags work on CLEANUP EXPTABLE.  Since there is no way to
cancel CLEANUP EXPTABLE, running it on a small subset of nodes can
help if you suspect you're not expiring all that you should be, but
don't want to risk having to shut down TSM to abort it when you're 50
million objects in and it's a week after you started it.


WHY I'M SHARING THIS INFO:
I've opened a DCR requesting EXPIRE INVENTORY be given an option to
allow detection and skipping of locked filespaces, and that it should
be implemented without killing expiration or the session/process
holding the filespace lock.

The FITS request number is MR0420061821 if you or anyone wants to be
added to the notify/me-too list for this.

If your sales rep doesn't know where/how to get to FITS, it's on
D03DB004.boulder.ibm.com.  I think it's under m_dir (marketing).

This was way longer than I anticipated, but seemed useful enough to
risk sharing.

--
Josh


End of Service

2002-03-21 Thread Josh Davis

Just in case no one noticed, TSM v4.1 goes end of service at the end of
June.  If you find that you don't have time to get upgraded by then,
you may want to sign up for extended support:

https://www.tivoli.com/secure/Tivoli_Electronic_Support/prodextension.nsf/SupExt?OpenForm

-Josh



HOWTO: TSM Server Quickinst

2002-02-07 Thread Josh Davis

I whipped this up because I run into so many people who want to figure
it out on their own, but really are just lost in the slew of new
concepts.  Here is a basic list of things to do when setting up your tsm
server the first time.  Suggestions and corrections are welcome.

-Josh


---

Basic In-Order TSM Server Setup Checklist
-

Physical Installation:
   Install physical devices
   Power on system
   Install o/s maintenance
   Install tsm server and drivers
   Define devices to the O/S
  349x, 357x, 358x and 359x use Atape/IBMTAPE/lmcpd
  all others use TSM drivers

Make a larger database and log:
   define dbvolume(make space)
   define dbcopy  (mirror elsewhere)
   extend db  (tell tsm to use the new space)
   define logv(make space)
   define logc(mirror elsewhere)
   extend log (tell tsm to use the new space)
   define spacetrigger(auto-grow db and log)
   define dbbackuptrigger (only if SET LOGMODE ROLLFORWARD)

Ensure proper licensing:
   register license   (see the server/bin directory)

Storage setup inside TSM:
   define library  (top level of tape storage)
   define devclass (points to a library)
   define drive(points to a library)
   define stg  (points to a device class)
   define vol  (points to a devclass and storage pool)
   label libvol(prepares a tape for TSM use)
   checkin libv(makes tape show up in Q LIBVOL)

For Policy/Node Information:
   define domain   (top level of policy information)
   define policyset(contained within a policy domain)
   define mgmtclass(contained within policyset)
   define copygroup T=BACK (contained within a management class)
   define copygroup T=ARCH (contained within a management class)
   activate policy (only one policyset active per domain)
   register node   (belongs to a policy domain)

Typical Storage Pool Hierarchy:
   Copygroup "destination" is disk pool
   Disk pool "NEXT" is tape pool
   An extra tape pool of type "COPYPOOL"

Other Important Things to Look Into:
   Administrator's Guide, Working With Network of TSM Servers
  Virtual Volumes
  Library Sharing
   Administrator's Guide, Chapter 1: Introduction
  Overview of storage hierarchy and TSM concepts
   Administrator's Guide, Protecting The Server
  Protection and recovery of the system
   Administrator's Guide, Disaster Recovery Manager
  Integrated, licensed tool for server protection
   DEFINE SCHED T=C
  client backup schedules
   DEFINE SCHED T=A
  server administrative schedules (don't overlap these)
 BACKUP STG (sync up copy pool)
 UPDATE STG RECLAIM=50 (reclaim free space from tapes)
 UPDATE STG RECLAIM=100 (turn off reclamation)
 UPDATE STG HI=0 LO=0  (migrate the disk pool to tape)
 UPDATE STG HI=90 LO=70  (migrate only during overflow)
 BACKUP DB (TSM database is critical to server function)

In all procedures WATCH FOR ERRORS.

If you have questions, see http://www.tivoli.com for Documentation.
   TSM Administrator's Guide is procedural information
   TSM Administrator's Reference is command syntax
   TSM Messages Guide is for deciphering ANR/ANS messages

If you can't figure it out
   http://www.adsm.org - ADSM User's Group (searchable)
   http://www.tivoli.com - Online Knowledge Base
   IBM support: 800-237-5511 x 8



HOWTO: TSM Policy Information Demystified

2002-02-07 Thread Josh Davis

---
TSM Policy Settings
---

POLICY DOMAIN - This is a container for policy and scheduling info
*BACKRETention* is a fallback value for any files which
  have been backed up under the specified policy domain,
  but for which there is now a lack of an active policy
  set.
*ARCHRETention* is a fallback value for any files which
  have been archived under the specified policy domain,
  but for which there is now a lack of an active policy
  set.

   POLICY SET - This is a set of management classes within a POLICY
DOMAIN.  Only the active version has any effect.  The active
version is created by issuing ACTIVATE POLICYSET.
After activating, you may edit the original set without
affecting the active copy.

  MANAGEMENT CLASS - This is a classification for particular sets
 of objects.  The smallest unit that can be bound is an
 entire archive for archiving (ARCHMC option) or, for a
 backup, a single file, directory, filespace, or
 everything (INCLUDE option).
 There can be many management classes per policy set,
 but only those in the active policy set may be
 referenced.
 **The additional parameters for this structure are used
only for HSM migration.

 BACKUP COPYGROUP - This is the structure which defines
 retention of backup data.  These settings are the most
 confusing because their operation overlaps and may
 require some to be set to "NOLIMIT" to attain the
 desired effect.

*VERSIONS DATA EXISTS* - This value specifies the maximum
 number of versions of a file which may be retained.
 If this is exceeded by additional backups or imports,
 then the next expiration run on the server will remove
 the oldest versions until there are only this many
 left.
 **This may be "NOLIMIT" or a whole number.

*RETAIN EXTRA VERSIONS* - This value specifies the maximum
 number of days to retain an inactive copy when there is
 an active copy in the database.
 **This may be "NOLIMIT" or a whole number.

*RETAIN ONLY VERSION* - This value specifies the maximum
 number of days to retain the only inactive version of
 a file once there are no active versions.
 **This may be "NOLIMIT" or a whole number.
 **This does not affect active copies which are always
retained.

*VERSIONS DATA DELETED* - This is the maximum number of
 inactive versions of a file to retain once the file no
 longer has an active version.

*DESTINATION* - This specifies the storage pool to which
 newly backed up data will go if it is bound to this
 management class during backup.
 **Rebound data will not move to a new pool
   automatically.
 **Data may be moved from this pool through migration or
MOVE DATA commands.
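The interplay of the four retention settings above is easier to see
modelled as code.  The sketch below is only an illustration of the
rules as described here (NOLIMIT is modelled as None), not the server's
actual expiration algorithm:

```python
NOLIMIT = None  # stands in for the "NOLIMIT" setting

def surviving_inactive(ages, active_exists,
                       verexists, retextra, verdeleted, retonly):
    """Which inactive backup versions survive the next expiration
    run.  `ages` = days each inactive version has been inactive,
    newest first.  An active version, if present, is always kept."""
    if active_exists:
        # RETAIN EXTRA VERSIONS expires old inactive copies...
        kept = ages if retextra is NOLIMIT else [a for a in ages if a <= retextra]
        # ...and VERSIONS DATA EXISTS caps the total count
        # (the active copy counts toward the limit).
        if verexists is not NOLIMIT:
            kept = kept[:verexists - 1]
        return kept
    # No active version: VERSIONS DATA DELETED caps the count,
    # RETAIN ONLY VERSION governs the newest survivor, and
    # RETAIN EXTRA VERSIONS governs the rest.
    kept = ages if verdeleted is NOLIMIT else ages[:verdeleted]
    out = []
    for i, age in enumerate(kept):
        limit = retonly if i == 0 else retextra
        if limit is NOLIMIT or age <= limit:
            out.append(age)
    return out

print(surviving_inactive([5, 40, 90], active_exists=True,
                         verexists=3, retextra=30,
                         verdeleted=1, retonly=60))  # -> [5]
```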

 ARCHIVE COPYGROUP - This is the structure which defines
 retention of archive data.

*RETAIN VERSION* - This is the only retention parameter
 for archives, and is the number of days that the entire
 archive will be kept.

*DESTINATION* - This specifies the storage pool to which
 newly archived data will go if it is bound to this
 management class during archive operation.
 **Rebound data will not move to a new pool
   automatically.
 **Data may be moved from this pool through migration or
MOVE DATA commands.

   Schedule - This is a time/date/frequency setting for when commands
  may be automatically run by the tsm server either on the
  tsm server (administrative) or on the tsm client (client).

Client Association - a separate entity that binds clients to schedules.

Node - A separate entity which defines a client and allows access.




---
CAVEATS
---

NOTE: Once an archive is made, it may not be rebound.  In other words,
you may not specify a different management class for it.  It IS possible
to change the copy-group retention period for this management class, and
then re-activate the policyset.  This will change its retention;
however, be cautious with this, as you will affect ANY data in the same
management class.

NOTE: Management classes with the same name but whic