Re: Advice for archiving 80 billion of small files.

2017-01-26 Thread Bo Nielsen
Hi All,

Thank you for all the replies. They were very helpful to me, but not to
management. I think they want something else: because many of the files have
Japanese or Chinese characters, it is difficult to index the tar files.

Regards

Bo  

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: 20. januar 2017 15:43
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Advice for archiving 80 billion of small files.

Do you need to recover files individually? If so, then image backup (at least
on its own) won't be a good option. One thing you could do is tar up chunks
(maybe a million files each) and archive/back up those chunks. Keep a catalog
(ideally a database with indexes) of which files are in which tarballs; then
when you go to restore, you only have to recover 1/8 of your data to
get one file.
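Skylar's chunk-and-catalog idea can be sketched roughly like this (a minimal illustration only, assuming Python with SQLite for the catalog; the chunk size, schema, and helper names are mine, not anything from the thread):

```python
import os
import sqlite3
import tarfile

CHUNK_SIZE = 3  # in practice something like 1,000,000 files per chunk


def archive_in_chunks(src_dir, out_dir, db_path):
    """Pack files into numbered tarballs; record file -> tarball in SQLite."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS catalog "
               "(path TEXT PRIMARY KEY, tarball TEXT)")
    files = sorted(
        os.path.join(root, name)
        for root, _, names in os.walk(src_dir) for name in names
    )
    for i in range(0, len(files), CHUNK_SIZE):
        tar_name = os.path.join(out_dir, f"chunk-{i // CHUNK_SIZE:06d}.tar.gz")
        with tarfile.open(tar_name, "w:gz") as tar:
            for path in files[i:i + CHUNK_SIZE]:
                tar.add(path)
                db.execute("INSERT OR REPLACE INTO catalog VALUES (?, ?)",
                           (path, tar_name))
    db.commit()
    return db


def find_tarball(db, path):
    """Restore-time lookup: which tarball holds this file?"""
    row = db.execute("SELECT tarball FROM catalog WHERE path = ?",
                     (path,)).fetchone()
    return row[0] if row else None
```

Only the tarballs are archived to TSM, so the server database sees thousands of objects instead of billions. Non-ASCII file names (e.g. Japanese or Chinese) are not a problem for the catalog as long as paths are handled as UTF-8 throughout.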


--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine


Re: Advice for archiving 80 billion of small files.

2017-01-20 Thread Rick Adamson
Bo,
What is the total space occupied by this data?

-Rick 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Bo 
Nielsen
Sent: Friday, January 20, 2017 9:18 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Advice for archiving 80 billion of small files.



Re: Advice for archiving 80 billion of small files.

2017-01-20 Thread Graham Stewart
Whenever we have a situation like this, we always tar.gz into one file first
before archiving to TSM: either the top-level directory or multiple
lower-level directories, if it makes sense to do so based on what the
retrieval expectations might be.

It is also very useful to tar.gz verbosely, write that listing out to a file,
and archive that file along with the .gz. That way you can later get a
reference of what was in the archive without having to retrieve and extract
the .gz file, and you can demonstrate to auditors that the files were
archived successfully.
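Graham's sidecar-listing trick can be sketched like this (a minimal illustration, assuming Python; on the command line, `tar -tzvf archive.tar.gz > archive.manifest.txt` achieves much the same thing):

```python
import tarfile


def archive_with_manifest(src_paths, archive_path):
    """Create a tar.gz plus a plain-text manifest of its contents.

    The manifest is meant to be archived alongside the .gz, so "what was
    in this archive?" can be answered later (for yourself or an auditor)
    without retrieving and extracting the .gz itself.
    """
    manifest_path = archive_path + ".manifest.txt"
    with tarfile.open(archive_path, "w:gz") as tar:
        for path in src_paths:
            tar.add(path)
    # Re-read the finished archive and write one line per member,
    # roughly the information `tar -tzvf` would print.
    with tarfile.open(archive_path, "r:gz") as tar, \
            open(manifest_path, "w", encoding="utf-8") as manifest:
        for member in tar.getmembers():
            manifest.write(f"{member.size}\t{member.name}\n")
    return manifest_path
```

Both the .gz and the returned manifest file would then be archived to TSM together.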


--
Graham Stewart
Network and Storage Services Manager
Information Technology Services
University of Toronto Libraries
416-978-6337


If there is some high level structure...

On Jan 20, 2017, at 09:22, Bo Nielsen <boa...@dtu.dk<mailto:boa...@dtu.dk>> wrote:



Re: Advice for archiving 80 billion of small files.

2017-01-20 Thread Skylar Thompson
Do you need to recover files individually? If so, then image backup (at
least on its own) won't be a good option. One thing you could do is tar up
chunks (maybe a million files each) and archive/back up those chunks. Keep a
catalog (ideally a database with indexes) of which files are in which
tarballs; then when you go to restore, you only have to recover 1/8 of your
data to get one file.


--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine


Re: Advice for archiving 80 billion of small files.

2017-01-20 Thread Sasa Drnjevic
On 20.1.2017. 15:31, Zoltan Forray wrote:
> TAR and/or GZip?


That's how we do it here.

Regards,

--
Sasa Drnjevic
www.srce.unizg.hr




Re: Advice for archiving 80 billion of small files.

2017-01-20 Thread Yi, Ung
Good one. :-)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Anderson Douglas
Sent: Friday, January 20, 2017 9:27 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Advice for archiving 80 billion of small files.

Try image backup



Re: Advice for archiving 80 billion of small files.

2017-01-20 Thread Zoltan Forray
TAR and/or GZip?

On Fri, Jan 20, 2017 at 9:27 AM, Anderson Douglas <ander.doug...@gmail.com>
wrote:

> Try image backup
>



--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
Xymon Monitor Administrator
VMware Administrator (in training)
Virginia Commonwealth University
UCC/Office of Technology Services
www.ucc.vcu.edu
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://infosecurity.vcu.edu/phishing.html


Re: Advice for archiving 80 billion of small files.

2017-01-20 Thread Anderson Douglas
Try image backup



Advice for archiving 80 billion of small files.

2017-01-20 Thread Bo Nielsen
Hi all,

I need advice.
I must archive 80 billion small files, but as I see it that is not possible,
since it would fill the TSM database to about 73 TB.
The filespace is mounted on a Linux server.
Is there a way to pack/zip the files, so that there are fewer of them?
Has anybody tried this?

Regards,

Bo Nielsen


IT Service



Technical University of Denmark

IT Service

Frederiksborgvej 399

Building 109

DK - 4000 Roskilde

Denmark

Mobil +45 2337 0271

boa...@dtu.dk<mailto:boa...@dtu.dk>


Advice on backup and restore of virtualised vCenter Server / Platform Services Controller instances

2016-06-21 Thread Schofield, Neil (Storage & Middleware, Backup & Restore)
It's interesting to see how Chapter 12 of the vSphere Installation and Setup 
Guide has changed between Updates 1 and 2 of vSphere 6.0:

Update 1: 
https://pubs.vmware.com/vsphere-60/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-601-installation-setup-guide.pdf
Update 2: 
https://pubs.vmware.com/vsphere-60/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-602-installation-setup-guide.pdf

Update 2 now makes explicit the possibility of using a third party product 
which employs VADP (such as IBM Spectrum Protect?) as an alternative to vSphere 
Data Protection for backing up VMs containing a vCenter Server and/or Platform 
Services Controller. However the backup and recovery steps within are still 
very much focused on using vSphere Data Protection.

So if I did want to use Spectrum Protect for VMware, are there any specific 
guidelines relating to VMs containing vCenter Server / Platform Services 
Controller? For instance VMware only support full image backups of such VMs and 
require that all vCenter Server and Platform Services Controller instances are 
backed up at the same time. Do IBM have any best practices as to how these 
recommendations should be implemented with Spectrum Protect?

There are probably also specific considerations when recovering such a VM using 
Spectrum Protect for VMware - eg recovering the VM containing the vCenter 
Server when no vCenter Server is available. Please can anyone point me at any 
helpful IBM docs which cover this scenario?

Regards

Neil Schofield
IBM Spectrum Protect SME
Backup & Recovery | Storage & Middleware | Central Infrastructure Services | 
Infrastructure & Service Delivery | Group IT
LLOYDS BANKING GROUP






Advice for configuration

2014-01-01 Thread Robert Ouzen
Hi to all

In a few days I will get new storage dedicated to TSM 7.1 dedup.

I wonder if I can get some tips on how to configure it.

The storage is an IBM V3200 with 12 x 3 TB disks, one of them a hot spare.

Questions like:

· Which RAID level?

· Define a single storage pool or multiple storage pools?

· How to configure the device class (capacity of volumes)?
   Etc., etc.

My TSM servers are V7.1.0 on Windows 2008 R2, 64-bit.

Any advice will be appreciated.

Best regards and happy new year,

Robert


very urgent advice from you people ! regarding DB % 99 full

2011-12-08 Thread TEN
Please help me out: I need someone who can help me immediately. It's a
production site.

Here is my information; please tell me the step-by-step process. Note that
the copies are mirrored.


Available  Assigned   Maximum    Maximum    Page     Total       Used        Pct   Max.
Space      Capacity   Extension  Reduction  Size     Usable      Pages       Util  Pct
(MB)       (MB)       (MB)       (MB)       (bytes)  Pages                         Util
---------  ---------  ---------  ---------  -------  ----------  ----------  ----  ----
204,784    204,784    0          1,032      4,096    52,424,704  52,160,005  99.5  99.7


Volume Name        Copy    Volume Name  Copy       Volume Name  Copy
(Copy 1)           Status  (Copy 2)     Status     (Copy 3)     Status
-----------------  ------  -----------  ---------  -----------  ---------
/dev/rmtlk_dblv02  Sync'd               Undefined               Undefined
/dev/rmtlk_dblv01  Sync'd               Undefined               Undefined
/dev/rmtlk_dblv03  Sync'd               Undefined               Undefined
/dev/rmtlk_dblv04  Sync'd               Undefined               Undefined


# lsvg tsmdbvg
VOLUME GROUP: tsmdbvg VG IDENTIFIER: 000f26c1d9000128228b3448
VG STATE: active PP SIZE: 256 megabyte(s)
VG PERMISSION: read/write TOTAL PPs: 1794 (459264 megabytes)
MAX LVs: 256 FREE PPs: 630 (161280 megabytes)
LVs: 13 USED PPs: 1164 (297984 megabytes)
OPEN LVs: 13 QUORUM: 2 (Enabled)
TOTAL PVs: 2 VG DESCRIPTORS: 3
STALE PVs: 0 STALE PPs: 0
ACTIVE PVs: 2 AUTO ON: yes
MAX PPs per VG: 32512
MAX PPs per PV: 1016 MAX PVs: 32
LTG size (Dynamic): 256 kilobyte(s) AUTO SYNC: no
HOT SPARE: no BB POLICY: relocatable


# lsvg -l tsmdbvg
tsmdbvg:
LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT
mtlk_fbdblv01 raw 2 2 1 open/syncd N/A
mtlk_loglv01 raw 26 26 2 open/syncd N/A
mtlk_tstdb01 raw 100 100 2 open/syncd N/A
mtlk_loglv02 raw 26 26 2 open/syncd N/A
mtlk_dblv02 raw 200 200 2 open/syncd N/A
mtlk_dblv01 raw 200 200 2 open/syncd N/A
mtlk_tstlog01 raw 4 4 2 open/syncd N/A
mtlk_dblv03 raw 200 200 2 open/syncd N/A
mtlk_tstdb02 raw 100 100 2 open/syncd N/A
mtlk_tstlog02 raw 4 4 2 open/syncd N/A


I need one more volume, like /dev/rmtlk_dblv05. How do I define it, and how
do I define a mirror copy?

Please help me out.

+--
|This was sent by mytechiel...@gmail.com via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--


Re: very urgent advice from you people ! regarding DB % 99 full

2011-12-08 Thread Meunier, Yann
Halt all activity on your server!

Disable sessions!

Then add your device. But your DB is already too large!

Create the volume with smitty: Storage, then LVM, then Logical Volumes, then
Add a Logical Volume. Don't forget to create the device in raw format!

Then, to allocate your device(s) on the TSM side:

For a primary volume:
define dbvolume /dev/rmtlk
For a synchronized copy volume:
define dbcopy your_dbvolprimary your_dbvolcopy

After defining them, run EXTEND DB for the amount of space you want.
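Put together, the whole sequence might look like the following (a sketch only: the LV name, sizes, and copy-volume name are hypothetical, and the syntax should be verified against the TSM 5.x Administrator's Reference before use):

```
# AIX: create a new raw logical volume in the DB volume group
# (smitty path: Storage -> Logical Volume Manager -> Logical Volumes
#  -> Add a Logical Volume)
mklv -y mtlk_dblv05 tsmdbvg 200      # 200 PPs x 256 MB = 51,200 MB, like dblv01-04

# TSM administrative client (dsmadmc), as an admin with system privilege
disable sessions                     # quiesce client activity first
define dbvolume /dev/rmtlk_dblv05    # primary DB volume (raw character device)
define dbcopy /dev/rmtlk_dblv05 /dev/rmtlk_dblv05copy   # hypothetical mirror volume
extend db 51200                      # make the new space usable by the DB
enable sessions
```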

Best regards,

Yann MEUNIER

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of TEN
Sent: Thursday, 8 December 2011 08:00
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] very urgent advice from you people ! regarding DB % 99 full




Re: snapdiff advice

2011-09-22 Thread Frank Ramke
Hello Paul and David,

A frequently asked questions website has been created for snapshot
differencing.
We have attempted to answer the questions you have recently raised.

https://www.ibm.com/developerworks/wikis/display/tivolistoragemanager/Snapshot+Differencing+FAQ


Re: snapdiff advice

2011-09-22 Thread Shawn Drew
I'd like to add a question to the FAQ if possible.  I'll ask it frequently
if it helps getting it added!

The documentation explicitly states that vfiler (MultiStore) support is not
available. Is support for this somewhere on the roadmap, or is there
something on the NetApp side that prevents it?

Regards,
Shawn

Shawn Drew








This message and any attachments (the message) is intended solely for
the addressees and is confidential. If you receive this message in error,
please delete it and immediately notify the sender. Any use not in accord
with its purpose, any dissemination or disclosure, either whole or partial,
is prohibited except formal approval. The internet can not guarantee the
integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
not therefore be liable for the message if modified. Please note that certain
functions and services for BNP Paribas may be performed by BNP Paribas RCC, Inc.


Re: snapdiff advice

2011-09-22 Thread Remco Post
Hi All,

I've got another question, maybe not for the FAQ ;-)

Has anybody got this working?

And maybe for the FAQ:

Are there TSM server requirements? Does SnapDiff require the TSM server to be 
at version 6, or is a 5.5 server supported?


On 22 sep. 2011, at 17:31, Shawn Drew wrote:

 I'd like to add a question to the FAQ if possible.  I'll ask it frequently
 if it helps getting it added!
 
 The documentation explicitly states that vfiler support (Multistore) is
 not supported.  Is support for this somewhere on the roadmap? or is there
 something on the Netapp side that prevents this?
 
 Regards,
 Shawn
 
 Shawn Drew
 
 
 
 
 
 Internet
 ra...@us.ibm.com
 
 Sent by: ADSM-L@VM.MARIST.EDU
 09/22/2011 09:07 AM
 Please respond to
 ADSM-L@VM.MARIST.EDU
 
 
 To
 ADSM-L
 cc
 
 Subject
 Re: [ADSM-L] snapdiff advice
 
 
 
 
 
 
 Hello Paul and David,
 
 A frequently asked questions website has been created for snapshot
 differencing.
 We have attempted to answer the questions you have recently raised.
 
 https://www.ibm.com/developerworks/wikis/display/tivolistoragemanager/Snapshot
 
 +Differencing+FAQ
 
 
 
 This message and any attachments (the message) is intended solely for
 the addressees and is confidential. If you receive this message in error,
 please delete it and immediately notify the sender. Any use not in accord
 with its purpose, any dissemination or disclosure, either whole or partial,
 is prohibited except formal approval. The internet can not guarantee the
 integrity of this message. BNP PARIBAS (and its subsidiaries) shall (will)
 not therefore be liable for the message if modified. Please note that certain
 functions and services for BNP Paribas may be performed by BNP Paribas RCC, 
 Inc.

-- 
Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622


Re: snapdiff advice

2011-09-22 Thread Shawn Drew
I did have this working on TSM 5.5 with a TSM 6.2 AIX client (NFS shares).

We just have a subset of volumes on vfilers, which is preventing me from
adopting this solution whole-hog.


Regards,
Shawn

Shawn Drew





Remco Post <r.p...@plcs.nl> wrote:

Hi All,

I've got another question, maybe not for the FAQ ;-)

Has anybody got this working?

And maybe for the FAQ:

Are there TSM server requirements? Does SnapDiff require the TSM server to
be at version 6, or is a 5.5 server supported?




Re: snapdiff advice

2011-09-22 Thread Remco Post
Good to hear. We have major issues with a Windows client for CIFS shares:
the client keeps complaining about the right to back up the file
permissions, without any clear indication of what is wrong. We even tried
running the client as a domain admin with full access rights to everything,
and still no go.

On 22 sep. 2011, at 20:57, Shawn Drew wrote:

 I did have this working on TSM 5.5 with a TSM 6.2 AIX client (nfs shares).
 
 
 We just have a subset of volumes that are on vfilers that is preventing
 me from adopting this solution whole-hog
 
 

-- 
Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622


Re: snapdiff advice

2011-09-22 Thread Huebschman, George J.
I have the same issue... at least it sounds the same, with messages like
these (paraphrased):
ANS4013E Invalid File Handle, and ANS4007E Access to the object denied.

I have been manually toggling virus scanning off and triggering manual
backups of the errant files. That gets them backed up.
The size of the file seems to be a factor... but not always.

George Huebschman
Storage Support Administrator
Legg Mason, LMTS

When you have a choice, spend your money where you would want to work if
it was your only choice, because soon it might be.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Remco Post
Sent: Thursday, September 22, 2011 3:30 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] snapdiff advice

good to hear. We have major issues with a Windows client for CIFS
shares. The client keeps complaining about the right to back up the
file permissions, without any clear indication of what is wrong. We even
tried running the client as a domain admin with full access rights to
everything, and still no go.

On 22 sep. 2011, at 20:57, Shawn Drew wrote:

 I did have this working on TSM 5.5 with a TSM 6.2 AIX client (nfs
shares).


 We just have a subset of volumes that are on vfilers that is
preventing
 me from adopting this solution whole-hog


 Regards,
 Shawn
 
 Shawn Drew





 Internet
 r.p...@plcs.nl

 Sent by: ADSM-L@VM.MARIST.EDU
 09/22/2011 01:25 PM
 Please respond to
 ADSM-L@VM.MARIST.EDU


 To
 ADSM-L
 cc

 Subject
 Re: [ADSM-L] snapdiff advice






 Hi All,

 I've got another question, maybe not for the FAQ ;-)

 Has anybody got this working?

 And maybe for the FAQ:

 Are there TSM server requirements? Does SnapDiff require the TSM
server to
 be at version 6, or is a 5.5 server supported?


 On 22 sep. 2011, at 17:31, Shawn Drew wrote:

 I'd like to add a question to the FAQ if possible.  I'll ask it
 frequently
 if it helps getting it added!

 The documentation explicitly states that vfiler support (Multistore)
is
 not supported.  Is support for this somewhere on the roadmap? or is
 there
 something on the Netapp side that prevents this?

 Regards,
 Shawn
 
 Shawn Drew





 Internet
 ra...@us.ibm.com

 Sent by: ADSM-L@VM.MARIST.EDU
 09/22/2011 09:07 AM
 Please respond to
 ADSM-L@VM.MARIST.EDU


 To
 ADSM-L
 cc

 Subject
 Re: [ADSM-L] snapdiff advice






 Hello Paul and David,

 A frequently asked questions website has been created for snapshot
 differencing.
 We have attempted to answer the questions you have recently raised.



https://www.ibm.com/developerworks/wikis/display/tivolistoragemanager/Snapshot+Differencing+FAQ




 --
 Met vriendelijke groeten/Kind Regards,

 Remco Post
 r.p...@plcs.nl
 +31 6 248 21 622




--
Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622

IMPORTANT:  E-mail sent through the Internet is not secure. Legg Mason 
therefore recommends that you do not send any confidential or sensitive 
information to us via electronic mail, including social security numbers, 
account numbers, or personal identification numbers. Delivery, and or timely 
delivery of Internet mail is not guaranteed. Legg Mason therefore recommends 
that you do not send time sensitive 
or action-oriented messages to us via electronic mail.

This message is intended for the addressee only and may contain privileged or 
confidential information. Unless you are the intended recipient, you may not 
use, copy or disclose to anyone any information contained in this message. If 
you have received this message in error, please notify the author by replying 
to this message and then kindly delete the message. Thank you.


Re: snapdiff advice

2011-09-22 Thread Cameron Hanover
We had to set the virus scanner to ignore the ~snapshot directory to get rid of 
the bulk of our access denied problems.
We still have an ongoing problem with wide UTF characters (tickets open with 
IBM to resolve it), despite OnTap 7.3.4 and TSM 6.2.3.1.  If I recall 
correctly, we were seeing those invalid file handle messages related to this, 
as well.  We're also getting a fair number of ANS1304W: An active backup
version could not be found, the cause of which I haven't been able to figure out.
To the original poster's question, we're running snapdiff from a 6.2 client to 
a 5.4 server.  IBM will tell you to piss off if you have any problems, though, 
unless you can reproduce it against a 6.1 or 6.2 server.

-
Cameron Hanover
chano...@umich.edu

Fill with mingled cream and amber, 
I will drain that glass again. 
Such hilarious visions clamber 
Through the chamber of my brain — 
Quaintest thoughts — queerest fancies 
Come to life and fade away; 
What care I how time advances? 
I am drinking ale today.
—-Edgar Allan Poe

On Sep 22, 2011, at 3:44 PM, Huebschman, George J. wrote:

 I have the same issue...at least it sounds the same, with messages like
 this
 (Paraphrased)
 ANS4013E Invalid File Handle, and ANS4007E Access to the object denied
 
 I have been manually toggling virus scanning off and triggering manual
 backups of the errant files.  That gets them backed up.
 Size of the file seems to be a factor...but not always.
 
 George Huebschman
 Storage Support Administrator
 Legg Mason, LMTS
 
 When you have a choice, spend your money where you would want to work if
 it was your only choice, because soon it might be.
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 Remco Post
 Sent: Thursday, September 22, 2011 3:30 PM
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: [ADSM-L] snapdiff advice
 
 good to hear. We have major issues with a Windows client for CIFS
 shares. The client keeps complaining about the right to back up the
 file permissions, without any clear indication of what is wrong. We even
 tried running the client as a domain admin with full access rights to
 everything, and still no go.
 
 On 22 sep. 2011, at 20:57, Shawn Drew wrote:
 
 I did have this working on TSM 5.5 with a TSM 6.2 AIX client (nfs
 shares).
 
 
 We just have a subset of volumes that are on vfilers that is
 preventing
 me from adopting this solution whole-hog
 
 
 Regards,
 Shawn
 
 Shawn Drew
 
 
 
 
 
 Internet
 r.p...@plcs.nl
 
 Sent by: ADSM-L@VM.MARIST.EDU
 09/22/2011 01:25 PM
 Please respond to
 ADSM-L@VM.MARIST.EDU
 
 
 To
 ADSM-L
 cc
 
 Subject
 Re: [ADSM-L] snapdiff advice
 
 
 
 
 
 
 Hi All,
 
 I've got another question, maybe not for the FAQ ;-)
 
 Has anybody got this working?
 
 And maybe for the FAQ:
 
 Are there TSM server requirements? Does SnapDiff require the TSM
 server to
 be at version 6, or is a 5.5 server supported?
 
 
 On 22 sep. 2011, at 17:31, Shawn Drew wrote:
 
 I'd like to add a question to the FAQ if possible.  I'll ask it
 frequently
 if it helps getting it added!
 
 The documentation explicitly states that vfiler support (Multistore)
 is
 not supported.  Is support for this somewhere on the roadmap? or is
 there
 something on the Netapp side that prevents this?
 
 Regards,
 Shawn
 
 Shawn Drew
 
 
 
 
 
 Internet
 ra...@us.ibm.com
 
 Sent by: ADSM-L@VM.MARIST.EDU
 09/22/2011 09:07 AM
 Please respond to
 ADSM-L@VM.MARIST.EDU
 
 
 To
 ADSM-L
 cc
 
 Subject
 Re: [ADSM-L] snapdiff advice
 
 
 
 
 
 
 Hello Paul and David,
 
 A frequently asked questions website has been created for snapshot
 differencing.
 We have attempted to answer the questions you have recently raised.
 
 
 
 https://www.ibm.com/developerworks/wikis/display/tivolistoragemanager/Snapshot+Differencing+FAQ
 
 
 
 
 --
 Met vriendelijke groeten/Kind Regards,
 
 Remco Post
 r.p...@plcs.nl
 +31 6 248 21 622
 
 
 

Re: snapdiff advice

2011-09-22 Thread Paul Zarnowski
I add my voice to this question.  We would really like to see this.

At 11:31 AM 9/22/2011, Shawn Drew wrote:
I'd like to add a question to the FAQ if possible.  I'll ask it frequently
if it helps getting it added!

The documentation explicitly states that vfiler support (Multistore) is
not supported.  Is support for this somewhere on the roadmap? or is there
something on the Netapp side that prevents this?

Regards,
Shawn

Shawn Drew





Internet
ra...@us.ibm.com

Sent by: ADSM-L@VM.MARIST.EDU
09/22/2011 09:07 AM
Please respond to
ADSM-L@VM.MARIST.EDU


To
ADSM-L
cc

Subject
Re: [ADSM-L] snapdiff advice






Hello Paul and David,

A frequently asked questions website has been created for snapshot
differencing.
We have attempted to answer the questions you have recently raised.

https://www.ibm.com/developerworks/wikis/display/tivolistoragemanager/Snapshot+Differencing+FAQ





--
Paul Zarnowski                          Ph: 607-255-4757
Manager, Storage Services               Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801  Em: p...@cornell.edu


Re: snapdiff advice

2011-07-18 Thread Paul Zarnowski
We're running 8.0.1P3 on an n6210 gateway, in front of an SVC.  The error we 
get is:
tsm> incr -snapdiff Y:

Incremental by snapshot difference of volume 'Y:'
ANS2840E Incremental backup using snapshot difference is not supported for Data ONTAP file server version '0.0.0'. Upgrade the file server 'x.x.x.x' to Data ONTAP version '7.3' or later in order to perform incremental backup operations using snapshot difference.
ANS2832E Incremental by snapshot difference failed for \\10.16.78.101\test_cifs. Please see error log for details.
ANS5283E The operation was unsuccessful.

I'll be opening an ETR on this, but if anyone has any ideas, let me know.  
Thanks!  Note that TSM thinks the ONTAP version is '0.0.0' for some reason.

..Paul



At 01:31 PM 7/15/2011, Frank Ramke wrote:
Hi Paul,

8.0.1 should work.  Ensure the NetApp user id has sufficient capabilities
and that its password is not expired.


Frank Ramke



From:   Paul Zarnowski p...@cornell.edu
To: ADSM-L@vm.marist.edu
Date:   07/15/2011 12:10 PM
Subject:Re: snapdiff advice
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



David,

Did you get this working with 8.0.1?  We're getting this error:

tsm> incremental -snapdiff Y: -diffsnapshot=latest

Incremental by snapshot difference of volume 'Y:'
ANS2840E Incremental backup using snapshot difference is not supported for Data ONTAP file server version '0.0.0'. Upgrade the file server '10.16.78.101' to Data ONTAP version '7.3' or later in order to perform incremental backup operations using snapshot difference.
ANS2832E Incremental by snapshot difference failed for \\10.16.78.101\test_cifs. Please see error log for details.
ANS5283E The operation was unsuccessful.


At 03:41 PM 6/29/2011, David Bronder wrote:
Clark, Margaret wrote:

 Back in March, I watched a recorded presentation about DB2 reorgs
 within TSM server 6.2.2.0, and discovered that OnTap will only allow
 snapdiff backups to work correctly with releases 7.3.3 and 8.1, NOT
 8.0.  Apparently OnTap 7.3.3 and 8.1 contain the File Access Protocol
 (FAP), but 8.0 does not, so snapdiff would fail.  - Margaret Clark

My understanding is that it's more convoluted than that, but the FAP
and Unicode support for ONTAP 8 came with version 8.0.1, not 8.1.
So I should be good to go on my array running 8.0.1P5 with a 6.2.2.0
or newer TSM client.  (My NFS testing is on an older array at 7.3.2,
but that should still be OK if there are no Unicode filenames.  Even
if there were, it should fail in a different way than I'm seeing.)

--
Hello World.                                  David Bronder - Systems Admin
Segmentation Fault                            ITS-EI, Univ. of Iowa
Core dumped, disk trashed, quota filled, soda warm.   david-bron...@uiowa.edu


--
Paul Zarnowski                          Ph: 607-255-4757
Manager, Storage Services               Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801  Em: p...@cornell.edu


--
Paul Zarnowski                          Ph: 607-255-4757
Manager, Storage Services               Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801  Em: p...@cornell.edu


Re: snapdiff advice

2011-07-15 Thread Paul Zarnowski
David,

Did you get this working with 8.0.1?  We're getting this error:

tsm> incremental -snapdiff Y: -diffsnapshot=latest

Incremental by snapshot difference of volume 'Y:'
ANS2840E Incremental backup using snapshot difference is not supported for Data ONTAP file server version '0.0.0'. Upgrade the file server '10.16.78.101' to Data ONTAP version '7.3' or later in order to perform incremental backup operations using snapshot difference.
ANS2832E Incremental by snapshot difference failed for \\10.16.78.101\test_cifs. Please see error log for details.
ANS5283E The operation was unsuccessful.


At 03:41 PM 6/29/2011, David Bronder wrote:
Clark, Margaret wrote:

 Back in March, I watched a recorded presentation about DB2 reorgs
 within TSM server 6.2.2.0, and discovered that OnTap will only allow
 snapdiff backups to work correctly with releases 7.3.3 and 8.1, NOT
 8.0.  Apparently OnTap 7.3.3 and 8.1 contain the File Access Protocol
 (FAP), but 8.0 does not, so snapdiff would fail.  - Margaret Clark

My understanding is that it's more convoluted than that, but the FAP
and Unicode support for ONTAP 8 came with version 8.0.1, not 8.1.
So I should be good to go on my array running 8.0.1P5 with a 6.2.2.0
or newer TSM client.  (My NFS testing is on an older array at 7.3.2,
but that should still be OK if there are no Unicode filenames.  Even
if there were, it should fail in a different way than I'm seeing.)

--
Hello World.                                  David Bronder - Systems Admin
Segmentation Fault  ITS-EI, Univ. of Iowa
Core dumped, disk trashed, quota filled, soda warm.   david-bron...@uiowa.edu


--
Paul Zarnowski                          Ph: 607-255-4757
Manager, Storage Services               Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801  Em: p...@cornell.edu


Re: snapdiff advice

2011-07-15 Thread Frank Ramke
Hi Paul,

8.0.1 should work.  Ensure the NetApp user id has sufficient capabilities
and that its password is not expired.


Frank Ramke



From:   Paul Zarnowski p...@cornell.edu
To: ADSM-L@vm.marist.edu
Date:   07/15/2011 12:10 PM
Subject:Re: snapdiff advice
Sent by:ADSM: Dist Stor Manager ADSM-L@vm.marist.edu



David,

Did you get this working with 8.0.1?  We're getting this error:

tsm> incremental -snapdiff Y: -diffsnapshot=latest

Incremental by snapshot difference of volume 'Y:'
ANS2840E Incremental backup using snapshot difference is not supported for Data ONTAP file server version '0.0.0'. Upgrade the file server '10.16.78.101' to Data ONTAP version '7.3' or later in order to perform incremental backup operations using snapshot difference.
ANS2832E Incremental by snapshot difference failed for \\10.16.78.101\test_cifs. Please see error log for details.
ANS5283E The operation was unsuccessful.


At 03:41 PM 6/29/2011, David Bronder wrote:
Clark, Margaret wrote:

 Back in March, I watched a recorded presentation about DB2 reorgs
 within TSM server 6.2.2.0, and discovered that OnTap will only allow
 snapdiff backups to work correctly with releases 7.3.3 and 8.1, NOT
 8.0.  Apparently OnTap 7.3.3 and 8.1 contain the File Access Protocol
 (FAP), but 8.0 does not, so snapdiff would fail.  - Margaret Clark

My understanding is that it's more convoluted than that, but the FAP
and Unicode support for ONTAP 8 came with version 8.0.1, not 8.1.
So I should be good to go on my array running 8.0.1P5 with a 6.2.2.0
or newer TSM client.  (My NFS testing is on an older array at 7.3.2,
but that should still be OK if there are no Unicode filenames.  Even
if there were, it should fail in a different way than I'm seeing.)

--
Hello World.                                  David Bronder - Systems Admin
Segmentation Fault                            ITS-EI, Univ. of Iowa
Core dumped, disk trashed, quota filled, soda warm.   david-bron...@uiowa.edu


--
Paul Zarnowski                          Ph: 607-255-4757
Manager, Storage Services               Fx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801  Em: p...@cornell.edu


Re: snapdiff advice

2011-06-29 Thread Clark, Margaret
Back in March, I watched a recorded presentation about DB2 reorgs within TSM 
server 6.2.2.0, and discovered that OnTap will only allow snapdiff backups to 
work correctly with releases 7.3.3 and 8.1, NOT 8.0.  Apparently OnTap 7.3.3 
and 8.1 contain the File Access Protocol (FAP), but 8.0 does not, so snapdiff 
would fail.  - Margaret Clark

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of David 
Bronder
Sent: Monday, June 27, 2011 1:24 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] snapdiff advice

Hi folks.

I'm trying to get snapdiff backups of our NetApp (OnTAP version 8.0.1P5)
working so I can move away from everybody's favorite NDMP backups...

So far, I'm not having much luck.  I don't know whether I'm just Doing
It Wrong (tm) or if something else is going on.  In particular, on both
Windows 2008 R2 (6.2.3.0) and RHEL 5.6 (6.2.2.0), I'm getting failures
like the following, depending on the dsmc invocation:

  ANS1670E The file specification is not valid. Specify a valid Network
   Appliance or N-Series NFS (AIX, Linux) or CIFS (Windows) volume.

  ANS2831E  Incremental by snapshot difference cannot be performed on
   'volume-name' as it is not a NetApp NFS or CIFS volume.

(These are shares at the root of full volumes, not Q-trees.  I'm using a
CIFS share for the Windows client, and an NFS share for the Linux client,
with the correct respective permission/security styles.  TSM server is
still 5.5, but my understanding is that that should be OK.)

For those of you who have snapdiff working, could you share any examples
of how you're actually doing it?  E.g., your dsmc invocation, how you're
mounting the share (must a Windows share be mapped to a drive letter?),
or anything relevant in the dsm.opt or dsm.sys (other than the requisite
testflags if using an older OnTAP).  Or anything else you think is useful
that the documentation left out.

(Also of interest would be how you're scheduling your snapdiff backups,
and how you have that coexisting with local filesystems on the client
running the snapdiff backups.)

Thanks,
=Dave

--
Hello World.                                  David Bronder - Systems Admin
Segmentation Fault  ITS-EI, Univ. of Iowa
Core dumped, disk trashed, quota filled, soda warm.   david-bron...@uiowa.edu


Re: Re: snapdiff advice

2011-06-29 Thread David Bronder
Clark, Margaret wrote:

 Back in March, I watched a recorded presentation about DB2 reorgs
 within TSM server 6.2.2.0, and discovered that OnTap will only allow
 snapdiff backups to work correctly with releases 7.3.3 and 8.1, NOT
 8.0.  Apparently OnTap 7.3.3 and 8.1 contain the File Access Protocol
 (FAP), but 8.0 does not, so snapdiff would fail.  - Margaret Clark

My understanding is that it's more convoluted than that, but the FAP
and Unicode support for ONTAP 8 came with version 8.0.1, not 8.1.
So I should be good to go on my array running 8.0.1P5 with a 6.2.2.0
or newer TSM client.  (My NFS testing is on an older array at 7.3.2,
but that should still be OK if there are no Unicode filenames.  Even
if there were, it should fail in a different way than I'm seeing.)

--
Hello World.                                  David Bronder - Systems Admin
Segmentation Fault  ITS-EI, Univ. of Iowa
Core dumped, disk trashed, quota filled, soda warm.   david-bron...@uiowa.edu


Re: snapdiff advice

2011-06-28 Thread Colwell, William F.
Hi Dave,

 

I can't comment on your error messages, but you asked how I schedule
snapdiff backups.

The schedule invokes a command on the client.  Here is a shortened
version of the command file.

echo on
for /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set date=%%a-%%b-%%c)
echo %date%

net use share-name
... 12 more net use statements ...

dsmc i -snapdiff share-name -optfile=dsm-unix1.opt > c:\backuplogs\xxx\snapdiff-%date%.txt
... 12 more dsmc commands ...
dsmc i c: -optfile=dsm-unix1.opt > c:\backuplogs\vscan64\local-%date%.txt

The last line backs up the local file system.

 

 

Regards,

 

Bill Colwell

Draper Lab
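The per-share pattern in the command file above can also be sketched in Python — a minimal illustration only: the share names, option file, and log directory below are hypothetical placeholders, and the dsmc command lines are built but not executed.

```python
import datetime

def build_snapdiff_commands(shares, optfile, logdir):
    """Build one 'dsmc i -snapdiff' command line per share, each with a
    date-stamped log file, mirroring the batch file's %date% trick."""
    stamp = datetime.date.today().isoformat()
    commands = []
    for share in shares:
        # Derive a log-friendly label from the UNC path (crude but adequate).
        label = share.strip("\\").replace("\\", "_")
        log = f"{logdir}/snapdiff-{label}-{stamp}.txt"
        commands.append(f"dsmc i -snapdiff {share} -optfile={optfile} > {log}")
    return commands

# Hypothetical shares; in the batch file these are the 'net use' targets.
for cmd in build_snapdiff_commands([r"\\filer\vol1", r"\\filer\vol2"],
                                   "dsm-unix1.opt", "c:/backuplogs/xxx"):
    print(cmd)
```

Actually running the commands (e.g. via subprocess) plus a final local-filesystem pass would complete the picture; the point is only that the share list lives in one place.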

 

 

 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
David Bronder
Sent: Monday, June 27, 2011 4:24 PM
To: ADSM-L@VM.MARIST.EDU
Subject: snapdiff advice

 

Hi folks.

I'm trying to get snapdiff backups of our NetApp (OnTAP version 8.0.1P5)
working so I can move away from everybody's favorite NDMP backups...

So far, I'm not having much luck.  I don't know whether I'm just Doing
It Wrong (tm) or if something else is going on.  In particular, on both
Windows 2008 R2 (6.2.3.0) and RHEL 5.6 (6.2.2.0), I'm getting failures
like the following, depending on the dsmc invocation:

  ANS1670E The file specification is not valid. Specify a valid Network
   Appliance or N-Series NFS (AIX, Linux) or CIFS (Windows) volume.

  ANS2831E  Incremental by snapshot difference cannot be performed on
   'volume-name' as it is not a NetApp NFS or CIFS volume.

(These are shares at the root of full volumes, not Q-trees.  I'm using a
CIFS share for the Windows client, and an NFS share for the Linux client,
with the correct respective permission/security styles.  TSM server is
still 5.5, but my understanding is that that should be OK.)

For those of you who have snapdiff working, could you share any examples
of how you're actually doing it?  E.g., your dsmc invocation, how you're
mounting the share (must a Windows share be mapped to a drive letter?),
or anything relevant in the dsm.opt or dsm.sys (other than the requisite
testflags if using an older OnTAP).  Or anything else you think is useful
that the documentation left out.

(Also of interest would be how you're scheduling your snapdiff backups,
and how you have that coexisting with local filesystems on the client
running the snapdiff backups.)

Thanks,
=Dave

--
Hello World.                                  David Bronder - Systems Admin
Segmentation Fault                            ITS-EI, Univ. of Iowa
Core dumped, disk trashed, quota filled, soda warm.   david-bron...@uiowa.edu


Re: snapdiff advice

2011-06-28 Thread David Bronder
Thanks, Alain!

For the CIFS case, it looks like I was missing the net use step; doing
that first made the backup work (we already had the NetApp configured
with a user with correct capabilities and the password for that user was
already set on the TSM client side).

I'm still having no luck with snapdiff of NFS from a Linux client.  The
share is mounted (exported read/write with root access), I'm using the
same NetApp account for the API connection (and the password is set at
the client), but the backup fails with the same errors as I originally
described.  I was missing the TIVsm-BAhdw RPM at first, but correcting
that made no difference.

(My NFS testing is on a filer running 7.3.2, but I'm not worried about
Unicode support at this point; I just want to get snapdiff to work at
all.  Neither the SNAPDIFFNAMEFILTEROFF nor SNAPDIFFONTAPFAP testflags
had any effect on the attempts, either.)

=Dave


Alain Richard wrote:
 
 We have been using snapdiff for almost a year. We use some tricks found
 in the forum.

 First, mount your share before starting the backup: net use \\dffdg\fgdgj
 Second, if you're talking to a NAS like a NetApp, you need to have HTTP
 access. For TSM: httpd.admin.enable on, and use the command tsm set
 password -type=filer sdfggd $logname.

 Be sure to have at least a TSM 6.2.2 client and ONTAP 7.3.3 if you want
 Unicode to work!
 Don't forget to do a full scan at least once a month, just in case it
 misses some files.
 
  Alain
 
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
 David Bronder
 Sent: 27 June 2011 16:24
 To: ADSM-L@VM.MARIST.EDU
 Subject: [ADSM-L] snapdiff advice
 
 Hi folks.
 
 I'm trying to get snapdiff backups of our NetApp (OnTAP version 8.0.1P5)
 working so I can move away from everybody's favorite NDMP backups...
 
 So far, I'm not having much luck.  I don't know whether I'm just Doing
 It Wrong (tm) or if something else is going on.  In particular, on both
 Windows 2008 R2 (6.2.3.0) and RHEL 5.6 (6.2.2.0), I'm getting failures
 like the following, depending on the dsmc invocation:
 
   ANS1670E The file specification is not valid. Specify a valid Network
Appliance or N-Series NFS (AIX, Linux) or CIFS (Windows) volume.
 
   ANS2831E  Incremental by snapshot difference cannot be performed on
'volume-name' as it is not a NetApp NFS or CIFS volume.
 
 (These are shares at the root of full volumes, not Q-trees.  I'm using a
 CIFS share for the Windows client, and an NFS share for the Linux client,
 with the correct respective permission/security styles.  TSM server is
 still 5.5, but my understanding is that that should be OK.)
 
 For those of you who have snapdiff working, could you share any examples
 of how you're actually doing it?  E.g., your dsmc invocation, how you're
 mounting the share (must a Windows share be mapped to a drive letter?),
 or anything relevant in the dsm.opt or dsm.sys (other than the requisite
 testflags if using an older OnTAP).  Or anything else you think is useful
 that the documentation left out.
 
 (Also of interest would be how you're scheduling your snapdiff backups,
 and how you have that coexisting with local filesystems on the client
 running the snapdiff backups.)
 
 Thanks,
 =Dave
 


-- 
Hello World.                                  David Bronder - Systems Admin
Segmentation Fault  ITS-EI, Univ. of Iowa
Core dumped, disk trashed, quota filled, soda warm.   david-bron...@uiowa.edu


Re: Re: snapdiff advice

2011-06-28 Thread David Bronder
Thanks, Bill.  I was coming to the conclusion that this would have to be
scripted, at least for the CIFS shares.  Do you leave the various shares
mapped/used, or do you net use /delete them at the end of your script?

I might experiment with a PRESCHEDULECMD script to do the net use bits
and then add the shares to the DOMAIN (if that works -- I'm totally not
a Windows admin).  Though I suppose it'd be wiser to script the entire
thing, as you did, so there's only one place to add or remove filespaces.

=Dave


Colwell, William F. wrote:

 I can't comment on your error messages, but you asked how I schedule
 snapdiff backups.

 The schedule invokes a command on the client.  Here is a shortened
 version of the command file.

 echo on
 for /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set date=%%a-%%b-%%c)
 echo %date%

 net use share-name
 ... 12 more net use statements ...

 dsmc i -snapdiff share-name -optfile=dsm-unix1.opt > c:\backuplogs\xxx\snapdiff-%date%.txt
 ... 12 more dsmc commands ...
 dsmc i c: -optfile=dsm-unix1.opt > c:\backuplogs\vscan64\local-%date%.txt


 The last line backs up the local file system.



--
Hello World.                                  David Bronder - Systems Admin
Segmentation Fault  ITS-EI, Univ. of Iowa
Core dumped, disk trashed, quota filled, soda warm.   david-bron...@uiowa.edu


Re: snapdiff advice

2011-06-28 Thread Pete Tanenhaus
The problem you will run into on Windows is that if the scheduler runs
as a service under the default system account, accessing the shares won't
be possible; you will need to configure the service to log in under an
account which can access the CIFS shares.

If the NetApp filer is configured to be a trusted domain member (and the
account the scheduler service
runs under is a domain admin) you should be able to backup the shares
directly via the UNC names.

If the filer isn't a trusted domain member it is a little more difficult as
you must supply credentials
in order to authenticate the shares with the filer, and as previously
suggested this can be done
with NET USE commands in a pre-schedule command.

Hope this helps.
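For the untrusted-filer case, the credentialed mapping can be generated ahead of time — a hedged sketch only: the share, account, and password below are made-up placeholders, and the command string is built, not run.

```python
def build_net_use(share, user=None, password=None):
    """Build the 'net use' line a PRESCHEDULECMD script would run.
    For a trusted domain member the bare mapping suffices; otherwise
    pass the account (and password) that can access the CIFS share."""
    cmd = f"net use {share}"
    if user is not None:
        # 'net use' takes the password positionally, followed by /user:.
        cmd += f" {password} /user:{user}"
    return cmd

print(build_net_use(r"\\filer\share"))
print(build_net_use(r"\\filer\share", r"FILERDOM\backup", "s3cret"))
```

Emitting these lines into a batch file from one list of shares keeps the credentials logic in a single place.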


Pete Tanenhaus
Tivoli Storage Manager Client Development
email: tanen...@us.ibm.com
tieline: 320.8778, external: 607.754.4213

Those who refuse to challenge authority are condemned to conform to it


From: David Bronder david-bron...@uiowa.edu
To: ADSM-L@vm.marist.edu
Date: 06/28/2011 06:52 PM
Subject: Re: Re: snapdiff advice
Sent by: ADSM: Dist Stor Manager ADSM-L@vm.marist.edu





Thanks, Bill.  I was coming to the conclusion that this would have to be
scripted, at least for the CIFS shares.  Do you leave the various shares
mapped/used, or do you net use /delete them at the end of your script?

I might experiment with a PRESCHEDULECMD script to do the net use bits
and then add the shares to the DOMAIN (if that works -- I'm totally not
a Windows admin).  Though I suppose it'd be wiser to script the entire
thing, as you did, so there's only one place to add or remove filespaces.

=Dave


Colwell, William F. wrote:

 I can't comment on your error messages, but you asked how I schedule
 snapdiff backups.

 The schedule invokes a command on the client.  Here is a shortened
 version of the command file.

 echo on
 for /f "tokens=2-4 delims=/ " %%a in ('date /t') do (set date=%%a-%%b-%%c)
 echo %date%

 net use share-name
 ... 12 more net use statements ...

 dsmc i -snapdiff share-name -optfile=dsm-unix1.opt > c:\backuplogs\xxx\snapdiff-%date%.txt
 ... 12 more dsmc commands ...
 dsmc i c: -optfile=dsm-unix1.opt > c:\backuplogs\vscan64\local-%date%.txt


 The last line backs up the local file system.



--
Hello World.                                  David Bronder - Systems Admin
Segmentation Fault                            ITS-EI, Univ. of Iowa
Core dumped, disk trashed, quota filled, soda warm.   david-bron...@uiowa.edu


snapdiff advice

2011-06-27 Thread David Bronder
Hi folks.

I'm trying to get snapdiff backups of our NetApp (OnTAP version 8.0.1P5)
working so I can move away from everybody's favorite NDMP backups...

So far, I'm not having much luck.  I don't know whether I'm just Doing
It Wrong (tm) or if something else is going on.  In particular, on both
Windows 2008 R2 (6.2.3.0) and RHEL 5.6 (6.2.2.0), I'm getting failures
like the following, depending on the dsmc invocation:

  ANS1670E The file specification is not valid. Specify a valid Network
   Appliance or N-Series NFS (AIX, Linux) or CIFS (Windows) volume.

  ANS2831E  Incremental by snapshot difference cannot be performed on
   'volume-name' as it is not a NetApp NFS or CIFS volume.

(These are shares at the root of full volumes, not Q-trees.  I'm using a
CIFS share for the Windows client, and an NFS share for the Linux client,
with the correct respective permission/security styles.  TSM server is
still 5.5, but my understanding is that that should be OK.)

For those of you who have snapdiff working, could you share any examples
of how you're actually doing it?  E.g., your dsmc invocation, how you're
mounting the share (must a Windows share be mapped to a drive letter?),
or anything relevant in the dsm.opt or dsm.sys (other than the requisite
testflags if using an older OnTAP).  Or anything else you think is useful
that the documentation left out.

(Also of interest would be how you're scheduling your snapdiff backups,
and how you have that coexisting with local filesystems on the client
running the snapdiff backups.)

Thanks,
=Dave

--
Hello World.David Bronder - Systems Admin
Segmentation Fault  ITS-EI, Univ. of Iowa
Core dumped, disk trashed, quota filled, soda warm.   david-bron...@uiowa.edu


RE : [ADSM-L] snapdiff advice

2011-06-27 Thread Alain Richard
We have been using snapdiff for almost a year. We use some tricks found in the
forum.

First, mount your share before starting the backup: net use \\dffdg\fgdgj
Second, if you are talking to a NAS like a NetApp, you need to have HTTP
access enabled for TSM (httpd.admin.enable on), and use the command: dsmc set
password -type=filer sdfggd $logname.

Be sure to have at least the TSM 6.2.2 client and ONTAP 7.3.3 if you want
Unicode to work!
Don't forget to do a full scan at least once a month, in case snapdiff misses
some files.
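Put together, Alain's steps come out to something like this (filer, share, and credential names are placeholders; one common way to do the monthly full scan is simply a plain progressive incremental without -snapdiff):

```
net use \\filer1\vol1
dsmc set password -type=filer filer1 filer-admin filer-password
dsmc incremental \\filer1\vol1 -snapdiff

rem once a month, run without -snapdiff as the full scan:
dsmc incremental \\filer1\vol1
```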

 Alain



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of David
Bronder
Sent: 27 June 2011 16:24
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] snapdiff advice

Hi folks.

I'm trying to get snapdiff backups of our NetApp (OnTAP version 8.0.1P5)
working so I can move away from everybody's favorite NDMP backups...

So far, I'm not having much luck.  I don't know whether I'm just Doing
It Wrong (tm) or if something else is going on.  In particular, on both
Windows 2008 R2 (6.2.3.0) and RHEL 5.6 (6.2.2.0), I'm getting failures
like the following, depending on the dsmc invocation:

  ANS1670E The file specification is not valid. Specify a valid Network
   Appliance or N-Series NFS (AIX, Linux) or CIFS (Windows) volume.

  ANS2831E  Incremental by snapshot difference cannot be performed on
   'volume-name' as it is not a NetApp NFS or CIFS volume.

(These are shares at the root of full volumes, not Q-trees.  I'm using a
CIFS share for the Windows client, and an NFS share for the Linux client,
with the correct respective permission/security styles.  TSM server is
still 5.5, but my understanding is that that should be OK.)

For those of you who have snapdiff working, could you share any examples
of how you're actually doing it?  E.g., your dsmc invocation, how you're
mounting the share (must a Windows share be mapped to a drive letter?),
or anything relevant in the dsm.opt or dsm.sys (other than the requisite
testflags if using an older OnTAP).  Or anything else you think is useful
that the documentation left out.

(Also of interest would be how you're scheduling your snapdiff backups,
and how you have that coexisting with local filesystems on the client
running the snapdiff backups.)

Thanks,
=Dave

--
Hello World.David Bronder - Systems Admin
Segmentation Fault  ITS-EI, Univ. of Iowa
Core dumped, disk trashed, quota filled, soda warm.   david-bron...@uiowa.edu


Re: Copypool storage advice

2011-04-12 Thread Paul_Dudley
I believe I have found the problem. The maximum number of scratch volumes
allowed was set to 40, and it had reached this limit, which was causing the
reclamation process to fail. I have increased the maximum number of scratch
volumes allowed and restarted the reclamation process.
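For the archives, the fix Paul describes maps to something like this on the admin command line (pool name and limit are placeholders):

```
update stgpool OFFSITE_COPY maxscratch=100
query stgpool OFFSITE_COPY format=detailed
```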

Thanks & Regards
Paul


 -Original Message-

 It's been a while since I've worked much with off-site copypools, but my next
 suggestion is to work with DRMedia. Do you have some volumes (that have been
 reclaimed) in VAULTRETRIEVE status? I don't remember how they show up in your
 volume list (I half expect 'EMPTY' status, but as I said, it's been a while), 
 but if you
 do, they might be returned to scratch status when you MOVE DRM the VAULTRET
 volumes back to ONSITERET.

 How many BACKUP STG processes are you running at a time? If you run four
 backup processes at a time, you'll produce four tapes each day, even if one 
 would
 be enough. That might be contributing to this phenomenon.

  You said these volumes are set as offsite; that's at the volume level;
  volumes are the only things with an offsite status, as I recall. If you do
  QUERY LIBVOL * volid, do they show up? If TSM knows they're in the library,
  they won't be reclaimed if they're in FILLING status, as I recall.

 I love a good puzzle. I'm just not sure how many things I'm taking for 
 granted about
 how you're using TSM. :-)

 Nick

 On Apr 10, 2011, at 6:35 PM, Paul_Dudley wrote:

  Collocation for the storage pool is set to none. No, I am not using the
  dur= parameter on the reclaim commands. I check the log and they do finish
  successfully.
  They are a mixture of full and filling tapes. They are all set as 
  offsite.
 
   Thanks & Regards
  Paul
 
 
  I currently have a lot of copypool storage tapes which are between 50 - 
  60%
  utilization. Expiration runs daily and I run reclamation daily on this 
  copypool, set
 to
  50.
 
  Is there anything I can do to try and consolidate the data onto fewer 
  copypool
  tapes?
 
  I would conclude that you have your collocation on the copypool set to
 something
  other than none. My back-up theory is that you're using a dur= 
  parameter on
  your reclaim commands, and they simply are not finishing.
 
  Are these tapes marked as being off-site? Are they in filling status or 
  full?
  Filling tapes normally are excluded from reclamation if they're 
  allegedly still in
 the
  library.
 
  I'm not sure why it would matter, but what's your TSM server level?
 
 
  Nick





ANL DISCLAIMER

This e-mail and any file attached is confidential, and intended solely to the 
named addressees. Any unauthorised dissemination or use is strictly prohibited. 
If you received this e-mail in error, please immediately notify the sender by 
return e-mail from your system. Please do not copy, use or make reference to it 
for any purpose, or disclose its contents to any person.


Re: Copypool storage advice

2011-04-10 Thread Paul_Dudley
Collocation for the storage pool is set to none. No, I am not using the
dur= parameter on the reclaim commands. I check the log and they do finish
successfully.
They are a mixture of full and filling tapes. They are all set as offsite.

Thanks & Regards
Paul


  I currently have a lot of copypool storage tapes which are between 50 - 60%
 utilization. Expiration runs daily and I run reclamation daily on this 
 copypool, set to
 50.
 
  Is there anything I can do to try and consolidate the data onto fewer 
  copypool
 tapes?

 I would conclude that you have your collocation on the copypool set to 
 something
 other than none. My back-up theory is that you're using a dur= parameter 
 on
 your reclaim commands, and they simply are not finishing.

 Are these tapes marked as being off-site? Are they in filling status or 
 full?
 Filling tapes normally are excluded from reclamation if they're allegedly 
 still in the
 library.

 I'm not sure why it would matter, but what's your TSM server level?


 Nick








Re: Copypool storage advice

2011-04-10 Thread Nick Laflamme
It's been a while since I've worked much with off-site copypools, but my next 
suggestion is to work with DRMedia. Do you have some volumes (that have been 
reclaimed) in VAULTRETRIEVE status? I don't remember how they show up in your 
volume list (I half expect 'EMPTY' status, but as I said, it's been a while), 
but if you do, they might be returned to scratch status when you MOVE DRM the 
VAULTRET volumes back to ONSITERET.
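A sketch of the DRM round-trip Nick describes, from the admin command line (using the default DRM media-state names; check your own volumes first):

```
query drmedia * wherestate=vaultretrieve
move drmedia * wherestate=vaultretrieve tostate=onsiteretrieve
```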

How many BACKUP STG processes are you running at a time? If you run four backup 
processes at a time, you'll produce four tapes each day, even if one would be 
enough. That might be contributing to this phenomenon. 

You said these volumes are set as offsite; that's at the volume level; volumes
are the only things with an offsite status, as I recall. If you do QUERY
LIBVOL * volid, do they show up? If TSM knows they're in the library, they
won't be reclaimed if they're in FILLING status, as I recall.

I love a good puzzle. I'm just not sure how many things I'm taking for granted 
about how you're using TSM. :-) 

Nick

On Apr 10, 2011, at 6:35 PM, Paul_Dudley wrote:

 Collocation for the storage pool is set to none. No, I am not using the
 dur= parameter on the reclaim commands. I check the log and they do finish
 successfully.
 They are a mixture of full and filling tapes. They are all set as 
 offsite.
 
 Thanks & Regards
 Paul
 
 
 I currently have a lot of copypool storage tapes which are between 50 - 60%
 utilization. Expiration runs daily and I run reclamation daily on this 
 copypool, set to
 50.
 
 Is there anything I can do to try and consolidate the data onto fewer 
 copypool
 tapes?
 
 I would conclude that you have your collocation on the copypool set to 
 something
 other than none. My back-up theory is that you're using a dur= parameter 
 on
 your reclaim commands, and they simply are not finishing.
 
 Are these tapes marked as being off-site? Are they in filling status or 
 full?
 Filling tapes normally are excluded from reclamation if they're allegedly 
 still in the
 library.
 
 I'm not sure why it would matter, but what's your TSM server level?
 
 
 Nick
 
 
 
 
 
   


Re: Ang: Copypool storage advice

2011-04-08 Thread Paul_Dudley
The copypool is on LTO3 tapes. They are not database backups, just incremental
server backups.

Thanks & Regards
Paul


 -Original Message-

 It would be helpful to know what kind of the tape technology you're using 
 since the
  reclamation threshold % is usually based on which technology is being used.
  Smaller tapes can usually have a smaller threshold while larger tapes
  require a larger threshold.

 One way to reduce the amount of tapes is simply to reduce the threshold to
 something like 30 and let the reclaim process run until it's complete. This 
 will require
 enough free tape drives to a) let reclamation run until it's complete b) do 
 normal
 operations.

 There can be several reasons why you get so high pct reclaim. One is that 
 you're
 running full database or application backups. Since this will expire a full 
 backup
 every day, it will cause the reclaim on your tapes to rise. Splitting your 
 copypool into
 separate ones categorized on the type of data stored (one for fileservers, 
 one for
 application servers for example) is one way to go, using collocation is 
 another.

 Best Regards

 Daniel Sparrman

 -ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


 I currently have a lot of copypool storage tapes which are between 50 - 60%
 utilization. Expiration runs daily and I run reclamation daily on this 
 copypool, set to
 50.

 Is there anything I can do to try and consolidate the data onto fewer copypool
 tapes?



 Thanks & Regards

 Paul



 Paul Dudley

 Senior IT Systems Administrator

 ANL Container Line Pty Limited

 Email:  mailto:pdud...@anl.com.au pdud...@anl.com.au















Ang: Re: Ang: Copypool storage advice

2011-04-08 Thread Daniel Sparrman
Hi Paul

So I assume you don't have any database dumps or TDPs from, for example, SQL,
DB2, Oracle, Exchange or Domino; everything is just simple file backups?

In that case, there's probably only 2 options to reduce the amount of copypool 
tapes:

a) Divide your servers into two groups, one with a large daily incremental
change and one with more static servers, and direct them to two different
copypools.

b) Like I said in my previous message, lower your reclamation threshold to
around 30% and let the TSM server reduce the number of tapes by completing the
operation. This option will, however, probably land you in the same situation
again in the future.

The reason you have so many copypool tapes with a high pct reclaim is the
large amount of change in your environment, leading to data being expired on
your copypool tapes. What does your primary pool look like? Are you seeing the
same issue there, with a large number of tapes having a high percentage of
change?

Do you have more copypool tapes than nodes?

Best Regards

Daniel Sparrman

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Paul_Dudley pdud...@anl.com.au
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 04/08/2011 08:06
Subject: Re: Ang: Copypool storage advice

The copypool is on LTO3 tapes. They are not database backups, just incremental
server backups.

Thanks & Regards
Paul


 -Original Message-

 It would be helpful to know what kind of the tape technology you're using 
 since the
  reclamation threshold % is usually based on which technology is being used.
  Smaller tapes can usually have a smaller threshold while larger tapes
  require a larger threshold.

 One way to reduce the amount of tapes is simply to reduce the threshold to
 something like 30 and let the reclaim process run until it's complete. This 
 will require
 enough free tape drives to a) let reclamation run until it's complete b) do 
 normal
 operations.

 There can be several reasons why you get so high pct reclaim. One is that 
 you're
 running full database or application backups. Since this will expire a full 
 backup
 every day, it will cause the reclaim on your tapes to rise. Splitting your 
 copypool into
 separate ones categorized on the type of data stored (one for fileservers, 
 one for
 application servers for example) is one way to go, using collocation is 
 another.

 Best Regards

 Daniel Sparrman

 -ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


 I currently have a lot of copypool storage tapes which are between 50 - 60%
 utilization. Expiration runs daily and I run reclamation daily on this 
 copypool, set to
 50.

 Is there anything I can do to try and consolidate the data onto fewer copypool
 tapes?



 Thanks & Regards

 Paul



 Paul Dudley

 Senior IT Systems Administrator

 ANL Container Line Pty Limited

 Email:  mailto:pdud...@anl.com.au pdud...@anl.com.au














Re: Copypool storage advice

2011-04-08 Thread Nick Laflamme
On Apr 7, 2011, at 11:07 PM, Paul_Dudley wrote:

 I currently have a lot of copypool storage tapes which are between 50 - 60% 
  utilization. Expiration runs daily and I run reclamation daily on this 
 copypool, set to 50.
 
 Is there anything I can do to try and consolidate the data onto fewer 
 copypool tapes?

I would conclude that you have your collocation on the copypool set to 
something other than none. My back-up theory is that you're using a dur= 
parameter on your reclaim commands, and they simply are not finishing. 

Are these tapes marked as being off-site? Are they in filling status or 
full? Filling tapes normally are excluded from reclamation if they're 
allegedly still in the library. 

I'm not sure why it would matter, but what's your TSM server level? 

 Thanks & Regards
 
 Paul

Nick


Copypool storage advice

2011-04-07 Thread Paul_Dudley
I currently have a lot of copypool storage tapes which are between 50 - 60% 
utilization. Expiration runs daily and I run reclamation daily on this 
copypool, set to 50.

Is there anything I can do to try and consolidate the data onto fewer copypool 
tapes?



Thanks & Regards

Paul



Paul Dudley

Senior IT Systems Administrator

ANL Container Line Pty Limited

Email:  mailto:pdud...@anl.com.au pdud...@anl.com.au










Ang: Copypool storage advice

2011-04-07 Thread Daniel Sparrman
Hi Paul

It would be helpful to know what kind of tape technology you're using, since
the reclamation threshold % is usually based on which technology is being
used. Smaller tapes can usually have a smaller threshold, while larger tapes
require a larger threshold.

One way to reduce the number of tapes is simply to reduce the threshold to
something like 30 and let the reclaim process run until it's complete. This
will require enough free tape drives to a) let reclamation run until it's
complete and b) do normal operations.
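As a sketch, that looks like this from the admin command line (pool name and duration are placeholders; RECLAIM STGPOOL with THRESHOLD/DURATION is available on newer servers, otherwise lower the pool's reclaim attribute):

```
update stgpool COPYPOOL reclaim=30
/* or, on servers that support it, a single bounded pass: */
reclaim stgpool COPYPOOL threshold=30 duration=240
```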

There can be several reasons why you get such a high pct reclaim. One is that
you're running full database or application backups. Since this will expire a
full backup every day, it will cause the reclaim on your tapes to rise.
Splitting your copypool into separate ones categorized by the type of data
stored (one for file servers, one for application servers, for example) is one
way to go; using collocation is another.

Best Regards

Daniel Sparrman

-ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU wrote: -


To: ADSM-L@VM.MARIST.EDU
From: Paul_Dudley pdud...@anl.com.au
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
Date: 04/08/2011 06:07
Subject: Copypool storage advice

I currently have a lot of copypool storage tapes which are between 50 - 60% 
utilization. Expiration runs daily and I run reclamation daily on this 
copypool, set to 50.

Is there anything I can do to try and consolidate the data onto fewer copypool 
tapes?



Thanks & Regards

Paul



Paul Dudley

Senior IT Systems Administrator

ANL Container Line Pty Limited

Email:  mailto:pdud...@anl.com.au pdud...@anl.com.au









AW: Some advice about compression=yes to perform IMAGE backup

2009-09-20 Thread Stefan Holzwarth
I'm a big fan of compression at the client side!

Compression at the client could even give you better performance.
It depends on the data and your environment.

Some pros for client-side compression:

Disk Storage pools at TSM server are more effective because there is more space
Only option if you have no tapes with hardware compression
Less IO at the TSM server (backup copypool, migration, reclamation)
Most CPUs in physical servers are underutilized and very powerful
Less network bandwidth needed (some of the possible bottlenecks)
We have very good experience with SQL TDP compression rates


Regards
Stefan Holzwarth
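A dsm.opt fragment for the client-side setup Stefan describes (the option names are standard TSM client options; the exclude patterns are just examples for data that is already compressed):

```
COMPRESSION     YES
COMPRESSALWAYS  YES
* skip software compression for file types that won't shrink
EXCLUDE.COMPRESSION "*:\...\*.zip"
EXCLUDE.COMPRESSION "*:\...\*.jpg"
```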

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On
 Behalf Of Skylar Thompson
 Sent: Sunday, 20 September 2009 06:25
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: Some advice about compression=yes to perform IMAGE backup
 
 admbackup wrote:
  Hi.
 
  I need some advice about using compression=yes for image backups.
 
  I need to perform image backups of multiple disks on a Windows 2008
  server.  Most of them are about 1.45 TB in size.
 
  We are running out of tapes and I was thinking of using compression.
  I know that it is recommended to set compressalways=yes on the TSM
  server when using compression, but I am not using compression for all
  the backups.  Is this parameter transparent for the client servers
  that don't use compression=yes?
 
  Also, how recommended is using compression for image backups?  I know
  that it is going to increase the time that the backup takes, but I
  have a lot of time to perform those image backups (all weekend).
 
 What kind of tapes do you use? You should probably stick with hardware
 compression if you can. Remember to think not only of the amount of
 time the backup takes, but also of the amount of time the restore is
 going to take. Hardware compression is going to buy you performance,
 but software compression is going to lose you performance.
 
 --
 -- Skylar Thompson (skyl...@u.washington.edu)
 -- Genome Sciences Department, System Administrator
 -- Foege Building S048, (206)-685-7354
 -- University of Washington School of Medicine
 


Re: Some advice about compression=yes to perform IMAGE backup

2009-09-20 Thread Hans Christian Riksheim
In my experience client side compression on W2K/Intel is quite fast but on AIX 
it is very slow. I have no idea why the difference is so huge.

This poses a problem on AIX when we do client side encryption since compression 
must be done before encryption.

Oracle RMAN has two alternate compression algorithms, the usual one (gzip, I
think) and zlib. The latter is much faster but the compression ratio is a
little lower. I would like to see the TSM client offer an algorithm that takes
less of a toll on CPU at the expense of compressibility.

Hans Chr.




From: ADSM: Dist Stor Manager on behalf of Stefan Holzwarth
Sent: Sun 2009-09-20 13:43
To: ADSM-L@VM.MARIST.EDU
Subject: AW: Some advice about compression=yes to perform IMAGE backup



I'm a big fan of compression at the client side!

Compression at the client could even give you better performance.
It depends on the data and your environment.

Some pros for client-side compression:

Disk Storage pools at TSM server are more effective because there is more space
Only option if you have no tapes with hardware compression
Less IO at the TSM server (backup copypool, migration, reclamation)
Most CPUs in physical servers are underutilized and very powerful
Less network bandwidth needed (some of the possible bottlenecks)
We have very good experience with SQL TDP compression rates


Regards
Stefan Holzwarth

 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On
 Behalf Of Skylar Thompson
 Sent: Sunday, 20 September 2009 06:25
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: Some advice about compression=yes to perform IMAGE backup

 admbackup wrote:
  Hi.

  I need some advice about using compression=yes for image backups.

  I need to perform image backups of multiple disks on a Windows 2008
  server.  Most of them are about 1.45 TB in size.

  We are running out of tapes and I was thinking of using compression.
  I know that it is recommended to set compressalways=yes on the TSM
  server when using compression, but I am not using compression for all
  the backups.  Is this parameter transparent for the client servers
  that don't use compression=yes?

  Also, how recommended is using compression for image backups?  I know
  that it is going to increase the time that the backup takes, but I
  have a lot of time to perform those image backups (all weekend).

 What kind of tapes do you use? You should probably stick with hardware
 compression if you can. Remember to think not only of the amount of
 time the backup takes, but also of the amount of time the restore is
 going to take. Hardware compression is going to buy you performance,
 but software compression is going to lose you performance.

 --
 -- Skylar Thompson (skyl...@u.washington.edu)
 -- Genome Sciences Department, System Administrator
 -- Foege Building S048, (206)-685-7354
 -- University of Washington School of Medicine




This email originates from Steria AS, Biskop Gunnerus' gate 14a, N-0051 OSLO, 
http://www.steria.no. This email and any attachments may contain 
confidential/intellectual property/copyright information and is only for the 
use of the addressee(s). You are prohibited from copying, forwarding, 
disclosing, saving or otherwise using it in any way if you are not the 
addressee(s) or responsible for delivery. If you receive this email by mistake, 
please advise the sender and cancel it immediately. Steria may monitor the 
content of emails within its network to ensure compliance with its policies and 
procedures. Any email is susceptible to alteration and its integrity cannot be 
assured. Steria shall not be liable if the message is altered, modified, 
falsified, or even edited.


Re: AW: Some advice about compression=yes to perform IMAGE backup

2009-09-20 Thread Skylar Thompson
For us, we back up a combination of millions of little text files
(anywhere from ~500 bytes to ~500 kB) and somewhat larger TIFF images
(3-7 MB). The smaller text files can fit into a single block on tape and
compress far better in hardware than in software because there are
similarities between files. The TIFF images don't compress at all using
either scheme, so we stick with hardware compression.

Stefan Holzwarth wrote:
 I'm a big fan of compression at the client side!

 Compression at the client could even give you better performance.
 It depends on the data and your environment.

  Some pros for client-side compression:

 Disk Storage pools at TSM server are more effective because there is more 
 space
 Only option if you have no tapes with hardware compression
 Less IO at the TSM server (backup copypool, migration, reclamation)
 Most CPUs in physical servers are underutilized and very powerful
 Less network bandwidth needed (some of the possible bottlenecks)
 We have very good experience with SQL TDP compression rates


 Regards
 Stefan Holzwarth

   
 -Original Message-
 From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On
 Behalf Of Skylar Thompson
 Sent: Sunday, 20 September 2009 06:25
 To: ADSM-L@VM.MARIST.EDU
 Subject: Re: Some advice about compression=yes to perform IMAGE backup

 admbackup wrote:

  Hi.

  I need some advice about using compression=yes for image backups.

  I need to perform image backups of multiple disks on a Windows 2008
  server.  Most of them are about 1.45 TB in size.

  We are running out of tapes and I was thinking of using compression.
  I know that it is recommended to set compressalways=yes on the TSM
  server when using compression, but I am not using compression for all
  the backups.  Is this parameter transparent for the client servers
  that don't use compression=yes?

  Also, how recommended is using compression for image backups?  I know
  that it is going to increase the time that the backup takes, but I
  have a lot of time to perform those image backups (all weekend).

 What kind of tapes do you use? You should probably stick with hardware
 compression if you can. Remember to think not only of the amount of
 time the backup takes, but also of the amount of time the restore is
 going to take. Hardware compression is going to buy you performance,
 but software compression is going to lose you performance.

 --
 -- Skylar Thompson (skyl...@u.washington.edu)
 -- Genome Sciences Department, System Administrator
 -- Foege Building S048, (206)-685-7354
 -- University of Washington School of Medicine

 

-- 
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S048, (206)-685-7354
-- University of Washington School of Medicine


Some advice about compression=yes to perform IMAGE backup

2009-09-19 Thread admbackup
Hi.

I need some advice about using compression=yes for image backups.


I need to perform image backups of multiple disks on a Windows 2008 server.
Most of them are about 1.45 TB in size.

We are running out of tapes and I was thinking of using compression.  I know
that it is recommended to set compressalways=yes on the TSM server when using
compression, but I am not using compression for all the backups.  Is this
parameter transparent for the client servers that don't use compression=yes?

Also, how recommended is using compression for image backups?  I know that it
is going to increase the time that the backup takes, but I have a lot of time
to perform those image backups (all weekend).

+--
|This was sent by cyosh...@its-csi.com.pe via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--


Re: Some advice about compression=yes to perform IMAGE backup

2009-09-19 Thread Skylar Thompson
admbackup wrote:
 Hi.

 I need some advice about using compression=yes for image backups.

 I need to perform image backups of multiple disks on a Windows 2008 server.
 Most of them are about 1.45 TB in size.

 We are running out of tapes and I was thinking of using compression.  I know
 that it is recommended to set compressalways=yes on the TSM server when using
 compression, but I am not using compression for all the backups.  Is this
 parameter transparent for the client servers that don't use compression=yes?

 Also, how recommended is using compression for image backups?  I know that it
 is going to increase the time that the backup takes, but I have a lot of time
 to perform those image backups (all weekend).


What kind of tapes do you use? You should probably stick with hardware
compression if you can. Remember to think not only of the amount of time
the backup takes, but also of the amount of time the restore is going to
take. Hardware compression is going to buy you performance, but software
compression is going to lose you performance.

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S048, (206)-685-7354
-- University of Washington School of Medicine


Re: Advice about configure DB2 Logs backups on a windows server

2009-03-21 Thread Richard Sims

On Mar 21, 2009, at 12:44 AM, admbackup wrote:


I want to configure DB2 logs backups on a windows server.  Where
can I find a good source to implement this?



http://www.redbooks.ibm.com/abstracts/sg246247.html


Advice about configure DB2 Logs backups on a windows server

2009-03-20 Thread admbackup
I want to configure DB2 log backups on a Windows server.  Where can I find a
good source to implement this?

Sorry for my grammar; my native language isn't English

+--
|This was sent by cyosh...@its-csi.com.pe via Backup Central.
|Forward SPAM to ab...@backupcentral.com.
+--


Re: Windows laptop restore advice needed

2008-12-19 Thread goc
hi, TSM CDP for Files is an okay product, easy to configure, deploy, and it
works. encryption also works both ways :-)

a few years back here on the group there was also a question about laptop
backups, and one of the ideas that worked in my environment was a
schedule that backs up c:\ with -subdir=yes and a duration period of 24, so
any time a client plugs in or connects via anything, the schedule would catch
its tsm_cad service and start the backup; in combination with the subfile
feature it worked great

until one day all of our managers swapped laptops and never again
bothered to have any kind of solution for the security of their data

that's my story
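goc's always-on schedule can be sketched with standard TSM administrative commands; the domain, schedule, and node names here are hypothetical:

```
/* An incremental of C:\ that remains startable for 24 hours, so whenever */
/* a laptop connects, its CAD/scheduler can pick the window up.           */
DEFINE SCHEDULE LAPTOPS DAILY_C ACTION=INCREMENTAL -
    OBJECTS="C:\*" OPTIONS="-subdir=yes" -
    STARTTIME=00:00 DURATION=24 DURUNITS=HOURS -
    PERIOD=1 PERUNITS=DAYS

DEFINE ASSOCIATION LAPTOPS DAILY_C LAPTOP_NODE1
```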

On Thu, Dec 18, 2008 at 3:24 AM, Wanda Prather wanda.prat...@jasi.com wrote:
 First thing is to decide exactly what kind of restore you are trying to do.

 -In the case of a hard drive failure, and the machine won't boot, do you
 want to do a bare-metal restore?  The reason for doing a bare-metal restore
 is to recover system state and installed software as well as data.  Isn't
 supported across different hardware, although people have reported
 considerable success doing it with XP and 2003.

 If that's what you really want, search this list for bare metal restore,
 and search the TSM support page for  bare machine restore, you'll find
 lots of instructions.  In general, you reload Windows to the point you can
 get back on the network, then restore the C: drive and system state with
 the  TSM BA client.

 -In the case of a laptop upgrade, you are probably going to be moving to
 different hardware and re-establishing the software environment anyway -
 you'll probably be installing upgraded/new releases of software, different
 drivers, etc.  In that case, usually all you want to do is recover the
 user's data.  For that case, I also really like the Tivoli CDP for files
 product.  It's very inexpensive, provides continuous protection, and is much
 more tolerant of frequent network disconnects.

 Either way, you want to get your Windows guys an imaging product (not an
 image backup) like Ghost; there are several others on the market as well.
 The idea is to create a pretty standard desktop setup, that already has
 the office, mail, and browser products your company uses installed on it.
 You make a copy with the Ghost(like) product, then anytime somebody needs a
 new laptop/hard disk, you can load the standard Windows environment very
 quickly.

 Then either do your bare metal restore, or your CDP restore.

 W

 On Wed, Dec 17, 2008 at 11:48 AM, Nicholas Rodolfich 
 nrodolf...@cmaontheweb.com wrote:

 Hello All,

 Thanks for your help!!


 I am primarily a server guy with a UNIX background so I am light on the
 Windows platform as it relates to client backup/restore. Our organization
 has about 50 laptops (mostly XP pro) that we need to backup so that we can
 restore  them during drive failure issues or laptop upgrades. We seem to be
 having more and more drive failure since the SATA drives have proliferated
 into the market. The majority of our laptop fleet is primarily remote to
 our office. I hope to be able to provide a solution where we can have users
 take a backup of their laptop when they are in the office every other week
 or so. That way when we do have a drive failure or someone gets a laptop
 refresh they don't have to totally recreate their laptop's working
 environments. Currently the  windows guys are starting over completely
 using the recovery CDs supplied by the vendor (Lenovo) but just feeding the
 6 CDs is a 4 hours deal.

 My question leans toward the whole backup/restore process. What is the
 easiest/best way to backup and restore these laptops. I was thinking of
 using the TSM image capabilities but reading up on it seems like there are
 several prerequisites to making it work. Maybe Christie BMR, or Fastback
 but Fastback requires the MS AIK, another server, etc..  I can and have
 read much  doc on the subject but I would prefer some empirical knowledge
 from those who really know before I start banging my head against the wall.
 Any advice rendered would be greatly appreciated.

 Thanks Again!!

 Nicholas




--

Rita Rudner  - I was a vegetarian until I started leaning toward the sunlight.


Windows laptop restore advice needed

2008-12-17 Thread Nicholas Rodolfich
Hello All,

Thanks for your help!!


I am primarily a server guy with a UNIX background, so I am light on the
Windows platform as it relates to client backup/restore. Our organization
has about 50 laptops (mostly XP Pro) that we need to back up so that we can
restore them during drive failures or laptop upgrades. We seem to be
having more and more drive failures since SATA drives have proliferated
in the market. The majority of our laptop fleet is remote to
our office. I hope to provide a solution where we can have users
take a backup of their laptop when they are in the office every other week
or so. That way, when we do have a drive failure or someone gets a laptop
refresh, they don't have to totally recreate their laptop's working
environment. Currently the Windows guys are starting over completely
using the recovery CDs supplied by the vendor (Lenovo), but just feeding the
6 CDs is a four-hour deal.

My question leans toward the whole backup/restore process. What is the
easiest/best way to back up and restore these laptops? I was thinking of
using the TSM image capabilities, but reading up on it, it seems there are
several prerequisites to making it work. Maybe Cristie BMR, or FastBack,
but FastBack requires the MS AIK, another server, etc.  I can and have
read much documentation on the subject, but I would prefer some empirical
knowledge from those who really know before I start banging my head against
the wall. Any advice rendered would be greatly appreciated.

Thanks Again!!

Nicholas

Re: Windows laptop restore advice needed

2008-12-17 Thread Remco Post

On 17 dec 2008, at 17:48, Nicholas Rodolfich wrote:


Hello All,

Thanks for your help!!


I am primarily a server guy with a UNIX background so I am light on
the
Windows platform as it relates to client backup/restore. Our
organization
has about 50 laptops (mostly XP pro) that we need to backup so that
we can
restore  them during drive failure issues or laptop upgrades. We
seem to be
having more and more drive failure since the SATA drives have
proliferated
into the market. The majority of our laptop fleet is primarily
remote to
our office. I hope to be able to provide a solution where we can
have users
take a backup of their laptop when they are in the office every
other week
or so. That way when we do have a drive failure or someone gets a
laptop
refresh they don't have to totally recreate their laptop's working
environments. Currently the  windows guys are starting over completely
using the recovery CDs supplied by the vendor (Lenovo) but just
feeding the
6 CDs is a 4 hours deal.

My question leans toward the whole backup/restore process. What is the
easiest/best way to backup and restore these laptops. I was thinking
of
using the TSM image capabilities but reading up on it seems like
there are
several prerequisites to making it work. Maybe Christie BMR, or
Fastback
but Fastback requires the MS AIK, another server, etc..  I can and
have
read much  doc on the subject but I would prefer some empirical
knowledge
from those who really know before I start banging my head against
the wall.
Any advice rendered would be greatly appreciated.



look into a great TSM product, CDP for Files; it's the ideal solution
for your needs.


Thanks Again!!

Nicholas


--

Remco Post
r.p...@plcs.nl
+31 6 24821 622


Re: Windows laptop restore advice needed

2008-12-17 Thread Wanda Prather
First thing is to decide exactly what kind of restore you are trying to do.

-In the case of a hard drive failure, when the machine won't boot, do you
want to do a bare-metal restore?  The reason for doing a bare-metal restore
is to recover system state and installed software as well as data.  It isn't
supported across different hardware, although people have reported
considerable success doing it with XP and 2003.

If that's what you really want, search this list for "bare metal restore",
and search the TSM support page for "bare machine restore"; you'll find
lots of instructions.  In general, you reload Windows to the point where you
can get back on the network, then restore the C: drive and system state with
the TSM BA client.

-In the case of a laptop upgrade, you are probably going to be moving to
different hardware and re-establishing the software environment anyway -
you'll probably be installing upgraded/new releases of software, different
drivers, etc.  In that case, usually all you want to do is recover the
user's data.  For that case, I also really like the Tivoli CDP for files
product.  It's very inexpensive, provides continuous protection, and is much
more tolerant of frequent network disconnects.

Either way, you want to get your Windows guys an imaging product (not an
image backup) like Ghost; there are several others on the market as well.
The idea is to create a pretty standard desktop setup, that already has
the office, mail, and browser products your company uses installed on it.
You make a copy with the Ghost(like) product, then anytime somebody needs a
new laptop/hard disk, you can load the standard Windows environment very
quickly.

Then either do your bare metal restore, or your CDP restore.

W

On Wed, Dec 17, 2008 at 11:48 AM, Nicholas Rodolfich 
nrodolf...@cmaontheweb.com wrote:

 Hello All,

 Thanks for your help!!


 I am primarily a server guy with a UNIX background so I am light on the
 Windows platform as it relates to client backup/restore. Our organization
 has about 50 laptops (mostly XP pro) that we need to backup so that we can
 restore  them during drive failure issues or laptop upgrades. We seem to be
 having more and more drive failure since the SATA drives have proliferated
 into the market. The majority of our laptop fleet is primarily remote to
 our office. I hope to be able to provide a solution where we can have users
 take a backup of their laptop when they are in the office every other week
 or so. That way when we do have a drive failure or someone gets a laptop
 refresh they don't have to totally recreate their laptop's working
 environments. Currently the  windows guys are starting over completely
 using the recovery CDs supplied by the vendor (Lenovo) but just feeding the
 6 CDs is a 4 hours deal.

 My question leans toward the whole backup/restore process. What is the
 easiest/best way to backup and restore these laptops. I was thinking of
 using the TSM image capabilities but reading up on it seems like there are
 several prerequisites to making it work. Maybe Christie BMR, or Fastback
 but Fastback requires the MS AIK, another server, etc..  I can and have
 read much  doc on the subject but I would prefer some empirical knowledge
 from those who really know before I start banging my head against the wall.
 Any advice rendered would be greatly appreciated.

 Thanks Again!!

 Nicholas


Advice - DB Restore without tape

2008-04-02 Thread Jim Young

Hi,

Need some quick advice.

I have to test the server upgrade to V5.5 from V5.3.2.0.

I have a basic vanilla install of 5.3.2.0 on AIX but have no access to a
tape drive to restore the TSM database from the live environment.  What's the
easiest way to get a DB backup to this server to restore it?  Is it to create
a new device class of type FILE and back the database up to that?  Any
guidance gratefully received.

Please spare me the RTFM replies.  It's not something I've done before.
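A sketch of the FILE-device-class route on the live 5.3 server (directory and class names here are hypothetical); the resulting volume, together with the volume history and device configuration files, then gets copied to the test box:

```
/* On the production server: back the database up to a FILE volume */
DEFINE DEVCLASS FILEDEV DEVTYPE=FILE DIRECTORY=/tsm/dbbackup MAXCAPACITY=50G
BACKUP DB DEVCLASS=FILEDEV TYPE=FULL

/* On the test server, with the volume, volhist, and devconfig copied over, */
/* run the restore offline:                                                 */
/*     dsmserv restore db todate=today                                      */
```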

Cheers

Jim


Cattles plc Registered in England No: 543610
Kingston House, Centre 27 Business Park,
Woodhead Road, Birstall, Batley, WF179TD.

The views and opinions expressed herein are those of the author and not of 
Cattles plc or any of its subsidiaries.The content of this e-mail is 
confidential, may contain privileged material and is intended solely for the 
recipient(s) named above.

If you receive this in error, please notify the sender immediately and delete 
this e-mail.

Please note that neither Cattles plc nor the sender accepts any responsibility 
for viruses and it is your responsibility to scan the email and attachments(if 
any). No contracts or agreements may be concluded on behalf of Cattles plc or 
its subsidiaries by means of email communications.

This message has been scanned for Viruses
by Cattles and Sophos Puremessage scanning service.


SV: [ADSM-L] Windows TSM server upgrade advice

2007-11-23 Thread Lindström Mikael
Hello,
 
Can you install the TSM code on another disk?
If so, upgrade the OS to 2003 with SP2; after that, uninstall TSM, then
reinstall and upgrade TSM on the other disk.
 
Regards/Micke



From: ADSM: Dist Stor Manager, on behalf of Chris McKay
Sent: Wed 2007-11-21 17:27
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Windows TSM server upgrade advice



Hello,

I'm looking for some advice if possible on upgrading a Windows TSM server. The 
current server is built on Windows 2000, running TSM version 5.3. I have very 
limited system partition space available (approx 2GB free), which houses the OS 
and TSM server
files. I would like to upgrade the OS to Windows 2003 server, and also upgrade 
the version of TSM to version 5.5. I am a little concerned with the space 
available on the system partition, as most likely it would be easier to upgrade 
the OS first, then
upgrade the version of TSM after. I have a second TSM server that is also at 
level 5.3, but on a Windows 2003 OS.

Any advice would be appreciated.

Take care,

Chris


Windows TSM server upgrade advice

2007-11-21 Thread Chris McKay
Hello,

I'm looking for some advice if possible on upgrading a Windows TSM server. The 
current server is built on Windows 2000, running TSM version 5.3. I have very 
limited system partition space available (approx 2GB free), which houses the OS 
and TSM server
files. I would like to upgrade the OS to Windows 2003 server, and also upgrade 
the version of TSM to version 5.5. I am a little concerned with the space 
available on the system partition, as most likely it would be easier to upgrade 
the OS first, then
upgrade the version of TSM after. I have a second TSM server that is also at 
level 5.3, but on a Windows 2003 OS. 

Any advice would be appreciated.

Take care,

Chris


Re: Need advice on a long term TSM storage solution

2007-10-26 Thread Schneider, John
Nicholas,
I think you are right to think about splitting your TSM instance
into two.  An 80GB TSM database isn't all that large, but if you
anticipate it growing gradually bigger from there and never levelling
off, it doesn't make any sense to wait.
We recently had to split a TSM instance in two.  We started
planning it when the TSM database was 150GB (I know, already pretty big)
and we didn't get it done until the TSM database had grown to 225GB.
All the data in that growth was one LARGE client.  We have a single
client with 80TB (yes, TB) of disk.  So we had to split the client so it
has two TSM clients configured on it that send their data to two TSM
instances, splitting it by filesystem.  It was a big job and took a
couple months to use Export/Import node to migrate 40TB of the client's
data over to the new instance. 
As for your question about which media to use, I would
rethink your question.  I don't think you can find a media which is
guaranteed to still be readable in 7 or more years, and by then it might
be so expensive to maintain you would hate being tied to it.  Another
responder suggested something like DataDomain, and that might work for
the local copy, but most sites have an offsite requirement, and
DataDomain doesn't solve that problem.  (OK, some people are replicating
DataDomain to another disk array at a geographic distance away, but that
is not practical for people who use a 3rd party DR site, or one a
thousand miles away).  And if you went with a disk-based solution like
DataDomain, where would you be in 7 years?  Are most disk arrays at
their most reliable 7 years later?  You would have to come up with some
way to migrate to a new disk technology, too. DataDomain might give you
a path for that sort of migration; you would have to ask them.
One advantage of TSM is that it is very flexible about its use
of media.  If you are using LTO3, for example, and down the road you go
to LTO4, it is not that hard to use MOVE DATA to perform the migration.
If you have enough tape drives that you can set a couple of each media
type to continuously run a string of MOVE DATAs, you might be surprised
at how painless it is.
We are right now converting from 3592 drives in one library to
LTO4 drives in another library (both are IBM3584).  This environment has
7 TSM instances, 12 local storage pools, and 8 offsite pools.  We
started the migration a couple months ago, and have migrated about 700
tapes of 3592 data.  We have about 300 to go to finish the local data,
then we will start on the offsite data.  We hope to be done by
February or March.  Yes, it takes a long time to move the data, but with
a couple simple scripts to run the MOVE DATAs, it doesn't take a lot of
people time to administer the process.  It just chugs along getting the
job done. 
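The drain scripts John mentions can be as simple as a loop over the old pool's volumes; the storage pool name and admin credentials below are hypothetical:

```
#!/bin/sh
# Move every volume in the old-technology pool, one at a time.
# -dataonly=yes suppresses headers so the SELECT returns bare volume names.
for vol in $(dsmadmc -id=admin -password=secret -dataonly=yes \
        "select volume_name from volumes where stgpool_name='OLD_3592_POOL'")
do
    dsmadmc -id=admin -password=secret "move data $vol wait=yes"
done
```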
By the way, as a healthcare provider, we have HIPAA requirements
to save some of our tape data for the life of the patient, much longer
than 7 years.
If you want me to share the scripts or more details offline,
don't hesitate to ask.

Best Regards,

John D. Schneider
Lead Systems Administrator - Storage
Sisters of Mercy Health Systems
Email:  [EMAIL PROTECTED]


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Nicholas Rodolfich
Sent: Thursday, October 25, 2007 5:08 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Need advice on a long term TSM storage solution


Hello All,

Thanks for your help!!

Our TSM server resides on an LPAR with 2 processing units and 12Gb of
RAM. We use an IBM 3584 library with an expansion cabinet and 16 drives
(8-LTO1 and 8-LTO2) We have 16 LTO3 drive on order to upgrade our
drives. Our database is at 80Gb so I think I am ready for a new
instance.

We have a HIPAA requirement to keep certain data for 3 years and other
data for 7 years.

What is the best storage solution for this type of requirement?

 I plan to manage the HIPAA data with multiple domains, management
classes, etc but I am not sure what storage medium to use. It seem no
matter which cartridge technology we use that it will end up being a
bunch of work over the years following the tape technology curve. Should
I be looking at something optical or electronic?

Additionally, does it make sense to incarnate another TSM instance?

Thanks for your patience and help!!

Nicholas



IMPORTANT NOTICE:  This message and any included attachments are from
East Jefferson General Hospital, and is intended only for the
addressee(s), and may include Protected Health (PHI) or other
confidential information.  If you are the intended recipient, you are
obligated to maintain it in a secure and confidential manner and
re-disclosure without additional consent or as permitted by law is
prohibited.   If you are not the intended recipient, use of this
information is strictly prohibited and may be unlawful.  Please promptly
reply to the sender by email and delete this message from your computer.

Re: Need advice on a long term TSM storage solution

2007-10-26 Thread Zoltan Forray/AC/VCU
I have a curiosity question.  I saw your comment about moving from 3592
to LTO4.

Why would you do this?  All of my experience with LTO2 has been nothing
but problems/headaches.

We now have 9 3592-E05 drives (some over 1 year old) in our 3494 ATL and
they have been nothing but pure joy. Absolutely no failures since they
were installed. They run non-stop. We don't have tape failures. None of
them have been serviced/replaced.

Contrary to our 8 LTO drives, of which none are the original drives (all
have been replaced at least once, some 3 times), and our 2 3583 libraries
(which are serviced monthly... have had their pickers replaced at least
3 times each... constantly have failures that require power-recycling...
and so on). At least 50 LTO tapes have been stretched, torn, destroyed,
etc.  To save the cost of the 2 LTO libraries (after a year of headaches,
we were going to trash them), they have been relegated to offsite-only
backups. Even then, LTO tapes I tried to use to restore data from old
3590 tapes turned out to be damaged and unreadable.

Yes, I do keep all the firmware and drivers up to date.



Schneider, John [EMAIL PROTECTED]
Sent by: ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU
10/26/2007 10:29 AM
Please respond to
ADSM: Dist Stor Manager ADSM-L@VM.MARIST.EDU


To
ADSM-L@VM.MARIST.EDU
cc

Subject
Re: [ADSM-L] Need advice on a long term TSM storage solution






Nicholas,
 I think you are right to think about splitting your TSM
instance
into two.  An 80GB TSM database isn't all that large, but if you
anticipate it growing gradually bigger from there and never levelling
off, it doesn't make any sense to wait.
 We recently had to split a TSM instance in two.  We
started
planning it when the TSM database was 150GB (I know, already pretty big)
and we didn't get it done until the TSM database had grown to 225GB.
All the data in that growth was one LARGE client.  We have a single
client with 80TB (yes, TB) of disk.  So we had to split the client so it
has two TSM clients configured on it that send their data to two TSM
instances, splitting it by filesystem.  It was a big job and took a
couple months to use Export/Import node to migrate 40TB of the client's
data over to the new instance.
 As for your question about the which media to use, I
would
rethink your question.  I don't think you can find a media which is
guaranteed to still be readable in 7 or more years, and by then it might
be so expensive to maintain you would hate being tied to it.  Another
responder suggested something like DataDomain, and that might work for
the local copy, but most sites have an offsite requirement, and
DataDomain doesn't solve that problem.  (OK, some people are replicating
DataDomain to another disk array at a geographic distance away, but that
is not practical for people who use a 3rd party DR site, or one a
thousand miles away).  And if you went with a disk-based solution like
DataDomain, where would you be in 7 years?  Are most disk arrays at
their most reliable 7 years later?  You would have to come up with some
way to migrate to a new disk technology, too. DataDomain might give you
a path for that sort of migration; you would have to ask them.
 One advantage of TSM is that it is very flexible about
it's use
of media.  If you are using LTO3, for example, and down the road you go
to LTO4, it is not that hard to use MOVE DATA to perform the migration.
If you have enough tape drives that you can set a couple of each media
type to continously run a string of MOVE DATAs, you might be surprised
at how painless it is.
 We are right now converting from 3592 drives in one
library to
LTO4 drives in another library (both are IBM3584).  This environment has
7 TSM instances, 12 local storage pools, and 8 offsite pools.  We
started the migration a couple months ago, and have migrated about 700
tapes of 3592 data.  We have about 300 to go to finish the local data,
then we will start on the offsite data.  We hope to be be done by
February or March.  Yes, it takes a long time to move the data, but with
a couple simple scripts to run the MOVE DATAs, it doesn't take a lot of
people time to administer the process.  It just chugs along getting the
job done.
 By the way, as a healthcare provider, we have HIPAA
requirements
to save some of our tape data for the life of the patient, much longer
than 7 years.
 If you want me to share the scripts or more details
offline,
don't hesitate to ask.

Best Regards,

John D. Schneider
Lead Systems Administrator - Storage
Sisters of Mercy Health Systems
Email:  [EMAIL PROTECTED]


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Nicholas Rodolfich
Sent: Thursday, October 25, 2007 5:08 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Need advice on a long term TSM storage solution


Hello All,

Thanks for your help!!

Our

Re: Need advice on a long term TSM storage solution

2007-10-26 Thread Ben Bullock
Just to agree with John.
We find that we are upgrading tape technologies about every 5 years; the
last jump was from 3590 to 3592 tapes. As he stated, TSM makes it very
easy to add the new devices and set up the hierarchy of the storage
pools to get it ready to go, then some MOVE DATA scripts to get the
process chugging along. It can take a while, but it is very little
administration.

There is always the risk that when you read off of EVERY tape to write
to new media, you will find some unreadable data (which is what you
always hear folks saying about tape), but in our case we read over 800TB
of data with over 700,000,000 objects and had 32 files comprising 5MB
that were unrecoverable (they were in a storage pool we deliberately do
not copy to a copy pool). I found that a very acceptable statistic.

Ben

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Schneider, John
Sent: Friday, October 26, 2007 8:29 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Need advice on a long term TSM storage solution

Nicholas,
I think you are right to think about splitting your TSM instance
into two.  An 80GB TSM database isn't all that large, but if you
anticipate it growing gradually bigger from there and never levelling
off, it doesn't make any sense to wait.
We recently had to split a TSM instance in two.  We started
planning it when the TSM database was 150GB (I know, already pretty big)
and we didn't get it done until the TSM database had grown to 225GB.
All the data in that growth was one LARGE client.  We have a single
client with 80TB (yes, TB) of disk.  So we had to split the client so it
has two TSM clients configured on it that send their data to two TSM
instances, splitting it by filesystem.  It was a big job and took a
couple months to use Export/Import node to migrate 40TB of the client's
data over to the new instance. 
As for your question about the which media to use, I would
rethink your question.  I don't think you can find a media which is
guaranteed to still be readable in 7 or more years, and by then it might
be so expensive to maintain you would hate being tied to it.  Another
responder suggested something like DataDomain, and that might work for
the local copy, but most sites have an offsite requirement, and
DataDomain doesn't solve that problem.  (OK, some people are replicating
DataDomain to another disk array at a geographic distance away, but that
is not practical for people who use a 3rd party DR site, or one a
thousand miles away).  And if you went with a disk-based solution like
DataDomain, where would you be in 7 years?  Are most disk arrays at
their most reliable 7 years later?  You would have to come up with some
way to migrate to a new disk technology, too. DataDomain might give you
a path for that sort of migration; you would have to ask them.
One advantage of TSM is that it is very flexible about it's use
of media.  If you are using LTO3, for example, and down the road you go
to LTO4, it is not that hard to use MOVE DATA to perform the migration.
If you have enough tape drives that you can set a couple of each media
type to continously run a string of MOVE DATAs, you might be surprised
at how painless it is.
We are right now converting from 3592 drives in one library to
LTO4 drives in another library (both are IBM3584).  This environment has
7 TSM instances, 12 local storage pools, and 8 offsite pools.  We
started the migration a couple months ago, and have migrated about 700
tapes of 3592 data.  We have about 300 to go to finish the local data,
then we will start on the offsite data.  We hope to be be done by
February or March.  Yes, it takes a long time to move the data, but with
a couple simple scripts to run the MOVE DATAs, it doesn't take a lot of
people time to administer the process.  It just chugs along getting the
job done. 
By the way, as a healthcare provider, we have HIPAA requirements
to save some of our tape data for the life of the patient, much longer
than 7 years.
If you want me to share the scripts or more details offline,
don't hesitate to ask.

Best Regards,

John D. Schneider
Lead Systems Administrator - Storage
Sisters of Mercy Health Systems
Email:  [EMAIL PROTECTED]


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Nicholas Rodolfich
Sent: Thursday, October 25, 2007 5:08 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Need advice on a long term TSM storage solution


Hello All,

Thanks for your help!!

Our TSM server resides on an LPAR with 2 processing units and 12Gb of
RAM. We use an IBM 3584 library with an expansion cabinet and 16 drives
(8-LTO1 and 8-LTO2) We have 16 LTO3 drive on order to upgrade our
drives. Our database is at 80Gb so I think I am ready for a new
instance.

We have a HIPAA requirement to keep certain data for 3 years and other
data for 7 years.

What is the best storage solution for this type

Re: Need advice on a long term TSM storage solution

2007-10-26 Thread Richard Rhodes
 There is always the risk that when you read off of EVERY tape to write
 to new media, you will find some un-readable data, (which is what you
 always hear folks saying about tape), but in our case we read over 800TB
 of data with over 700,000,000 objects and had 32 files comprising 5MB
 that was unrecoverable (they were in a storagepool we do not put on a
 copypool on purpose). I found that a very acceptable statistic.

 Ben

I have to agree.  We migrated our primary pools from 3590 to 3592, over
600TB, with very few problems.  I think that expiration/reclamation forces
almost all our tapes to be used on some regular basis.  I think this is
a strength - it's more work, but tapes get used regularly.


Rick


-
The information contained in this message is intended only for the
personal and confidential use of the recipient(s) named above. If
the reader of this message is not the intended recipient or an
agent responsible for delivering it to the intended recipient, you
are hereby notified that you have received this document in error
and that any review, dissemination, distribution, or copying of
this message is strictly prohibited. If you have received this
communication in error, please notify us immediately, and delete
the original message.


AW: [ADSM-L] Need advice on a long term TSM storage solution

2007-10-26 Thread Lehmann, Stefan
...Additionally, does it make sense to incarnate another TSM instance?

Hello Nicholas,

yes, it makes sense - not because of performance or anything else, but in
case of logical errors in your database, which require a dsmserv audit db.
We have a 3584 with 3592-E05 drives and a database (30GB) on a DS8100
(really a fast config), but the audit ran about 12 (!) hours.

12 hours offline - not really funny... so we decided to incarnate.
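For reference, the offline audit Stefan describes is run with the server halted, and its runtime scales with database size, which is the argument for keeping each instance's database small (5.x syntax; fix=no gives a report-only pass):

```
dsmserv auditdb fix=yes
```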

Hope this helps

- Stefan - 

 
Sozialversicherungsträger
für den Gartenbau
- GB-IT, Security -
Frankfurter Straße 126
 
34121 Kassel


Need advice on a long term TSM storage solution

2007-10-25 Thread Nicholas Rodolfich
Hello All,

Thanks for your help!!

Our TSM server resides on an LPAR with 2 processing units and 12Gb of
RAM. We use an IBM 3584 library with an expansion cabinet and 16 drives
(8-LTO1 and 8-LTO2) We have 16 LTO3 drive on order to upgrade our
drives. Our database is at 80Gb so I think I am ready for a new
instance.

We have a HIPAA requirement to keep certain data for 3 years and other
data for 7 years.

What is the best storage solution for this type of requirement?

 I plan to manage the HIPAA data with multiple domains, management
classes, etc., but I am not sure what storage medium to use. It seems
that no matter which cartridge technology we use, it will end up being a
bunch of work over the years following the tape technology curve. Should
I be looking at something optical or electronic?

Additionally, does it make sense to incarnate another TSM instance?

Thanks for your patience and help!!

Nicholas



IMPORTANT NOTICE:  This message and any included attachments are from East 
Jefferson General Hospital, and is intended only for the addressee(s), and may 
include Protected Health (PHI) or other confidential information.  If you are 
the intended recipient, you are obligated to maintain it in a secure and 
confidential manner and re-disclosure without additional consent or as 
permitted by law is prohibited.   If you are not the intended recipient, use of 
this information is strictly prohibited and may be unlawful.  Please promptly 
reply to the sender by email and delete this message from your computer. East 
Jefferson General Hospital greatly appreciates your cooperation.


Re: Need advice on a long term TSM storage solution

2007-10-25 Thread Tammy Schellenberg
We have just recently implemented a data domain and so far we love it.
The majority of the issues we had were related to tape.  Right now we
are in the process of moving our data from tape to the DDR, once this is
complete we will be implementing the backup at our offsite disaster
recovery system.  The compression is great, operations tasks are
decreased already.  I would recommend this solution.  Tapes are very
finicky and confining.  

Tammy Schellenberg, DCIS, MCP
Systems Administrator, Information Technology

direct: 604 864 6578
email: [EMAIL PROTECTED]


This email and any files transmitted with it are confidential and are intended 
solely for the use of the individual or entity to whom they are addressed.

If you are not the original recipient or the person responsible for delivering 
the email to the intended recipient, be advised that you have received this 
email in error, and that any use, dissemination, forwarding, printing, or 
copying of this email is strictly prohibited. If you receive this email in 
error, please immediately notify the sender.

Please note that this financial institution neither accepts nor discloses 
confidential member account information via email. This includes password 
related inquiries, financial transaction instructions and address changes.


Re: Request advice on moving from IBM 3494 Library

2007-09-11 Thread Wanda Prather
 If we currently have 6 3590-h drives in the 3494 library, do you think can
 we get by with 4 LTO-4 drives in the new libraries given that speed and
 storage capacity is so much better with LTO-4 drives? Most of the tape
 activity is backups only. Occasionally we have a file restore or two.

It depends.  Probably, if your host can PUSH the data to the LTO-4
drives faster than they currently push it to the 3590's.



 We are hoping to keep the new tape library for 5+ years so reliability is
 a
 big factor.  Any LTO-4 libraries more reliable than others?

Absolutely.  There are Enterprise-class libraries, mid-range tape
libraries, and entry-level libraries.  Since you are talking 4 drives, you
are talking mid-range or Enterprise class libraries.

The 3584 (now mysteriously renamed to the TS3500) is the replacement for
the 3494, and will give you the same or BETTER reliability.  You can also
buy it with a limited capacity option (which is either 80 or 100 slots,
don't remember), and it is much more economical that way.  The
Quantum/ADIC i2000 will give you the same reliability, although I don't
like it because it won't autoclean.  The large STK libraries are also
enterprise-class.  I don't know if Overland has an Enterprise-class
library.

Speaking from personal experience, any models below those, aren't going to
give you the kind of reliability you are used to with the 3494.

Just my opinion, and nobody else's.
W



 Do tapes last
 5
 years?

 Any advice or ideas are very welcome as everything is undecided right now.

 John



Re: Request advice on moving from IBM 3494 Library

2007-09-11 Thread Strand, Neil B.
John,
   If you are experienced with the 3494 library, they are working well
and tape mount speed is acceptable, you may consider just replacing the
3590 drives with TS1120 (3592) drives.  The cartridges fit in the same
slots as 3590 cartridges and you can significantly increase the capacity
of the library allowing you to remove a few frames if necessary.
   You could migrate from 3590 to 3592 in the same library by installing
the new drives while keeping the old drives in place for a few weeks.
You may also consider keeping the 3590 drives and cartridges for
copypool/offsite storage.  This would reduce your initial new media
cost.  Eventually, you could migrate to all 3592 media.

Cheers,
Neil


IMPORTANT:  E-mail sent through the Internet is not secure. Legg Mason 
therefore recommends that you do not send any confidential or sensitive 
information to us via electronic mail, including social security numbers, 
account numbers, or personal identification numbers. Delivery, and or timely 
delivery of Internet mail is not guaranteed. Legg Mason therefore recommends 
that you do not send time sensitive 
or action-oriented messages to us via electronic mail.

This message is intended for the addressee only and may contain privileged or 
confidential information. Unless you are the intended recipient, you may not 
use, copy or disclose to anyone any information contained in this message. If 
you have received this message in error, please notify the author by replying 
to this message and then kindly delete the message. Thank you.


Re: Request advice on moving from IBM 3494 Library

2007-09-11 Thread Ben Bullock
To echo that, used 3494 tape libraries are going for cheap these days.
We recently bought a used one that had 7 frames and 5 new TS1120 tape
drives for under 100K. That's somewhere between 660TB and 1.8PB for
$100K. 

For this case where we have about 8TB a day of seldom accessed archives
that we have to keep for 2 years, it's a cost/GB that couldn't be beat. 



Re: Request advice on moving from IBM 3494 Library

2007-09-11 Thread Robert Clark

The only drawbacks are the need to upgrade the library managers to
a level that will support the TS1120, and the maintenance costs compared
to a TS3500 et al.

[RC]



Re: Request advice on moving from IBM 3494 Library

2007-09-07 Thread Richard Sims

On Sep 7, 2007, at 9:42 AM, John C Dury wrote:


...Any LTO-4 libraries more reliable than others? Do tapes last 5
years? ...


Tape life is limited by usage.  Manufacturers will quote various
numbers for archival lifetime (sitting on a shelf; usually 30 years),
number of load/unload operations, and number of passes of the media
over the head.  See the Availability section in
http://www.storagetek.com/products/product_page48.html for an example.
An HP Ultrium warranty statement
(http://h2.www2.hp.com/bizsupport/TechSupport/Document.jsp?locale=en_US&taskId=120&prodSeriesId=34648&prodTypeId=12169&objectID=lpg50212)
is one million passes or 260 full backups (FVB) or a combination of
FVB and restores (whichever is soonest).  Also, see the article
http://searchstorage.techtarget.com/tip/0,289483,sid5_gci1253102,00.html
on LTO lifespan for perspective.  Further, a given technology may
quickly become obsolete rather than wear out: an LTO-4 cartridge finally
taken off the shelf in 2037 will be a knickknack rather than usable
media.

   Richard Sims


Request advice on moving from IBM 3494 Library

2007-09-07 Thread John C Dury
We currently have two 3494 libraries with 6 3590-H drives in each that have
each been paid for and depreciated and are still working well for the most
part.  One of the libraries is offsite but accessible via dark fiber we own
and is setup as a copy storage pool for DR. We are looking into upgrading
the tape components of our TSM system to two libraries with 4 LTO-4 drives
in each because the speed and capacity difference with LTO-4 drives is so
much larger than the 3494 library. We will also save lots of floor space as
newer libraries are a fraction of the size of the 3494s we have that
currently have 7 frames each. The 3494 is partitioned so that part is for
TSM and part for the mainframe but when this is all said and done, the
mainframe will be going away so a newer tape library will only be used for
open systems. So far we have talked to several vendors about their products
including: IBM,STK and EMC and have even looked at some third party vendors
like Overland.  Right now everything is open to suggestion but we are
severely budget challenged. Some of the questions I'm looking for advice
about are:

If we currently have 6 3590-H drives in the 3494 library, do you think we
can get by with 4 LTO-4 drives in the new libraries, given that speed and
storage capacity is so much better with LTO-4 drives? Most of the tape
activity is backups only. Occasionally we have a file restore or two.

We are hoping to keep the new tape library for 5+ years so reliability is a
big factor.  Any LTO-4 libraries more reliable than others? Do tapes last 5
years?

Any advice or ideas are very welcome as everything is undecided right now.

John


Re: TDP for SQL advice

2007-08-03 Thread Del Hoobler
Hi Paul,

Data Protection for SQL can certainly run LOG backups.

You could modify your automated scheduled batch file
to call a SQL Server job to shrink the files.
For example, something like this:

   tdpsqlc backup * full
   tdpsqlc backup * log
   osql -E -i shrinkjob.sql

where shrinkjob.sql is a T-SQL command file to
shrink the log for you.
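A minimal sketch of what such a shrinkjob.sql might contain (the database name MyDB and log file logical name MyDB_Log are placeholders, not from the original post; sp_helpfile shows your actual logical names):

```sql
-- Hypothetical shrinkjob.sql: shrink the transaction log after the
-- TDP log backup has run. MyDB and MyDB_Log are placeholder names --
-- substitute your database name and its log file's logical name
-- (run sp_helpfile in the database to find it).
USE MyDB;
GO
-- Shrink the log file down to a target size of 100 MB
DBCC SHRINKFILE (MyDB_Log, 100);
GO
```

This keeps the whole full/log/shrink sequence inside the one scheduled batch file, as described above.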

Thanks,

Del






TDP for SQL advice

2007-08-02 Thread Paul Dudley
We currently perform MS SQL database backups using TDP (using the
tdpsqlc backup * full command)

We then use Enterprise Manager to run a log backup (using the BACKUP LOG
command) followed by a log shrink job (using the DBCC SHRINKFILE
command)

Is there a way to do all of these within TDP for SQL?

Regards
Paul

 
Paul Dudley
ANL IT Operations Dept.
ANL Container Line
[EMAIL PROTECTED]




ANL - CELEBRATING 50 YEARS

ANL DISCLAIMER
This e-mail and any file attached is confidential, and intended solely to the 
named addressees. Any unauthorised dissemination or use is strictly prohibited. 
If you received this e-mail in error, please immediately notify the sender by 
return e-mail from your system. Please do not copy, use or make reference to it 
for any purpose, or disclose its contents to any person.


TDP for SQL advice

2007-07-04 Thread Paul Dudley
We currently perform SQL database backups using TDP (using the tdpsqlc
backup * full command)



We then use Enterprise Manager to run a log backup (using the BACKUP LOG
command) followed by a log shrink job (using the DBCC SHRINKFILE
command)



Is there a way to do all of these within TDP for SQL?



Regards
Paul

Paul Dudley
ANL IT Operations Dept.
ANL Container Line
[EMAIL PROTECTED]
03-9257-0603
http://www.anl.com.au



Re: TDP for SQL advice

2007-07-04 Thread Wanda Prather
I think so (not being an SQL guru, I'm not sure what shrink means exactly).

Look at the TDP for SQL book; there are some parms you can add after the
backup * full that determine whether the log is reset or not.





Help! Need Advice on DBB

2007-01-25 Thread Joni Moyer
Hello Everyone,

I currently have a TSM server 5.2.7.1 on AIX 5.3 and I use DRM.  I have
the following set up and I take a full dbb once a day which goes to LTO2
tape.

 Recovery Plan Prefix: /t01/dr/recoveryplans/
  Plan Instructions Prefix: /t01/dr/instructions/
Replacement Volume Postfix: @
 Primary Storage Pools: AIX LINUX NETWARE NOTES ORACLE
ORACLE-LOGS
 SOLARIS WINDOWS TAPE_AIX TAPE_LINUX
 TAPE_NETWARE TAPE_NOTES TAPE_ORACLE
 TAPE_ORACLE-LOGS TAPE_SOLARIS
TAPE_WINDOWS
Copy Storage Pools: COPY*
   Not Mountable Location Name: NOTMOUNTABLE
  Courier Name: Vital Records Inc.
   Vault Site Name: Vital Records Inc.
  DB Backup Series Expiration Days: 8 Day(s)
Recovery Plan File Expiration Days: 8 Day(s)
  Check Label?: Yes
 Process FILE Device Type?: Yes
 Command File Name: /t01/dr/exec.cmds

Since I have the DB Backup Series Expiration Days set to 8, how is it
possible that I can see 13 DBBs in the volhistory?

I would now like to run a second dbb that will go to disk which is then
replicated to our disaster recovery site. And create a 2nd DRM plan after
this dbb is run as well.

My dilemma is that I'm not quite sure how to set up such an environment.
To set up a dbb to go to disk I know that I'll need a device class of file
which will point to the directory where I want the dbb to be created.  How
do I know how much space I will need?

01/24/07 10:29:42 ANR4550I Full database backup (process 7496)
complete,
   31252557 pages copied. (SESSION: 750659, PROCESS:
7496)
01/24/07 10:29:42 ANR0985I Process 7496 for DATABASE BACKUP running in
the
   BACKGROUND completed with completion state SUCCESS
at
   10:29:42. (SESSION: 750659, PROCESS: 7496)

Will I need 31252557 pages * 4096 bytes = 128010473472 bytes, i.e. about 128 GB?
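For a sanity check on that arithmetic (a TSM database page is 4 KB, i.e. 4096 bytes; the page count comes from the ANR4550I message above):

```python
# Estimate the file space needed for one full TSM database backup.
# The page count comes from the ANR4550I "pages copied" message;
# each TSM database page is 4 KB (4096 bytes).
pages_copied = 31252557
page_size_bytes = 4096

total_bytes = pages_copied * page_size_bytes
print(total_bytes)              # 128010473472 bytes
print(total_bytes / 10**9)      # ~128.0 decimal GB
print(total_bytes / 2**30)      # ~119.2 binary GiB
```

So roughly 128 GB of disk for a single full backup, before allowing any headroom for growth or a second backup generation.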

Also, how will it be possible for me to create both each day and use each
for my DRM plan?

And how will I continue to keep 8 tape dbb versions and only keep 1 disk
dbb version?

Is what I am trying to do impossible?  Any advice/suggestions are
appreciated!  Thanks in advance!


Joni Moyer
Highmark
Storage Systems, Senior Systems Programmer
Phone Number: (717)302-9966
Fax: (717) 302-9826
[EMAIL PROTECTED]



Fw: Help! Need Advice on DBB: Update to questions

2007-01-25 Thread Joni Moyer
 = 128010473472 KB = 128 GB?

Also, how will it be possible for me to create both each day and create 2
DRM plans each day?  1 for the tape dbb and 1 for the disk dbb?

And how will I continue to keep 8 tape dbb versions and only keep 1 disk
dbb version?

Is what I am trying to do impossible?  Any advice/suggestions are
appreciated!  Thanks in advance!


Joni Moyer
Highmark
Storage Systems, Senior Systems Programmer
Phone Number: (717)302-9966
Fax: (717) 302-9826
[EMAIL PROTECTED]



Re: Help! Need Advice on DBB

2007-01-25 Thread Allen S. Rout
 On Thu, 25 Jan 2007 08:17:47 -0500, Joni Moyer [EMAIL PROTECTED] said:


 I would now like to run a second dbb that will go to disk which is
 then replicated to our disaster recovery site. And create a 2nd DRM
 plan after this dbb is run as well.

I spent a few years cutting snapshots here, fulls there, etc.

Where I ended up was doing my database backups to virtual volumes,
written to a TSM server which was itself local, but had remote copy
pools.

This meant that:

1) I could write to disk for concurrency's sake, but still store data on tape

2) I could generate multiple physical copies of a DB Backup, without
   multiple intensive BACKUP DB processes.

3) The DB backups I have offsite are -actual copies- of the onsite,
   rather than snapshots taken at approximately similar times.  This
   may be merely an aesthetic concern, but it always bothered me to
   have my offsite different from my onsite DBB.



I ended up also using the server that got DBBs as library manager,
too.  Here's a pretty detailed discussion of it, last updated (dang!)
at the beginning of 2005.  I should fix that.

http://open-systems.ufl.edu/services/NSAM/whitepapers/design.html



- Allen S. Rout


Re: Help! Need Advice on DBB

2007-01-25 Thread PAC Brion Arnaud
Hi Joni,

I think you should take advantage of the different types of DB backups
that Tivoli offers: DB backups and DB snapshots.
You could use a tape-based devclass for dbbackups and a disk-based
devclass for dbsnapshots, thus using different retention values for each
of them.
Do not forget that you are able to define the retention delay for DB
backups/snapshots using the del volhist command, specifying
todate=today-x and type=dbbackup or type=dbsnapshot.
Use this in a script that is run once a day!

In such a case, do not forget to set DB Backup Series Expiration Days
to something longer than the maximal retention of the dbbackups you
would like to keep, or it will take precedence over your del volhist
command ...
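A sketch of what such a daily cleanup script might look like, run through the TSM administrative command-line client (the admin ID, password, and the 8-day/1-day retentions are illustrative assumptions, not from the original post):

```shell
# Hypothetical daily cleanup of old DB backup/snapshot volumes via the
# TSM administrative CLI. "admin"/"secret" are placeholder credentials;
# 8 days for tape dbbackups and 1 day for disk dbsnapshots are example
# retentions matching the setup discussed in this thread.
dsmadmc -id=admin -password=secret "delete volhistory type=dbbackup todate=today-8"
dsmadmc -id=admin -password=secret "delete volhistory type=dbsnapshot todate=today-1"
```

Scheduled once a day (e.g. as a TSM administrative schedule or cron job), this gives each backup type its own retention, independent of the single DRM expiration setting.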

Hope this helped ... 

Arnaud 


**
Panalpina Management Ltd., Basle, Switzerland, 
CIT Department Viadukstrasse 42, P.O. Box 4002 Basel/CH
Phone:  +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
Direct: +41 (61) 226 19 78
e-mail: [EMAIL PROTECTED]

**



Would like some advice for TSM server upgrade

2005-09-29 Thread Dave Zarnoch
Folks,

We  are soon planning to upgrade our version of TSM from:

Server Version 5, Release 1, Level 6.2

To:

Server Version 5, Release 3, Level 1.2

Any bumps in the road?

Any advice?

I have also opened a call with TSM service but, would like to hear some
other
information if possible

Thanks!

Dave Zarnoch
Nationwide Provident
[EMAIL PROTECTED]


Re: Advice needed for EMC CX500 setup

2005-06-08 Thread Richard Rhodes
Hi Brion,

I wish I had a magic formula to tell you what to do, but there is none.

My suggestion is to create a raid5 raidset for the db/log, put your db/log
onto it and run an expiration and db backup to see if  you get the
performance you need.
If it's not good enough, then blow away the raidset and make it bigger and
try again.

Or, just do what we ended up doing - create a couple big raid5 raidsets and
put db/log/storage pool across all drives.  If the db load isn't heavy it
will work, but expect
to have to move the db/log to dedicated volumes sometime.  This is strictly
stop-gap stuff.

I think you will basically have to try a setup, knowing full well you may
have to
change it if performance isn't good.

Rick





From: PAC Brion Arnaud <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Advice needed for EMC CX500 setup
Date: 06/07/2005 09:50 AM






Richard (Aaron too),

Many thanks for taking some of your time trying to help me !

Yes, you're absolutely right, my mail was lacking some important
information, but I did not really know where to begin ...
To answer some of your questions :

- number of nodes : approx 200, equally split in Windows and AIX
environments. Some of them also hosting Informix and DB2 databases.
- backups/day : approx 400 GB AIX/Win data, and 500 to 750 GB
informix/db2. No archiving (except db2 logical logs), no hsm.
- db actual size : 30GB, 80% used; expected 50% increase within one year
(proportional to the expected increase of nodes). Number of objects
(from expiration process) : 610. Our actual performance for
expiration is approx 160 obj/hr, not much, but due to time
constraints space reclamation is running in parallel and slows it. I would
expect getting the same performance, or better !
- log size : 13 GB for security as I'm using rollforward mode
- expected iops : well, no idea at all. Some formula to calculate this ?

Do you need something else ?
Regards.

Arnaud


**
Panalpina Management Ltd., Basle, Switzerland, CIT Department
Viadukstrasse 42, P.O. Box 4002 Basel/CH
Phone:  +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
Direct: +41 (61) 226 19 78
e-mail: [EMAIL PROTECTED]

**

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Rhodes
Sent: Tuesday, 07 June, 2005 14:46
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Advice needed for EMC CX500 setup

Hi Brion,

I see that no one else has responded . . . . . let me stir up the waters
and see what shakes loose . . .

You don't indicate the number of nodes, db objects (files), backup
rate (gb/day), etc, etc that you expect.  This is key.  You indicate a
db of 70 gb with a full sized log - this means to me that you expect
heavy db activity.  For the tsm db, or for any other db for that matter,
the question is how many iops do you need?  The 133gb fc disk drive can
probably give you around 120+ iops.  I'm afraid that a 3 spindle raid 5
probably is not enough iops to run a tsm instance with the activity you
might have.
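Rick's numbers here (roughly 120 IOPS per FC spindle, plus the RAID-5 small-write penalty of 4 back-end I/Os per random write that comes up later in the thread) can be combined into a rough spindle-count estimate. This is only a sketch: the per-drive IOPS figure and the 70/30 read/write mix are assumptions to replace with your own measurements, and cache effects are ignored.

```python
import math

def raid5_spindles_needed(front_end_iops: float,
                          read_fraction: float = 0.7,
                          iops_per_drive: float = 120.0) -> int:
    """Estimate how many RAID-5 spindles a random workload needs.

    A random read costs 1 back-end I/O; a random small write costs 4
    (read data, read parity, write data, write parity).
    """
    reads = front_end_iops * read_fraction
    writes = front_end_iops * (1.0 - read_fraction)
    back_end_iops = reads + 4 * writes
    return math.ceil(back_end_iops / iops_per_drive)

# e.g. a TSM DB generating 500 random front-end IOPS at a 70/30 mix:
print(raid5_spindles_needed(500))   # -> 8 spindles
```

Running the same estimate backwards shows why a 3-spindle RAID-5 group is marginal: it tops out well under 200 front-end IOPS at that mix.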

Before my time on the storage team, Emc sold a disk subsystem to us for
one of our TSM servers.  It was a Clariion with all big drives.  Like
most disk purchases, it was configured on capacity, not i/o load.  There
was no way to split the drives between db/log and staging pools and have
enough disk space for staging pools, or, enough spindles for db/log.
The only way to make it work was to spread db/log/pools across all the
spindles.  NOT good . . . . actually, it worked quite well for the first
year.

Rick








From: PAC Brion Arnaud <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Subject: Advice needed for EMC CX500 setup
Date: 06/06/2005 01:13 PM






Hi list,

I need some guidance for setup of a brand new EMC array, equipped with
10 FC drives, 133GB each. TSM server (5.3.1) will be running on AIX 5.3.
My goal is to use those disks for following purposes :
- TSM DB 70 GB
- TSM log 13 GB
- TSM primary pools : all of the remaining space

Questions :

1) EMC recommends using

Re: Advice needed for EMC CX500 setup

2005-06-08 Thread Troy Frank
If you've got enough disks for it you also might want to think about a RAID 0+1 
for the DB/Log volumes, as this would give much better write performance than 
RAID5.


 [EMAIL PROTECTED] 6/8/2005 6:38:45 AM 
Hi Brion,

I wish I had a magic formula to tell you what to do, but there is none.

My suggestion is to create a raid5 raidset for the db/log, put your db/log
onto it and run an expiration and db backup to see if  you get the
performance you need.
If it's not good enough, then blow away the raidset and make it bigger and
try again.

Or, just do what we ended up doing - create a couple big raid5 raidsets and
put db/log/storage pool across all drives.  If the db load isn't heavy it
will work, but expect
to have to move the db/log to dedicated volumes sometime.  This is strictly
stop-gap stuff.

I think you will basically have to try a setup, knowing full well you may
have to
change it if performance isn't good.

Rick





From: PAC Brion Arnaud <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Advice needed for EMC CX500 setup
Date: 06/07/2005 09:50 AM






Richard (Aaron too),

Many thanks for taking some of your time trying to help me !

Yes, you're absolutely right, my mail was lacking some important
information, but I did not really know where to begin ...
To answer some of your questions :

- number of nodes : approx 200, equally split in Windows and AIX
environments. Some of them also hosting Informix and DB2 databases.
- backups/day : approx 400 GB AIX/Win data, and 500 to 750 GB
informix/db2. No archiving (except db2 logical logs), no hsm.
- db actual size : 30GB, 80% used; expected 50% increase within one year
(proportional to the expected increase of nodes). Number of objects
(from expiration process) : 610. Our actual performance for
expiration is approx 160 obj/hr, not much, but due to time
constraints space reclamation is running in parallel and slows it. I would
expect getting the same performance, or better !
- log size : 13 GB for security as I'm using rollforward mode
- expected iops : well, no idea at all. Some formula to calculate this ?

Do you need something else ?
Regards.

Arnaud



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Rhodes
Sent: Tuesday, 07 June, 2005 14:46
To: ADSM-L@VM.MARIST.EDU 
Subject: Re: Advice needed for EMC CX500 setup

Hi Brion,

I see that no one else has responded . . . . . let me stir up the waters
and see what shakes loose . . .

You don't indicate the number of nodes, db objects (files), backup
rate (gb/day), etc, etc that you expect.  This is key.  You indicate a
db of 70 gb with a full sized log - this means to me that you expect
heavy db activity.  For the tsm db, or for any other db for that matter,
the question is how many iops do you need?  The 133gb fc disk drive can
probably give you around 120+ iops.  I'm afraid that a 3 spindle raid 5
probably is not enough iops to run a tsm instance with the activity you
might have.

Before my time on the storage team, Emc sold a disk subsystem to us for
one of our TSM servers.  It was a Clariion with all big drives.  Like
most disk purchases, it was configured on capacity, not i/o load.  There
was no way to split the drives between db/log and staging pools and have
enough disk space for staging pools, or, enough spindles for db/log.
The only way to make it work was to spread db/log/pools across all the
spindles.  NOT good . . . . actually, it worked quite well for the first
year.

Rick








From: PAC Brion Arnaud <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Subject: Advice needed for EMC CX500 setup
Date: 06/06/2005 01:13 PM






Hi list,

I need some guidance for setup of a brand new EMC array, equipped with
10 FC drives, 133GB each. TSM

Re: Advice needed for EMC CX500 setup

2005-06-08 Thread PAC Brion Arnaud
Richard, Frank

I really would like to use RAID 0+1, unfortunately this would waste
too much of the precious space I'm needing for primary pools.

Richard's idea seems to be the right one : I'll try running some
performance tests (expiration, db backup) on several possible settings
and opt for the most satisfying one. Kind of an empirical way of doing
things, but the lack of precise documentation and/or advice leaves me
no other way ...

Thanks again !
Regards.


Arnaud 


**
Panalpina Management Ltd., Basle, Switzerland, CIT Department
Viadukstrasse 42, P.O. Box 4002 Basel/CH
Phone:  +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
Direct: +41 (61) 226 19 78
e-mail: [EMAIL PROTECTED]

**

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Troy Frank
Sent: Wednesday, 08 June, 2005 15:12
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Advice needed for EMC CX500 setup

If you've got enough disks for it you also might want to think about a
RAID 0+1 for the DB/Log volumes, as this would give much better write
performance than RAID5.


 [EMAIL PROTECTED] 6/8/2005 6:38:45 AM 
Hi Brion,

I wish I had a magic formula to tell you what to do, but there is none.

My suggestion is to create a raid5 raidset for the db/log, put your
db/log onto it and run an expiration and db backup to see if  you get
the performance you need.
If it's not good enough, then blow away the raidset and make it bigger
and try again.

Or, just do what we ended up doing - create a couple big raid5 raidsets
and put db/log/storage pool across all drives.  If the db load isn't
heavy it will work, but expect to have to move the db/log to dedicated
volumes sometime.  This is strictly stop-gap stuff.

I think you will basically have to try a setup, knowing full well you
may have to change it if performance isn't good.

Rick





From: PAC Brion Arnaud <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Advice needed for EMC CX500 setup
Date: 06/07/2005 09:50 AM






Richard (Aaron too),

Many thanks for taking some of your time trying to help me !

Yes, you're absolutely right, my mail was lacking some important
information, but I did not really know where to begin ...
To answer some of your questions :

- number of nodes : approx 200, equally split in Windows and AIX
environments. Some of them also hosting Informix and DB2 databases.
- backups/day : approx 400 GB AIX/Win data, and 500 to 750 GB
informix/db2. No archiving (except db2 logical logs), no hsm.
- db actual size : 30GB, 80% used; expected 50% increase within one year
(proportional to the expected increase of nodes). Number of objects
(from expiration process) : 610. Our actual performance for
expiration is approx 160 obj/hr, not much, but due to time
constraints space reclamation is running in parallel and slows it. I would
expect getting the same performance, or better !
- log size : 13 GB for security as I'm using rollforward mode
- expected iops : well, no idea at all. Some formula to calculate this ?

Do you need something else ?
Regards.

Arnaud



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Rhodes
Sent: Tuesday, 07 June, 2005 14:46
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Advice needed for EMC CX500 setup

Hi Brion,

I see that no one else has responded . . . . . let me stir up the waters
and see what shakes loose . . .

You don't indicate the number of nodes, db objects (files), backup
rate (gb/day), etc, etc that you expect.  This is key.  You indicate a
db of 70 gb with a full sized log - this means to me that you expect
heavy db activity.  For the tsm db, or for any other db for that matter,
the question is how many iops do you need?  The 133gb fc disk drive can
probably give you around 120+ iops.  I'm afraid that a 3 spindle raid 5
probably is not enough iops to run a tsm instance with the activity you
might have.

Before my time on the storage team, Emc sold a disk subsystem to us for
one of our TSM servers.  It was a Clariion with all big drives.  Like

Re: Advice needed for EMC CX500 setup

2005-06-08 Thread Richard Rhodes
At the recent EMC conference at a session about Clariions, they indicated
several things about Clariions that I thought I would share.  From my
notes . . . .

1)  Raid 0+1 is better for writes, but doesn't help with reads.  It will
only/mostly use just one copy for reads.  It's not as smart as a
dmx/symm, which will perform reads from both copies.  Raid 5
random reads are just as good as 0+1 random reads.

2)  Raid 3.  The way I heard it explained, r3 was recommended for
big block sequential processing (like tsm staging pools) for ATA
drives.  The reasoning was that ATA drives don't currently have command
tag queueing.  Since they can only do one i/o op at a time, r3 fits
real well with this.  They recommended this for up to around 10
concurrent data streams to the raidset.   The quote went something
like this:  ATA drives are brain dead, and raid 3 optimizes their
brain-deadness.

3)  Raid 5 on Fiber Channel drives is  very good with large block
sequential I/O.  Raid 3 is not needed here.

4)  Raid5 definitely has the write performance penalty, which
requires 4 i/o's per random write (not writing a full strip).   This
is hidden behind the write cache and performed later.   They gave
a rule of thumb of using raid5 for random I/O up to a write/read
ratio of around 25-30%.  That is, a random
access pattern where there is roughly 70% reads and 30% writes.

5)  Leave the raidset strip size at the default (I think it's
64k, or 128 blocks).  They have performed all
kinds of internal testing and this is the optimum size for Clariion raid 5
raidsets.  Period.  The internal processing logic of the Clariion is
optimized for this strip size - don't mess with it unless you really
know what you are doing.

6)  15k rpm fc drives give around a 30% increase in small block
random iops over 10k drives.
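Point 4's write penalty can also be run in the other direction, to estimate what a given raid group can sustain. A minimal sketch, again assuming the ~120 IOPS/spindle figure quoted earlier in the thread and ignoring how much the write cache absorbs:

```python
def raid5_front_end_iops(n_drives: int,
                         read_fraction: float = 0.7,
                         iops_per_drive: float = 120.0) -> float:
    """Front-end random IOPS a RAID-5 group can sustain.

    Average back-end cost per front-end I/O is
    read_fraction * 1 + (1 - read_fraction) * 4, the 4 reflecting the
    small-write penalty described in point 4 above.
    """
    back_end_capacity = n_drives * iops_per_drive
    cost_per_io = read_fraction * 1 + (1 - read_fraction) * 4
    return back_end_capacity / cost_per_io

# A 3-spindle RAID-5 group at a 70/30 read/write mix:
print(round(raid5_front_end_iops(3)))   # front-end IOPS
```

This is why a small raid group that looks fine on capacity can still be starved for I/O once a db workload lands on it.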


Rick


-
The information contained in this message is intended only for the personal
and confidential use of the recipient(s) named above. If the reader of this
message is not the intended recipient or an agent responsible for
delivering it to the intended recipient, you are hereby notified that you
have received this document in error and that any review, dissemination,
distribution, or copying of this message is strictly prohibited. If you
have received this communication in error, please notify us immediately,
and delete the original message.


Re: SUSPECT: (MSW) Re: Advice needed for EMC CX500 setup

2005-06-08 Thread PAC Brion Arnaud
Richard,

This is what I call helpful information ! It confirms what I read in
EMC's engineering white paper Backup-to-Disk Guide with IBM Tivoli
Storage Manager, where they recommend using RAID3 for storage pools.
Unfortunately there was no word about TSM DB and logs ...
As our Clariion is also equipped with 50 TB of SATA drives, I'll use them as
sequential type volumes with raid3, and will give raid5 a chance on FC
drives, for the primary disk pool where our nightly backups are landing
(hopefully it will not be too slow), and for DB and LOGS too ...
Thanks.

Arnaud 



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Rhodes
Sent: Wednesday, 08 June, 2005 16:57
To: ADSM-L@VM.MARIST.EDU
Subject: SUSPECT: (MSW) Re: Advice needed for EMC CX500 setup

At the recent EMC conference at a session about Clariions, they
indicated several things about Clariions that I thought I would share.
From my notes . . . .

1)  Raid 0+1 is better for writes, but doesn't help with reads.  It will
only/mostly use just one copy for reads.  It's not as smart as a
dmx/symm, which will perform reads from both copies.  Raid 5 random
reads are just as good as 0+1 random reads.

2)  Raid 3.  The way I heard it explained, r3 was recommended for big
block sequential processing (like tsm staging pools) for ATA drives.
The reasoning was that ATA drives don't currently have command tag
queueing.  Since they can only do one i/o op at a time, r3 fits real
well with this.  They recommended this for up to around 10
concurrent data streams to the raidset.   The quote went something
like this:  ATA drives are brain dead, and raid 3 optimizes their
brain-deadness.

3)  Raid 5 on Fiber Channel drives is  very good with large block
sequential I/O.  Raid 3 is not needed here.

4)  Raid5 definitely has the write performance penalty, which
requires 4 i/o's per random write (not writing a full strip).   This
is hidden behind the write cache and performed later.   They gave
a rule of thumb of using raid5 for random I/O up to a write/read ratio
of around 25-30%.  That is, a random access pattern where there is roughly 70%
reads and 30% writes.

5)  Leave the raidset strip size at the default (I think it's 64k, or
128 blocks).  They have performed all kinds of internal testing and this
is the optimum size for Clariion raid 5 raidsets.  Period.  The internal
processing logic of the Clariion is optimized for this strip size -
don't mess with it unless you really know what you are doing.

6)  15k rpm fc drives give around a 30% increase in small block random
iops over 10k drives.


Rick




Re: Advice needed for EMC CX500 setup

2005-06-07 Thread Richard Rhodes
Hi Brion,

I see that no one else has responded . . . . . let me stir up the waters
and
see what shakes loose . . .

You don't indicate the number of nodes, db objects (files), backup
rate (gb/day), etc, etc
that you expect.  This is key.  You indicate a db of 70 gb with a full
sized log - this means to
me that you expect heavy db activity.  For the tsm db, or for any other db
for that matter, the
question is how many iops do you need?  The 133gb fc disk drive can
probably give you
around 120+ iops.  I'm afraid that a 3 spindle raid 5 probably is not
enough iops to
run tsm instance with the activity you might have.

Before my time on the storage team, Emc sold a disk subsystem to us for one
of our
TSM servers.  It was a Clariion with all big drives.  Like most disk
purchases, it was
configured on capacity, not i/o load.  There was no way to split the drives
between db/log and staging pools and have enough disk space for staging
pools, or,
enough spindles for db/log.  The only way to make it work was to spread
db/log/pools
across all the spindles.  NOT good . . . . actually, it worked quite well
for the first year.

Rick








From: PAC Brion Arnaud <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Subject: Advice needed for EMC CX500 setup
Date: 06/06/2005 01:13 PM






Hi list,

I need some guidance for setup of a brand new EMC array, equipped with
10 FC drives, 133GB each. TSM server (5.3.1) will be running on AIX 5.3.
My goal is to use those disks for following purposes :
- TSM DB 70 GB
- TSM log 13 GB
- TSM primary pools : all of the remaining space

Questions :

1) EMC recommends using RAID3 for disk to disk backup - should I create
2 raid groups : one raid5 for TSM DB and logs, one raid3 for diskpools ?

2) if using raid5 for DB and log, should I create 2 raid groups, each
one using different physical disks : one for the DB, the other for the
logs, or is the performance penalty negligible if both are on the same
disks ?
3) I plan using raw volumes for DB and LOGS (no TSM or AIX mirroring, I
trust EMC's raid): what is the best setup : 1 big TSM volume or several
smaller ?  If using several volumes should they be built using different
LUN's, or could I create several LV's on the same LUN.

So many possibilities, that I'm a bit lost ...  So if you have good
experience in this domain, your help will be welcome !
TIA.
Cheers


Arnaud


**
Panalpina Management Ltd., Basle, Switzerland, CIT Department
Viadukstrasse 42, P.O. Box 4002 Basel/CH
Phone:  +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
Direct: +41 (61) 226 19 78
e-mail: [EMAIL PROTECTED]

**






Re: Advice needed for EMC CX500 setup

2005-06-07 Thread PAC Brion Arnaud
Richard (Aaron too),

Many thanks for taking some of your time trying to help me !

Yes, you're absolutely right, my mail was lacking some important
information, but I did not really know where to begin ...
To answer some of your questions :

- number of nodes : approx 200, equally split in Windows and AIX
environments. Some of them also hosting Informix and DB2 databases. 
- backups/day : approx 400 GB AIX/Win data, and 500 to 750 GB
informix/db2. No archiving (except db2 logical logs), no hsm.
- db actual size : 30GB, 80% used; expected 50% increase within one year
(proportional to the expected increase of nodes). Number of objects
(from expiration process) : 610. Our actual performance for
expiration is approx 160 obj/hr, not much, but due to time
constraints space reclamation is running in parallel and slows it. I would
expect getting the same performance, or better !
- log size : 13 GB for security as I'm using rollforward mode
- expected iops : well, no idea at all. Some formula to calculate this ?

Do you need something else ?
Regards.

Arnaud 



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Rhodes
Sent: Tuesday, 07 June, 2005 14:46
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Advice needed for EMC CX500 setup

Hi Brion,

I see that no one else has responded . . . . . let me stir up the waters
and see what shakes loose . . .

You don't indicate the number of nodes, db objects (files), backup
rate (gb/day), etc, etc that you expect.  This is key.  You indicate a
db of 70 gb with a full sized log - this means to me that you expect
heavy db activity.  For the tsm db, or for any other db for that matter,
the question is how many iops do you need?  The 133gb fc disk drive can
probably give you around 120+ iops.  I'm afraid that a 3 spindle raid 5
probably is not enough iops to run a tsm instance with the activity you
might have.

Before my time on the storage team, Emc sold a disk subsystem to us for
one of our TSM servers.  It was a Clariion with all big drives.  Like
most disk purchases, it was configured on capacity, not i/o load.  There
was no way to split the drives between db/log and staging pools and have
enough disk space for staging pools, or, enough spindles for db/log.
The only way to make it work was to spread db/log/pools across all the
spindles.  NOT good . . . . actually, it worked quite well for the first
year.

Rick








From: PAC Brion Arnaud <[EMAIL PROTECTED]>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Subject: Advice needed for EMC CX500 setup
Date: 06/06/2005 01:13 PM






Hi list,

I need some guidance for setup of a brand new EMC array, equipped with
10 FC drives, 133GB each. TSM server (5.3.1) will be running on AIX 5.3.
My goal is to use those disks for following purposes :
- TSM DB 70 GB
- TSM log 13 GB
- TSM primary pools : all of the remaining space

Questions :

1) EMC recommends using RAID3 for disk to disk backup - should I create
2 raid groups : one raid5 for TSM DB and logs, one raid3 for diskpools ?

2) if using raid5 for DB and log, should I create 2 raid groups, each
one using different physical disks : one for the DB, the other for the
logs, or is the performance penalty negligible if both are on the same
disks ?
3) I plan using raw volumes for DB and LOGS (no TSM or AIX mirroring, I
trust EMC's raid): what is the best setup : 1 big TSM volume or several
smaller ?  If using several volumes should they be built using different
LUN's, or could I create several LV's on the same LUN.

So many possibilities, that I'm a bit lost ...  So if you have good
experience in this domain, your help will be welcome !
TIA.
Cheers


Arnaud







Advice needed for EMC CX500 setup

2005-06-06 Thread PAC Brion Arnaud
Hi list,

I need some guidance for setup of a brand new EMC array, equipped with
10 FC drives, 133GB each. TSM server (5.3.1) will be running on AIX 5.3.
My goal is to use those disks for following purposes :
- TSM DB 70 GB
- TSM log 13 GB
- TSM primary pools : all of the remaining space

Questions :

1) EMC recommends using RAID3 for disk to disk backup - should I create
2 raid groups : one raid5 for TSM DB and logs, one raid3 for diskpools ?

2) if using raid5 for DB and log, should I create 2 raid groups, each
one using different physical disks : one for the DB, the other for the
logs, or is the performance penalty negligible if both are on the same
disks ?
3) I plan using raw volumes for DB and LOGS (no TSM or AIX mirroring, I
trust EMC's raid): what is the best setup : 1 big TSM volume or several
smaller ?  If using several volumes should they be built using different
LUN's, or could I create several LV's on the same LUN.

So many possibilities, that I'm a bit lost ...  So if you have good
experience in this domain, your help will be welcome !
TIA.
Cheers
 

Arnaud 




Advice required for server\library upgrade

2005-05-12 Thread Copperfield Adams
Hi all,

 

I would really appreciate any help with the following:

 

We currently have a Windows 2000 server running TSM server 5.2.2.4 (also
using DRM) with a SCSI attached PV210 disk array for disk pools, and
SCSI attached 3583 library with 4x SCSI LTO1 drives. We are planning to
purchase new hardware for the server (dual processor, 2gb RAM, Windows
server 2003, 2x local 146gb drives RAID1 for mirroring local OS and TSM
db) and an ADIC i2000 library with 4x LTO2 tape drives (SCSI). We
currently have no plans to upgrade to TSM server 5.3 and the reason for
upgrade is primarily due to lack of storage capacity for onsite tape
pools. We will also be retiring the PV210 disk array and configuring the
disk pools to live on a huge SAN attached disk array (the library and
TSM server will not be SAN attached).

 

I have the following overall plan to migrate to the new setup and would
appreciate any comments on whether I am heading in the right direction.
Has anyone out there completed such an upgrade? Is there any
documentation/papers/weblinks with info? Is it advisable to run mixed
media or should we move all data from our LTO1 media to LTO2 bearing in
mind we currently have a reasonable amount of LTO1 media (appx. 150
units)? 

 

1.  Build new TSM server.
2.  Take a db backup of existing TSM server.
3.  Attach new library and drives to existing TSM server and define
new devclass for LTO2. Create new copy storage pool for LTO2.
4.  Checkin/label LTO2 volumes.
5.  Either: issue a backup command to backup existing tape pools to
new tape pools (i.e. 'backup existing_tape_pool new_tape_pool'). 
6.  Or: Change 'next storage pool' for disk pools to point to new
library and issue a 'move data' command on a volume-by-volume basis.
This would move data from tape volumes back to disk pool and then:
7.  Set migration threshold to 0/0 on disk pool to force migrations
to new tape storage pool.
8.  As existing tape volumes become empty set to READO or checkout.
9.  Backup db, halt server, copy db to new TSM server.
10. Attach new library to new server and start TSM server on new
box.
11. Redefine library, drives and paths.
12. Create new storage pools on SAN drive array and check mgmclass
destinations.
13. Cross fingers.

Any ideas what is missing?

 

Our primary onsite pools currently look like this:

 

Primary_Pool  GB   Files
-- - ---
ARCHIVE_TAPE0.407191
BACKUP_TAPE  1303.37 2846356
BACKUP_TAPE_COL  1170.64 2849721
DIR_FILE0.26  171262
DIR_POOL0.90  558979
EXCH_TAPE 715.56 315
 
Many thanks for any advice.

 
 



This e-mail and any files transmitted with it are confidential and 
intended solely for the individual or entity to whom they are addressed. 
Any views or opinions presented or expressed are those of the 
author(s) and may not necessarily represent those of the Company or of 
any WWAV Rapp Collins Group Company and no representation is 
given nor liability accepted for the accuracy or completeness of any 
information contained in this email unless expressly stated to the 
contrary.
 
If you are not the intended recipient or have received this e-mail in
error, you may not use, disseminate, forward, print or copy it, but
please notify the sender that you have received it in error.
 
Whilst we have taken reasonable precautions to ensure that any 
attachment to this e-mail has been swept for viruses, we cannot accept 
liability for any damage sustained as a result of software viruses and 
would advise that you carry out your own virus checks before opening 
any attachment. Please note that communications sent by or to any 
person through our computer systems may be viewed by other 
company personnel and agents.

Registered Office: 1 Riverside, Manbre Road, London W6 9WA


Advice needed - different backup's from same node ?

2004-06-29 Thread Brian Ipsen
Hi,

 I need some advice on how to handle backup from a specific node... On weekdays, the 
backup should run normally but ignore specific file types, e.g. .pst.
The .pst files should be backed up during the weekend instead... Is this possible without 
installing 2 instances of the scheduler, or do I have to register 2 nodes for this 
server, one for the weekday backups and another for handling the .pst files on 
Saturday/Sunday?

 From a licensing point of view, I would assume that registering 2 schedulers/nodes 
from the same host counts as 1 CPU (as far as I remember, IBM counts 
CPUs rather than hosts)...

**
Mvh/Rgds.
Brian Ipsen
PROGRESSIVE IT A/S

Århusgade 88, 3.sal Tel: +45 3525 5070
DK-2100 København Ø Fax: +45 3525 5090
Denmark Dir. +45 3525 5080
Email:[EMAIL PROTECTED]

***





Re: Need advice on dealing with unreadable tape

2003-02-07 Thread Matt Simpson
At 4:20 PM -0600 2/6/03, Julian Armendariz wrote:

You will have to specify which copypool to get the files from.

restore v 000345 copy=name_of_copypool p=y


According to the manual, copypool is optional, and if not specified, files
are restored from any copy pool in which copies can be located.
But, since the manual is occasionally wrong, I tried specifying
copypool. It didn't help.

Someone else suggested maybe the access on the original tape needed
to be changed to destroyed.  I tried that. It didn't help.

At 5:35 PM -0500 2/6/03, Prather, Wanda wrote:

 could be there was a problem reading the 2 bad files
at the time it was trying to create the copy pool copy.


That's what I was thinking.


if you purge the DB entries for the bad files, TSM will
back them up on the next go around, assuming they still exist on the client.


And if they don't still exist on the client, I'll just hope that the
user doesn't change his mind and decide he needs them back.  I'll do
the audit and let it delete the entries.  Thanks for the advice.


--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:[EMAIL PROTECTED]
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.



Need advice on dealing with unreadable tape

2003-02-06 Thread Matt Simpson
I have some tapes that are getting read errors, and I'm trying to
find a graceful way to get out of the mess.

One example is a tape that has 2 files on it, according to Q CONTENT.
MOVE DATA for that tape fails because the tape can't be read.

We have (or think we have) offsite copies of our backup tapes. So I
thought I might be able to recover the files from an offsite copy.
To find which offsite tape(s) I would need, I tried

restore v 000345 p=y

I got messages

ANR0984I Process 622 for RESTORE VOLUME (PREVIEW) started in the
BACKGROUND at 09:28:00.
ANR1233I Restore preview of volumes in primary storage pool
BACKUPONSITE started as process 622.
ANR2110I RESTORE VOLUME started as process 622.
ANR1235I Restore process 622 ended for volumes in storage pool BACKUPONSITE.
ANR0985I Process 622 for RESTORE VOLUME (PREVIEW) running in the
BACKGROUND completed with
completion state SUCCESS at 09:28:00.
ANR1241I Restore preview of volumes in primary storage pool
BACKUPONSITE has ended.  Files
Restored: 0, Bytes Restored: 0.
ANR1256W Volume 000345 contains files that could not be restored.


I assume that means the files didn't get copied to the backup pool
before the tape got flaky.

At this point, I guess I have to assume those backups are toast.  If
they are inactive versions, I can shrug and say I hope they never
want to restore the old versions.  But, as far as I can tell, there
is no way to tell whether a backup on a specific tape is active or
inactive.  If that's true, I need to assume they might be active and
get new backups of them.

If I just delete the volume, with discarddata=yes, and the backups
are active versions, will that force TSM to realize it no longer has
an active backup of those files, and back them up again the next time
the node is backed up?
--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:[EMAIL PROTECTED]
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.



Re: Need advice on dealing with unreadable tape

2003-02-06 Thread Prather, Wanda
Well, if RESTORE VOLUME says there is nothing to restore, and q content says
there is, one of them is lying!

Try AUDIT VOLUME 000345
If it says the two files can't be read, then run

AUDIT VOLUME 000345 fix=yes
That should purge the bad DB entries and free up the tape.



-Original Message-
From: Matt Simpson [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 06, 2003 4:55 PM
To: [EMAIL PROTECTED]
Subject: Need advice on dealing with unreadable tape


I have some tapes that are getting read errors, and I'm trying to
find a graceful way to get out of the mess.

One example is a tape that has 2 files on it, according to Q CONTENT.
MOVE DATA for that tape fails because the tape can't be read.

We have (or think we have) offsite copies of our backup tapes. So I
thought I might be able to recover the files from an offsite copy.
To find which offsite tape(s) I would need, I tried

restore v 000345 p=y

I got messages

ANR0984I Process 622 for RESTORE VOLUME (PREVIEW) started in the
BACKGROUND at 09:28:00.
ANR1233I Restore preview of volumes in primary storage pool
BACKUPONSITE started as process 622.
ANR2110I RESTORE VOLUME started as process 622.
ANR1235I Restore process 622 ended for volumes in storage pool BACKUPONSITE.
ANR0985I Process 622 for RESTORE VOLUME (PREVIEW) running in the
BACKGROUND completed with
completion state SUCCESS at 09:28:00.
ANR1241I Restore preview of volumes in primary storage pool
BACKUPONSITE has ended.  Files
Restored: 0, Bytes Restored: 0.
ANR1256W Volume 000345 contains files that could not be restored.


I assume that means the files didn't get copied to the backup pool
before the tape got flaky.

At this point, I guess I have to assume those backups are toast.  If
they are inactive versions, I can shrug and say I hope they never
want to restore the old versions.  But, as far as I can tell, there
is no way to tell whether a backup on a specific tape is active or
inactive.  If that's true, I need to assume they might be active and
get new backups of them.

If I just delete the volume, with discarddata=yes, and the backups
are active versions, will that force TSM to realize it no longer has
an active backup of those files, and back them up again the next time
the node is backed up?
--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:[EMAIL PROTECTED]
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.



Re: Need advice on dealing with unreadable tape

2003-02-06 Thread Matt Simpson
At 5:11 PM -0500 2/6/03, Prather, Wanda wrote:

Well, if RESTORE VOLUME says there is nothing to restore, and q content says
there is, one of them is lying!


I didn't interpret the messages as meaning there was nothing TO
restore .. I thought it meant there was nothing it COULD restore. It
did say
ANR1256W Volume 000345 contains files that could not be restored.
so it apparently knew there was stuff there it couldn't restore.


Try AUDIT VOLUME 000345
If it says the two files can't be read, then run

AUDIT VOLUME 000345 fix=yes
That should purge the bad DB entries and free up the tape.


OK, thanks.  Am I correct in assuming that purging the DB entries
will force new backups if those are the active versions?



--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:[EMAIL PROTECTED]
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.



Re: Need advice on dealing with unreadable tape

2003-02-06 Thread Julian Armendariz
You will have to specify which copypool to get the files from.

restore v 000345 copy=name_of_copypool p=y



Julian Armendariz
System Analyst - UNIX
H.B. Fuller
(651) 236-4043



 [EMAIL PROTECTED] 02/06/03 03:54PM 
I have some tapes that are getting read errors, and I'm trying to
find a graceful way to get out of the mess.

One example is a tape that has 2 files on it, according to Q CONTENT.
MOVE DATA for that tape fails because the tape can't be read.

We have (or think we have) offsite copies of our backup tapes. So I
thought I might be able to recover the files from an offsite copy.
To find which offsite tape(s) I would need, I tried

restore v 000345 p=y

I got messages

ANR0984I Process 622 for RESTORE VOLUME (PREVIEW) started in the
BACKGROUND at 09:28:00.
ANR1233I Restore preview of volumes in primary storage pool
BACKUPONSITE started as process 622.
ANR2110I RESTORE VOLUME started as process 622.
ANR1235I Restore process 622 ended for volumes in storage pool BACKUPONSITE.
ANR0985I Process 622 for RESTORE VOLUME (PREVIEW) running in the
BACKGROUND completed with
completion state SUCCESS at 09:28:00.
ANR1241I Restore preview of volumes in primary storage pool
BACKUPONSITE has ended.  Files
Restored: 0, Bytes Restored: 0.
ANR1256W Volume 000345 contains files that could not be restored.


I assume that means the files didn't get copied to the backup pool
before the tape got flaky.

At this point, I guess I have to assume those backups are toast.  If
they are inactive versions, I can shrug and say I hope they never
want to restore the old versions.  But, as far as I can tell, there
is no way to tell whether a backup on a specific tape is active or
inactive.  If that's true, I need to assume they might be active and
get new backups of them.

If I just delete the volume, with discarddata=yes, and the backups
are active versions, will that force TSM to realize it no longer has
an active backup of those files, and back them up again the next time
the node is backed up?
--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:[EMAIL PROTECTED]
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.



Re: Need advice on dealing with unreadable tape

2003-02-06 Thread Prather, Wanda
Ah.  Yes, if it gave you the ANR1256W message, it knows that there are files
on the primary tape that are not on a copy pool tape, and therefore can't be
restored.

That's not surprising - could be there was a problem reading the 2 bad files
at the time it was trying to create the copy pool copy.

And you are correct, if you purge the DB entries for the bad files, TSM will
back them up on the next go around, assuming they still exist on the client.


-Original Message-
From: Matt Simpson [mailto:[EMAIL PROTECTED]]
Sent: Thursday, February 06, 2003 5:20 PM
To: [EMAIL PROTECTED]
Subject: Re: Need advice on dealing with unreadable tape


At 5:11 PM -0500 2/6/03, Prather, Wanda wrote:
Well, if RESTORE VOLUME says there is nothing to restore, and q content
says
there is, one of them is lying!

I didn't interpret the messages as meaning there was nothing TO
restore .. I thought it meant there was nothing it COULD restore. It
did say
ANR1256W Volume 000345 contains files that could not be restored.
so it apparently knew there was stuff there it couldn't restore.

Try AUDIT VOLUME 000345
If it says the two files can't be read, then run

AUDIT VOLUME 000345 fix=yes
That should purge the bad DB entries and free up the tape.

OK, thanks.  Am I correct in assuming that purging the DB entries
will force new backups if those are the active versions?



--


Matt Simpson --  OS/390 Support
219 McVey Hall  -- (859) 257-2900 x300
University Of Kentucky, Lexington, KY 40506
mailto:[EMAIL PROTECTED]
mainframe --   An obsolete device still used by thousands of obsolete
companies serving billions of obsolete customers and making huge obsolete
profits for their obsolete shareholders.  And this year's run twice as fast
as last year's.



Seeking 3.7.5 - 5.1 upgrade advice..

2002-11-12 Thread John N. Stacey Jr
Hello,

   I would really appreciate some advice on how to proceed with an
upgrade we are planning.



Currently, we are running TSM version 3.7.5 on an Ultra 10/Solaris 7



Our plans are to upgrade to 5.1.x on a Sun Ultra Enterprise 2/Solaris 8.



I am currently trying to figure out if it would be easier to upgrade the
current 3.7.5 server to 5.1 which will automatically convert everything
to 5.1 and then move the newly converted database and other needed files
to the new server which would have 5.1 already installed and running on
it.



Or, would it be easier to restore the 3.7.5 database from tape to the
newly configured 5.1 server and run an UPGRADEDB command to bring the
database up to the current level?



If the second path is the best, does the database have to have the same
physical path on the disk as the first server?



And are the registered nodes, users, volumes and automated commands
stored in a file which can be easily transferred to the new machine?



Thank you for any help you can give.



-john

_

John N. Stacey Jr. * Information Technology

Euro RSCG MVBMS Partners

* 350 Hudson St. * New York, NY 10014

Ph. 212.886.4369 * Email: [EMAIL PROTECTED]



Re: Seeking 3.7.5 - 5.1 upgrade advice..

2002-11-12 Thread Prather, Wanda
Restoring an old data base (3.7.5) to a new level of the server (5.1) SHOULD
work but it isn't guaranteed.
That is, Tivoli doesn't test that process, and if it doesn't work, they
probably won't fix it.  And you will be going up SEVERAL levels of code.  So
unless you can find someone else who has done it going from 3.7 to 5.1, on
Solaris, I recommend you stick with plan A, and upgrade in place.  If you
have problems with that, Tivoli support will be able to help you.

Then you can move the DB to your new server.


-Original Message-
From: John N. Stacey Jr [mailto:john.stacey;EURORSCG.COM]
Sent: Tuesday, November 12, 2002 11:27 AM
To: [EMAIL PROTECTED]
Subject: Seeking 3.7.5 - 5.1 upgrade advice..


Hello,

   I would really appreciate some advice on how to proceed with an
upgrade we are planning.



Currently, we are running TSM version 3.7.5 on an Ultra 10/Solaris 7



Our plans are to upgrade to 5.1.x on a Sun Ultra Enterprise 2/Solaris 8.



I am currently trying to figure out if it would be easier to upgrade the
current 3.7.5 server to 5.1 which will automatically convert everything
to 5.1 and then move the newly converted database and other needed files
to the new server which would have 5.1 already installed and running on
it.



Or, would it be easier to restore the 3.7.5 database from tape to the
newly configured 5.1 server and run an UPGRADEDB command to bring the
database up to the current level?



If the second path is the best, does the database have to have the same
physical path on the disk as the first server?



And are the registered nodes, users, volumes and automated commands
stored in a file which can be easily transferred to the new machine?



Thank you for any help you can give.



-john

_

John N. Stacey Jr. * Information Technology

Euro RSCG MVBMS Partners

* 350 Hudson St. * New York, NY 10014

Ph. 212.886.4369 * Email: [EMAIL PROTECTED]



Re: Seeking 3.7.5 - 5.1 upgrade advice..

2002-11-12 Thread Thomas Denier
I would really appreciate some advice on how to proceed with an
 upgrade we are planning.

 Currently, we are running TSM version 3.7.5 on an Ultra 10/Solaris 7

 Our plans are to upgrade to 5.1.x on a Sun Ultra Enterprise 2/Solaris 8.

 I am currently trying to figure out if it would be easier to upgrade the
 current 3.7.5 server to 5.1 which will automatically convert everything
 to 5.1 and then move the newly converted database and other needed files
 to the new server which would have 5.1 already installed and running on
 it.

 Or, would it be easier to restore the 3.7.5 database from tape to the
 newly configured 5.1 server and run an UPGRADEDB command to bring the
 database up to the current level?

 If the second path is the best, does the database have to have the same
 physical path on the disk as the first server?

No, as long as the aggregate size of the installed database volumes on
the new system is sufficient.

 And are the registered nodes, users, volumes and automated commands
 stored in a file which can be easily transferred to the new machine?

Registered nodes and administrative users are stored in the TSM database.
Information on volumes is also stored in the database. However, a TSM
server can and should be configured to save some of the volume information
to one or more flat files whenever the information is updated. Copying
the flat file involved to the new system will make the database restore
much easier. The path to the flat file is given in the server options
file. A similar situation prevails for device definitions. However, you
will probably not be able to simply copy the flat file of device
definitions to the new system; you will probably need to edit the file
to reflect the configuration differences between the two systems.
Automated commands may or may not be in the database, depending on
the exact mechanism used. Scripts and administrative schedules are in
the database, but macros are not. After the database restore is done
you may need to do some clean-up. If the disk storage pool volumes are
different you will need to delete the old ones and define the new ones.
If tape libraries and drives are different you will need to update
the library and drive definitions.

You should probably look at the discussion of server disaster recovery
in the Administrator's Guide. Server disaster recovery is essentially
an urgent migration to new hardware.
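As a concrete sketch, the server options that keep those flat files current
would look something like this in dsmserv.opt (the paths are hypothetical):

   VOLUMEHISTORY /opt/tivoli/tsm/server/volhist.out
   DEVCONFIG     /opt/tivoli/tsm/server/devcnfg.out

You can also write them on demand with 'backup volhistory' and 'backup
devconfig' before taking the final database backup.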



Re: Seeking 3.7.5 -> 5.1 upgrade advice..

2002-11-12 Thread Sias Dealy
TSM'er

I agree with Wanda. Stick with plan A.

When migrating from TSM 3.7.X to 5.1.X, the database should be
upgraded during the migration. If you try to start the dsmserv
process and you receive an error message complaining about the
database, then issue the dsmserv upgradedb command.
You can be proactive: issue dsmserv upgradedb first and then
start the dsmserv process. I have not experienced any problems
issuing dsmserv upgradedb when the database was already
upgraded. Unless I am one of the semi-lucky ones.

When I migrated from 3.7 to 4.2, on one server the database got
upgraded, but for some strange reason on another server I had to
issue the dsmserv upgradedb command after there was a
message complaining about the database.

A good hint before you start the upgrade:
make sure that you have a current backup of the TSM server,
and back up the volhist and the devconfig. I would also back up the
dsmserv.opt file; if you ever have to start from scratch, you
will not have to try to remember what was set in the
dsmserv.opt file.
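Those pre-upgrade safety copies can be taken with something like the
following (the device class and file names are hypothetical):

   backup db devclass=TAPECLASS type=full
   backup volhistory filenames=/tsm/volhist.out
   backup devconfig filenames=/tsm/devcnfg.out

plus a plain file copy of dsmserv.opt alongside them.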

Speaking of Tivoli support in the US: has anyone called
Tivoli support since June 2002? There was an e-mail or a notice
that, due to the call volume they are experiencing, the
techs are calling the customer back. Basically, you've got some
waiting time.


Sias





 On, Prather, Wanda ([EMAIL PROTECTED]) wrote:

 Restoring an old data base (3.7.5) to a new level of the server (5.1)
 SHOULD work but it isn't guaranteed.
 That is, Tivoli doesn't test that process, and if it doesn't work, they
 probably won't fix it.  And you will be going up SEVERAL levels of code.
 So unless you can find someone else who has done it going from 3.7 to
 5.1, on Solaris, I recommend you stick with plan A, and upgrade in place.
 If you have problems with that, Tivoli support will be able to help you.

 Then you can move the DB to your new server.





Running an auditdb tomorrow, any advice

2002-09-06 Thread Farren Minns

Hi TSMers

I know I have asked simple questions before, but the only silly question is
the one not asked.

I am running TSM3738 on a Solaris 2.7 box (E250 400mhz 1GB mem). Can you
tell me if the way to run an audit is simply as follows:

1) Halt the server
2) Issue the command dsmserv auditdb fix=yes

If the above syntax is correct, how can I see the progress of the job if
TSM is not running? Does anyone have much experience with this? Would it be
best to pipe the output to a file? If so, is the file likely to become
huge!?
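One common pattern for capturing the offline audit's output is shown below
(a sketch only; the install path and log location are assumptions, not your
actual setup):

   cd /opt/tivoli/tsm/server/bin
   nohup ./dsmserv auditdb fix=yes > /var/tmp/auditdb.log 2>&1 &
   tail -f /var/tmp/auditdb.log

The log can indeed grow large on a big database, so put it on a filesystem
with room to spare.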

My db is 7500 allocated and about 70% utilised. I have a 1000Mb log file
with logging set to 'normal' mode.

Any questions? Please ask. Many thanks in advance for your help.

Farren Minns - Trainee TSM and Solaris System Admin - John Wiley & Sons Ltd

**

Our Chichester based offices are amalgamating and relocating to a new
address
from 1st September 2002

John Wiley & Sons Ltd
The Atrium
Southern Gate
Chichester
West Sussex
PO19 8SQ

Main phone and fax numbers remain the same:
Phone +44 (0)1243 779777
Fax   +44 (0)1243 775878
Direct dial numbers are unchanged

Address, phone and fax nos. for all other Wiley UK locations are unchanged
**



Re: Running an auditdb tomorrow, any advice

2002-09-06 Thread Seay, Paul

I just heard from development: put on the latest available server patch.
They now know an unload and audit can get into a deadly embrace (deadlock) because of
W2K system objects.  4.2.2.10 and 5.1.1.4 have the fixes for the problems.

Paul D. Seay, Jr.
Technical Specialist
Naptheon Inc.
757-688-8180


-Original Message-
From: Farren Minns [mailto:[EMAIL PROTECTED]]
Sent: Friday, September 06, 2002 3:09 AM
To: [EMAIL PROTECTED]
Subject: Running an auditdb tomorrow, any advice


Hi TSMers

I know I have asked simple questions before, but the only silly question is
the one not asked.

I am running TSM3738 on a Solaris 2.7 box (E250 400mhz 1GB mem). Can you
tell me if the way to run an audit is simply as follows:

1) Halt the server
2) Issue the command dsmserv auditdb fix=yes

If the above syntax is correct, how can I see the progress of the job if TSM
is not running? Does anyone have much experience with this? Would it be best
to pipe the output to a file? If so, is the file likely to become huge!?

My db is 7500 allocated and about 70% utilised. I have a 1000Mb log file
with logging set to 'normal' mode.

Any questions? Please ask. Many thanks in advance for your help.

Farren Minns - Trainee TSM and Solaris System Admin - John Wiley & Sons Ltd





Re: Your advice wanted!

2002-07-05 Thread Zlatko Krastev

In this case AutoVault *IS NOT* an alternative - usage of server-to-server
virtual volumes requires a DRM license.

Zlatko Krastev
IT Consultant
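For reference, the server-to-server virtual-volume setup being discussed is
configured on the source server roughly as follows (the server name, password
and addresses are hypothetical):

   define server CENTRAL serverpassword=secret hladdress=central.example.com lladdress=1500
   define devclass REMOTECLASS devtype=server servername=CENTRAL
   define stgpool OFFSITECOPY REMOTECLASS pooltype=copy maxscratch=50

As noted above, using virtual volumes this way requires the DRM license.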




Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject:Re: Your advice wanted!

Check out AutoVault from http://www.coderelief.com as a good alternative to
DRM.

Bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Steve Harris
Sent: Thursday, June 27, 2002 7:05 PM
To: [EMAIL PROTECTED]
Subject: Re: Your advice wanted!


There is a large retail operation here in Australia that backs up all its
store servers without tape at each site.

There is a TSM server at each site with local disk storage, copypools are
defined across the network to a central site
(probably one in each city).  We briefly looked at something like that for
some of our own sites, but the smaller sites are mainly netware only and
they didn't want the hassle of an NT box just to do backups.

Of course this requires DRM on each server, but if you buy enough licences
I'm sure you'll get some sort of discount.

Steve Harris
AIX and TSM Admin,
Queensland Health, Brisbane Australia.

 [EMAIL PROTECTED] 28/06/2002 6:52:56 
Keep in mind that, due to the lack of random-access copy pools, you also have
to plan reclamation there (if copy pools are used at all). I would prefer to
mirror the primary diskpool volumes (and of course the DB & log too), thus
getting random-access copies while still being protected against HDD failure.
Do not forget to schedule backups to the FILE device class the copy pools are
using. Backing up the volhistory & devconfig would help, but even without them
the files from DB backups have the .DBB extension and are easily recognizable.
And they may go off-site using Alex's idea for disk exchange (there is no
hot-swapping for IDE, but we know the cages allow quick cold-swapping).
Alex, you will not be able to send the Shark's disks off-site because the ESS
LIC will complain - but try to send the whole Shark off-site :)

Zlatko Krastev
IT Consultant




Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject:Re: Your advice wanted!

Hi.

I don't see much return on making those primary disk storagepools
sequential, because once they get tape hardware, you can just move data or
migrate the backup data off of the random diskpool volumes.  In fact, it's
more of a headache because you'll have to start reclaiming them and whatnot.
Definitely stay with random access disk volumes in your primary diskpools.

For copypool, I wonder.  Since your installation is so small, I wonder if
you can get some hotswap drive bays (do those exist for IDE?), buy 2 more
IDE hard drives (they're fairly cheap, aren't they?), and start an offsite
rotation of your copypool disks.  That would be cute.  And much cheaper than
investing in a new tape infrastructure to begin with.  Hmm... I wonder if I
can do that with Shark disk.  But your copypool would have to be sequential,
so that would complicate matters.

If you can figure out how to do sequential volumes and reclamation and
whatnot on disk, I would use those two disks as copypool, with or without
the extra 2 disks for offsite.  Then if you have an application based
corruption of your primary diskpool volumes, your copypool has a good chance
of surviving that because it's more of an asynchronous mirror process.
Synchronous mirroring would be more vulnerable to application based
corruptions.

Have you given any thought to how you're going to manage your dbbackups?
It's a good thing to have them on some other machine or media.  You could
back up your database to disk and ftp it to another machine, or mount
remote
disk and back up to it, or half a dozen other variations.

Good luck.

Alex Paschal
Storage Administrator
Freightliner, LLC
(503) 745-6850 phone/vmail

-Original Message-
From: Maria Waern [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, June 26, 2002 8:14 AM
To: [EMAIL PROTECTED]
Subject: Your advice wanted!


As a TSM newbie I'd be grateful for some hints and tips - I'm not after
instructions because I have those!

A customer has a small office, 15 or 20 people with laptops (20 Gb HDDs
mostly) and a new Windows 2000/TSM server that contains 4x 60Gb IDE
disks.  They have no tape robot.

What is the best way to set up storage pools on the disks?  Use
sequential pools instead of standard disk storage pools to provide for
easier future storage pool backup should they acquire some tape robot
(although this seems highly unlikely at the present time)?

Also how big should each storage volume on the disks be?  Presumably
it's not a good idea to make one large (approx 60 Gb) storage volume on
each disk?  It may not even be possible to do this for all I know!

Also, what about having two of the disks set aside for copy storage
pools?  They only have one machine dedicated for TSM just now and no
off-site backup.  I said it was a small office!  Anyway, I was thinking
to have a primary pool on two of the disks and dedicate the other two
for a copy storage pool.

Like I said I don't need instructions, just your ideas!

Maria

Re: Your advice wanted!

2002-06-28 Thread Zlatko Krastev

This is nice working solution but if their budget is so tight to have IDE
instead of SCSI disks then they probably will have no DRM.

Zlatko Krastev
IT Consultant




Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject:Re: Your advice wanted!

There is a large retail operation here in Australia that backs up all its
store servers without tape at each site.

There is a TSM server at each site with local disk storage, copypools are
defined across the network to a central site
(probably one in each city).  We briefly looked at something like that for
some of our own sites, but the smaller sites are mainly netware only and
they didn't want the hassle of an NT box just to do backups.

Of course this requires DRM on each server, but if you buy enough licences
I'm sure you'll get some sort of discount.

Steve Harris
AIX and TSM Admin,
Queensland Health, Brisbane Australia.

 [EMAIL PROTECTED] 28/06/2002 6:52:56 
Keep in mind that, due to the lack of random-access copy pools, you also have
to plan reclamation there (if copy pools are used at all). I would prefer to
mirror the primary diskpool volumes (and of course the DB & log too), thus
getting random-access copies while still being protected against HDD failure.
Do not forget to schedule backups to the FILE device class the copy pools are
using. Backing up the volhistory & devconfig would help, but even without them
the files from DB backups have the .DBB extension and are easily recognizable.
And they may go off-site using Alex's idea for disk exchange (there is no
hot-swapping for IDE, but we know the cages allow quick cold-swapping).
Alex, you will not be able to send the Shark's disks off-site because the ESS
LIC will complain - but try to send the whole Shark off-site :)

Zlatko Krastev
IT Consultant




Please respond to ADSM: Dist Stor Manager [EMAIL PROTECTED]
Sent by:ADSM: Dist Stor Manager [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
cc:

Subject:Re: Your advice wanted!

Hi.

I don't see much return on making those primary disk storagepools
sequential, because once they get tape hardware, you can just move data or
migrate the backup data off of the random diskpool volumes.  In fact, it's
more of a headache because you'll have to start reclaiming them and whatnot.
Definitely stay with random access disk volumes in your primary diskpools.

For copypool, I wonder.  Since your installation is so small, I wonder if
you can get some hotswap drive bays (do those exist for IDE?), buy 2 more
IDE hard drives (they're fairly cheap, aren't they?), and start an offsite
rotation of your copypool disks.  That would be cute.  And much cheaper than
investing in a new tape infrastructure to begin with.  Hmm... I wonder if I
can do that with Shark disk.  But your copypool would have to be sequential,
so that would complicate matters.

If you can figure out how to do sequential volumes and reclamation and
whatnot on disk, I would use those two disks as copypool, with or without
the extra 2 disks for offsite.  Then if you have an application based
corruption of your primary diskpool volumes, your copypool has a good chance
of surviving that because it's more of an asynchronous mirror process.
Synchronous mirroring would be more vulnerable to application based
corruptions.

Have you given any thought to how you're going to manage your dbbackups?
It's a good thing to have them on some other machine or media.  You could
back up your database to disk and ftp it to another machine, or mount
remote
disk and back up to it, or half a dozen other variations.

Good luck.

Alex Paschal
Storage Administrator
Freightliner, LLC
(503) 745-6850 phone/vmail

-Original Message-
From: Maria Waern [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, June 26, 2002 8:14 AM
To: [EMAIL PROTECTED]
Subject: Your advice wanted!


As a TSM newbie I'd be grateful for some hints and tips - I'm not after
instructions because I have those!

A customer has a small office, 15 or 20 people with laptops (20 Gb HDDs
mostly) and a new Windows 2000/TSM server that contains 4x 60Gb IDE
disks.  They have no tape robot.

What is the best way to set up storage pools on the disks?  Use
sequential pools instead of standard disk storage pools to provide for
easier future storage pool backup should they acquire some tape robot
(although this seems highly unlikely at the present time)?

Also how big should each storage volume on the disks be?  Presumably
it's not a good idea to make one large (approx 60 Gb) storage volume on
each disk?  It may not even be possible to do this for all I know!

Also, what about having two of the disks set aside for copy storage
pools?  They only have one machine dedicated for TSM just now and no
off-site backup.  I said it was a small office!  Anyway, I was thinking
to have a primary pool on two of the disks and dedicate the other two
for a copy storage pool.

Like I said I don't need instructions, just your ideas!

Maria
