AW: Storage Pools

2001-05-20 Thread sal Salak Juraj

Hi Tom,

you leave us in the dark about what your offsite pool exactly is.
If it is a backup storage pool of your "onsite" pool,
then yes, deleting the onsite files will delete the offsite files as well.

It seems that what you want to accomplish is simply to turn
collocation on, for new and for existing data.

There are simple ways to achieve this, such as the following:
- update the storage pool concerned to be collocated
- make sure you have a reasonable number of free tape volumes in it
- tape by tape, move the data from the existing volumes back either into
the same storage pool, or into the originating primary disk storage pool.
You will not lose a single file, and you will end up with collocated tapes.
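
The steps above could be sketched in TSM administrative commands (a sketch
only; pool and volume names are placeholders):

    update stgpool tapepool collocate=yes
    query volume stgpool=tapepool       /* list the existing volumes  */
    move data 000123                    /* back into the same pool... */
    move data 000124 stgpool=diskpool   /* ...or into the disk pool   */

Once collocation is on, data moved back in is regrouped by node onto the
new scratch volumes.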

regards

Juraj Salak

-----Original Message-----
From: Berning, Tom [mailto:[EMAIL PROTECTED]]
Sent: Friday, 18 May 2001 20:41
To: [EMAIL PROTECTED]
Subject: Storage Pools

I am currently running ADSM 3.1.2.50 and will be converting to TSM 4.1.

The question I have is that my boss would like to keep all of the data that
is currently in our offsite copy pool and in our onsite archive pool.
What he wants is to destroy the onsite migration pool, then start new
backups of all of the systems and migrate them to the onsite migration pool
using collocation.

My question is: if we delete all of the migration pool tapes and discard
the data, will that also remove the data from the offsite copy pool tapes,
or will they remain intact in case something needs to be restored from that
pool?

We want to do this right after we convert from ADSM to TSM.

Thanks in advance for any responses.

Thomas R. Berning
UNIX Systems Administrator
8485 Broadwell Road
Cincinnati, OH 45244
Phone:   513-388-2857
Fax:   513-388-
Email:[EMAIL PROTECTED]



AW: Copy storage pools

2001-05-04 Thread sal Salak Juraj

Hi,

I find it easy to automate everything with the exception of
>> moving adequate data back to diskpool,

This is because you can only move all the data from a volume to the disk
storage pool, which is notoriously smaller.
So you will want to watch the move process and cancel it as soon as
the disk storage pool becomes almost full.
OK, OK, this could be automated as well, but...
all that scripting is not worth the work.
A short and simple paper-written procedure
for the operator might be convenient enough.

But understanding the background of your problem,
I suggest another solution:
let your customer purchase another tape drive.
No, not an LTO drive, a cheaper one instead,
maybe a DLT1, which costs only some 60,000 CZK,
or even a cheap DDS3 drive (but remember: you get what you pay for).

Define only your backup storage pool on this drive and use it for nothing
else, except perhaps full database backups.
This way almost no tape changes will be necessary on this manual drive,
no obscure workarounds will be necessary,
and the customer will not run the risk of a single drive failure.
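
The setup Juraj suggests could be sketched as (a sketch only; library,
device class and pool names are placeholders, and device class parameters
depend on the actual drive):

    define library manuallib libtype=manual
    define devclass dlt1class devtype=dlt library=manuallib
    define stgpool copypool dlt1class pooltype=copy maxscratch=50
    backup stgpool diskpool copypool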

If it helps you to sell it,
I could even let you know the number of my bank account :)

regards
Juraj Salak


-----Original Message-----
From: Hrouda Tomáš [mailto:[EMAIL PROTECTED]]
Sent: Friday, 4 May 2001 11:08
To: [EMAIL PROTECTED]
Subject: Re: Copy storage pools

Hi Juraj,

it sounds perfect to me. My last question is whether it can be automated - I
mean finding single-instance files, moving the adequate data back to the
disk pool, backing them up and migrating them back to tapes? It is important
for me to offer a complete solution to my customer, although he does not
have enough money (or does not want to spend it yet) to buy a second drive.

Thanx
Tom

> -----Original Message-----
> From:   sal Salak Juraj [SMTP:[EMAIL PROTECTED]]
> Sent:   4 May 2001 10:59
> To: [EMAIL PROTECTED]
> Subject:  AW: Copy storage pools
> 
> Hi,
> 
> Your idea should work perfectly.
> 
> There is a maybe even simpler workaround:
> update stg your_primary_tape_pool access=readonly
> 
> 
> But using either of these, there is a danger that your disk pool becomes
> full, thus preventing any backups from running successfully.
> I find the risk of occasionally having data migrated before the storage
> pool backup to be smaller - in that case you have only
> one copy of the backup files, but without a backup you have no
> copy at all.
> You can still periodically run
> backup stgpool your_primary_tape_pool your_backup_tape_pool
> preview=yes
> to find out whether there are files with only one instance and, if so,
> what tapes they are on.
> In that case you can still move the data from those tape volumes back to
> the disk pool, back it up and migrate it again. It costs much time but is
> quite secure.
> 
> regards
> Juraj
> 
> 
> -----Original Message-----
> From: Hrouda Tomáš [mailto:[EMAIL PROTECTED]]
> Sent: Friday, 4 May 2001 10:29
> To: [EMAIL PROTECTED]
> Subject: Re: Copy storage pools
> 
> Thanks to you Juraj, 
> 
> it is a good idea for me and a little rescue for the solution prepared for
> our customer (he wants "increased" security of data by means of copy
> backups, and the prepared solution has only one tape drive - an LTO
> Autoloader, to be exact - and we swore to solve it). A good point is that
> the disk pool can be big enough (about 50 GB for roughly 130 GB of total
> backed-up data).
> 
> But one more question: is it possible to ensure that data moved from the
> clients to the primary disk storage pool will NEVER be migrated to the
> tape pool before being backed up to the copy tape pool? The only way that
> occurs to me is changing (by a scheduled script) the NEXT_STG_POOL
> parameter of the disk pool, that is, disabling it before the copy backup
> and enabling it after the copy backup, and then migrating the data. I
> think that emptying the NEXT_STG_POOL parameter ensures that the data
> cannot be migrated there. Please tell me if this is a good idea, or if
> some better solution exists.
> 
> Thanx
> Tom
> 
> > -----Original Message-----
> > From:   sal Salak Juraj [SMTP:[EMAIL PROTECTED]]
> > Sent:   4 May 2001 8:55
> > To: [EMAIL PROTECTED]
> > Subject:  AW: Copy storage pools
> > 
> > Hallo Tomáš,
> > 
> > you could use the standard storage pool hierarchy,
> > backing up first to a disk storage pool,
> > which later migrates to the primary tape pool
> > (you probably already do this).
> > 
> > If your disk storage pool is large enough not to be migrated for, say,
> > a couple of days,
> > then you can schedule
> > backup stgpool your_disk_storage_pool your_tape_backup_pool
> > often. This will create the redundancy you wish.
> > 
> > When the data from disk storage pool are later 
> > migrated to you 

AW: Copy storage pools

2001-05-04 Thread sal Salak Juraj

Hi,

Your idea should work perfectly.

There is a maybe even simpler workaround:
update stg your_primary_tape_pool access=readonly


But using either of these, there is a danger that your disk pool becomes
full, thus preventing any backups from running successfully.
I find the risk of occasionally having data migrated before the storage
pool backup to be smaller - in that case you have only
one copy of the backup files, but without a backup you have no
copy at all.
You can still periodically run
backup stgpool your_primary_tape_pool your_backup_tape_pool preview=yes
to find out whether there are files with only one instance and, if so, what
tapes they are on.
In that case you can still move the data from those tape volumes back to
the disk pool, back it up and migrate it again. It costs much time but is
quite secure.
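
The periodic check described above, written out as commands (a sketch;
pool and volume names are placeholders):

    backup stgpool tapepool copypool preview=yes  /* reports volumes holding
                                                     files with only one copy */
    move data 000123 stgpool=diskpool             /* for each such volume */
    backup stgpool diskpool copypool
    /* then let normal migration send the data back to tape */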

regards
Juraj


-----Original Message-----
From: Hrouda Tomáš [mailto:[EMAIL PROTECTED]]
Sent: Friday, 4 May 2001 10:29
To: [EMAIL PROTECTED]
Subject: Re: Copy storage pools

Thanks to you Juraj, 

it is a good idea for me and a little rescue for the solution prepared for
our customer (he wants "increased" security of data by means of copy
backups, and the prepared solution has only one tape drive - an LTO
Autoloader, to be exact - and we swore to solve it). A good point is that
the disk pool can be big enough (about 50 GB for roughly 130 GB of total
backed-up data).

But one more question: is it possible to ensure that data moved from the
clients to the primary disk storage pool will NEVER be migrated to the tape
pool before being backed up to the copy tape pool? The only way that occurs
to me is changing (by a scheduled script) the NEXT_STG_POOL parameter of
the disk pool, that is, disabling it before the copy backup and enabling it
after the copy backup, and then migrating the data. I think that emptying
the NEXT_STG_POOL parameter ensures that the data cannot be migrated there.
Please tell me if this is a good idea, or if some better solution exists.
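
For what it is worth, the toggle Tomáš describes could be written as a
short admin script (a sketch; pool names are placeholders - note that
nextstgpool="" is the documented way to clear the parameter):

    update stgpool diskpool nextstgpool=""       /* detach: nothing migrates */
    backup stgpool diskpool copypool
    update stgpool diskpool nextstgpool=tapepool /* re-attach */
    update stgpool diskpool highmig=0            /* force migration, then
                                                    reset the threshold */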

Thanx
Tom

> -----Original Message-----
> From:   sal Salak Juraj [SMTP:[EMAIL PROTECTED]]
> Sent:   4 May 2001 8:55
> To: [EMAIL PROTECTED]
> Subject:  AW: Copy storage pools
> 
> Hallo Tomáš,
> 
> you could use the standard storage pool hierarchy,
> backing up first to a disk storage pool,
> which later migrates to the primary tape pool
> (you probably already do this).
> 
> If your disk storage pool is large enough not to be migrated for, say, a
> couple of days,
> then you can schedule
> backup stgpool your_disk_storage_pool your_tape_backup_pool
> often. This will create the redundancy you wish.
> 
> When the data from the disk storage pool are later
> migrated to your primary tape storage pool,
> there will be no need to back them up again;
> TSM will keep the link between those two copies.
> 
> I have four tape drives and I still use this technique in order to
> minimise usage of the tapes and to speed things up.
> 
> 
> For older backup data you already have in your tape storage pool,
> you could first do
>  move data some_tape stgpool=your_disk_storage_pool,
> then backup and migrate again just as above.
> 
> 
> Considering the prices of large disks, this can be cheaper than having
> two tape drives, but I still advise you: request a second tape drive from
> your management.
> 
> A system with only one tape drive is restricted in several ways
> (reclamation, no two parallel restores possible, a single drive is a
> single point of failure: no backup/restore if the drive does not work,
> etc.)
> 
> regards
> Šalak Juraj
> 
> KEBA AG
> Softwareentwicklung Bankautomation
> Gewerbepark Urfahr 14 - 16
> Postfach 111
> A-4041 Linz
> Österreich
> 
> Tel. ++43/732/7090-7461
> Fax ++43/732/730910
> e-mail: [EMAIL PROTECTED]
> www.keba.co.at
> 
> 
> 
> 
> 
> 
> -----Original Message-----
> From: Hrouda Tomáš [mailto:[EMAIL PROTECTED]]
> Sent: Friday, 4 May 2001 06:45
> To: [EMAIL PROTECTED]
> Subject: Copy storage pools
> 
> Hi all,
> I want to make sure:
> 1. Do I need two mountpoints (at minimum two drives) if I want to back up
> a primary tape storage pool to a copy storage pool?
> 2. Is there any other way to make backups to copy storage pools with only
> a single tape drive?
> 
> Thanx Tom
> 
> Tomáš Hrouda, AGCOM Smiřice
> Tivoli Certified Consultant
> Storage Manager Specialist
> [EMAIL PROTECTED]
> 049-5941312, 0604-296521
> ICQ#77892561
> 
> 



AW: Copy storage pools

2001-05-03 Thread sal Salak Juraj

Hallo Tomáš,

you could use the standard storage pool hierarchy,
backing up first to a disk storage pool,
which later migrates to the primary tape pool
(you probably already do this).

If your disk storage pool is large enough not to be migrated for, say, a
couple of days,
then you can schedule
backup stgpool your_disk_storage_pool your_tape_backup_pool
often. This will create the redundancy you wish.
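
The frequent storage pool backup could be driven by an administrative
schedule, for example (a sketch; names, times and the period are
placeholders):

    define schedule backup_diskpool type=administrative -
       cmd="backup stgpool diskpool copypool" active=yes -
       starttime=08:00 period=4 perunits=hours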

When the data from the disk storage pool are later
migrated to your primary tape storage pool,
there will be no need to back them up again;
TSM will keep the link between those two copies.

I have four tape drives and I still use this technique in order to
minimise usage of the tapes and to speed things up.


For older backup data you already have in your tape storage pool,
you could first do
 move data some_tape stgpool=your_disk_storage_pool,
then backup and migrate again just as above.


Considering the prices of large disks, this can be cheaper than having two
tape drives, but I still advise you: request a second tape drive from your
management.
A system with only one tape drive is restricted in several ways
(reclamation, no two parallel restores possible, a single drive is a single
point of failure: no backup/restore if the drive does not work, etc.)

regards
Šalak Juraj

KEBA AG
Softwareentwicklung Bankautomation
Gewerbepark Urfahr 14 - 16
Postfach 111
A-4041 Linz
Österreich

Tel. ++43/732/7090-7461
Fax ++43/732/730910
e-mail: [EMAIL PROTECTED]
www.keba.co.at






-----Original Message-----
From: Hrouda Tomáš [mailto:[EMAIL PROTECTED]]
Sent: Friday, 4 May 2001 06:45
To: [EMAIL PROTECTED]
Subject: Copy storage pools

Hi all,
I want to make sure:
1. Do I need two mountpoints (at minimum two drives) if I want to back up a
primary tape storage pool to a copy storage pool?
2. Is there any other way to make backups to copy storage pools with only a
single tape drive?

Thanx Tom

Tomáš Hrouda, AGCOM Smiřice
Tivoli Certified Consultant
Storage Manager Specialist
[EMAIL PROTECTED]
049-5941312, 0604-296521
ICQ#77892561





AW: NT Performance issues

2001-05-03 Thread sal Salak Juraj

Morgan,

I am afraid you were a bit upset when writing
your mail. Stating that Arcserve is capable of 100+ MB/sec
seems either not to be fair, or you must have very
sophisticated hardware.

In order to give you any advice, more information is needed.
If you happen, for example, to use a 10 Mbit network,
then your 1 MB/s performance is quite good.

I use NT on both sides as well, with 100 Mbit in between,
and see restore performance of 4-18 GB/hour depending
on different factors (the upper limit is only possible through
data compression)
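
As a rough sanity check (simple arithmetic, not from the original mail):

    10 Mbit/s  / 8 = 1.25 MB/s theoretical maximum,
                     so a sustained 1 MB/s is close to wire speed
    100 Mbit/s / 8 = 12.5 MB/s = roughly 45 GB/hour theoretical,
                     so 4-18 GB/hour uses only part of the link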

regards
juraj salak

-----Original Message-----
From: MORGAN TONY [mailto:[EMAIL PROTECTED]]
Sent: Thursday, 3 May 2001 18:40
To: [EMAIL PROTECTED]
Subject: NT Performance issues

Hi All,

I run NT TSM 4.1.x.x at the client (43 clients) and the server, and
performance is rubbish...

Arcserve was 100 times faster!!! (at minimum)

I have applied all the tuning covered in the Tivoli Adv. Mgmt course... and
can't get much past 1 MB/sec.

So that I can justify (or not) retaining Tivoli... any ideas where it's all
gone wrong?

I will send details of my settings/hardware to anyone interested.

Is it really just a crabby product???

Rgds
Tony Morgan
Fortis Bank UK



AW: DISK reclamation?

2001-05-03 Thread sal Salak Juraj

hallo jack,

no need to worry about reclamation of disk storage pools at all.
(Primary) disk storage pools will be emptied according to
the migration limits set by you.
Maybe you should read the TSM documentation; there is a nice
chapter about its concepts in one of the books (sorry, I forgot
where exactly, but it is easy to find)
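
The migration limits Juraj refers to are the high/low migration thresholds
on the pool (a sketch; the pool name and percentages are placeholders):

    update stgpool diskpool highmig=80 lowmig=20
    /* at 80% utilization TSM starts migrating to the next pool
       and stops once utilization drops to 20% */
    query stgpool diskpool format=detailed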

best regards
Juraj Salak


-----Original Message-----
From: Jack McKinney [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 2 May 2001 22:28
To: [EMAIL PROTECTED]
Subject: DISK reclamation?

 TSM 4.1.3.2 (I think... 4.1 plus whatever patch they put on their site
last week) on AIX 4.3.3.

 Firstly, is there a FAQ associated with this list?   I have a feeling
this question may have come up before...

 How do DISK volumes reclaim unused space?  On a tape, since you cannot
write to the beginning of a tape you are reading from, you have to copy the
data to another tape to reclaim.
 On a disk volume, this should not be necessary, since you can copy
data from the middle of a volume to the beginning of the volume.
 I just installed a new TSM server here with disk volumes as the
primary storage pool.  Should I be worried about reaching the "end" of the
disks?  Will TSM fill the "holes" in each volume with new data, or will it
shift all of the data to the beginning of the volume, just moving the holes
to the end of the volume?
 If not, should I put one of the volumes into a separate storage pool
and then tell TSM to reclaim my main disk pool to this second pool, and
have that pool migrate back to the main one?   I would hate to do this if
it was not necessary, since it would waste a lot of good space for backups.
 OTOH, I'd hate to put ALL of my disks into the one storage pool, and
then be unable to do backups due to lack of reclamation...
 Right now, I have some disks that I have not used, yet (once a part
arrives, I am going to mirror the original volumes).   If I wait until we
have filled the disk storage pool and see what happens, I can then recover
if backups stop by using one of the other disks as a reclamation pool. I'd
rather not trial and error on a production server, though...

--
"There are two kinds of spurs, my friend: Jack McKinney
 Those that come in by the door, and  [EMAIL PROTECTED]
 those that come in by the window.http://www.lorentz.com
 -Tuco, The Good, The Bad, and The Ugly
1024D/D68F2C07 4096g/38AEF076



AW: AW: DSM.OPT override question.

2001-05-02 Thread sal Salak Juraj

Hi Andy,

you are quite right,
one should never ignore a vendor's backup requirements;
I agree with you that this would be a bad idea.

Maybe having been misunderstood, I will repeat myself:

In addition to your statements,
I still propose to speak about restore requirements
instead of backup requirements whenever possible.

As trivial and natural as this change sounds,
it is a major paradigm change
which easily leads to a better understanding of the real requirements,
even in the case of experienced users and vendors.

Can you suggest/imagine any backup requirement
where there is no restore requirement behind it?
And be they implicit/instinctive only, restore
requirements are what enforces all backups.
So why do most of us always speak about backup requirements
but almost never about restore requirements?
For me this is a historical misconception only.

Restore requirements enforce backup recommendations;
the opposite is not true at all -
there is a one-way dependency between those two.

Backup requirements defined per se
easily turn out to be either superfluous or even insufficient
when later cross-checked against the actual restore requirements.

regards
Juraj Salak

P.S. By the way - *SM is one of the very few products which
support - through management classes - this way of thinking.




-----Original Message-----
From: Andy Raibeck [mailto:[EMAIL PROTECTED]]
Sent: Friday, 27 April 2001 16:34
To: [EMAIL PROTECTED]
Subject: Re: AW: DSM.OPT override question.

If the product's vendor is making a recommendation about how to back up (or
how not to back up) their product, it's probably a good idea to heed that
recommendation. You do not want to be in a position where you need to
restore, and the vendor can not help because you did not follow their
recommended backup procedure.

That said, it would be a good idea to ask the vendor why they make that
recommendation, so that you can evaluate its technical merits. In this
case, the vendor may have a very good reason for recomending against
incremental backups; on the other hand, perhaps they do not understand what
is meant by "incremental" backup, so again, a discussion as to why they
make the recommendation is a good idea, so that all parties understand the
issues.

Regards,

Andy

Andy Raibeck
IBM Tivoli Systems
Tivoli Storage Manager Client Development
e-mail: [EMAIL PROTECTED]
"The only dumb question is the one that goes unasked."
"The command line is your friend"


sal Salak Juraj <[EMAIL PROTECTED]>@VM.MARIST.EDU> on 04/26/2001 11:06:16 PM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:  "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED]
cc:
Subject:  AW: DSM.OPT override question.



not a direct help for your OPT problem,
but a general thought about the misconception
it has been caused by:

Much too often we speak - with our
vendors and bosses and customers as well -
about backup requirements.

NOBODY HAS ANY BACKUP REQUIREMENTS;
WE ALL ONLY HAVE RESTORE REQUIREMENTS.

Backup is only a way to accomplish them,
backup is only a tool and maybe a method,
but not our target.

If you happen to make your vendor understand this,
the chances are better that he will not mind
what kind of backup - incremental or selective - you use.

regards
Juraj Salak


-----Original Message-----
From: Alan Davenport [mailto:[EMAIL PROTECTED]]
Sent: Thursday, 26 April 2001 21:03
To: [EMAIL PROTECTED]
Subject: DSM.OPT override question.

One of our vendors insists that their product cannot be backed up
incrementally and insists a selective backup is required. We've
explained that a point in time restore is possible from incremental
backups however they still insist that is no good.

We've included their directory in our include/exclude list for the node,
bound to the management class that has been set up especially for this
product's backups. The problem I'm having is that we have to use
preschedule/postschedule commands to shut down and restart the database
before/after the backups. We do not want to shut down the database for
the normal nightly incremental backup of the node.

I swear I read someplace that it is possible to override the default
DSM.OPT file for a backup schedule from the command line, but I cannot
remember where or how to do this. Can anyone help please?

I would like to have the dir in question excluded from the normal
incremental backup and for the selective backup include the needed
pre/post schedule commands and the directory in question with an include
statement.

Al
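
One common approach (an editorial note, not part of the original thread -
verify against the client manual for your level): register a second node
name for the vendor application and run a second scheduler instance with
its own options file, containing the pre/post commands and the include
statement, while the main node's DSM.OPT excludes the directory:

    dsmc schedule -optfile=c:\tsm\vendor.opt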



AW: Ramifications of Delete Volume as pertains to next backup cycle.

2001-05-02 Thread sal Salak Juraj

Yes, you are right: the files will be backed up as if they were new on the
client.

Besides, under some circumstances you are not right with the premise
"with no chance of recovery". If the volume deleted belongs to a primary
pool
and had been backed up by means of "backup storage pool",
then you can restore the files.
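
For a damaged primary volume that has been backed up to a copy pool, the
recovery Juraj alludes to is "restore volume" rather than "delete volume"
(a sketch; the volume name is a placeholder):

    update volume 000123 access=destroyed
    restore volume 000123   /* recreates the files from the copy pool */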

regards
juraj salak

-----Original Message-----
From: Alan Davenport [mailto:[EMAIL PROTECTED]]
Sent: Friday, 27 April 2001 16:21
To: [EMAIL PROTECTED]
Subject: Ramifications of Delete Volume as pertains to next backup cycle.

I have a question regarding what happens after a "DELETE VOL X
DISCARD=YES" is issued. It is not about what happens to the data on the
tape. I realize that the data is then gone with no chance of recovery.
My question is what does *SM do during the next backup cycle. Will *SM
see the files that were previously backed up on the tape as new files on
the client and then back them up again?

  Thank you,
  Al



AW: Immediate client actions

2001-05-02 Thread sal Salak Juraj

The schedule randomization percentage affects
immediate client actions as well, I believe.
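
If randomization is the culprit, a quick test is to check and reduce it
(standard admin commands; verify the setting name on your server level):

    query status      /* shows "Schedule Randomization Percentage" */
    set randomize 0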

regards
Juraj Salak


-----Original Message-----
From: Marc D. Taylor [mailto:[EMAIL PROTECTED]]
Sent: Monday, 30 April 2001 21:54
To: [EMAIL PROTECTED]
Subject: Immediate client actions

Hello All,

Does anyone out there routinely use the Immediate Client Actions
utility?  I have never gotten it to work for me.  And by work, I mean that
I set up some action that I want to happen immediately and I add it.  In my
mind, as soon as I hit the add button, that action should start.

Has anyone else had this problem?  I am using ADSM Server 3.1.2.30 on AIX
4.2.1.

TIA.

Marc Taylor



AW: LTO Tape Drive Performance

2001-05-02 Thread sal Salak Juraj

Thanks!

Interesting reading, however, with a limited scope of usability.
These tests were optimised for highest throughput / streaming
(fair across all technologies, but for this single task only),
while many of us have to cope with lower throughputs because of sets of
smaller files, network limitations etc. In such environments the tape
technologies tested can compare quite differently.

regards
Juraj salak

-----Original Message-----
From: Dmochowski, Ray [mailto:[EMAIL PROTECTED]]
Sent: Friday, 27 April 2001 15:06
To: [EMAIL PROTECTED]
Subject: LTO Tape Drive Performance

To all 
A "new" performance paper by Progressive Strategies is posted on the Tivoli
site -
SEE ANALYST REPORTS:
Tape Drive Performance Comparisons-Using Tivoli Storage Manager
http://www.tivoli.com/products/solutions/storage/storage_related.html

Ray Dmochowski
Schering-Plough Research Institute
Kenilworth, NJ 07033, U.S.A.
(908) 740-3261
[EMAIL PROTECTED]









AW: DSM.OPT override question.

2001-04-27 Thread sal Salak Juraj

not a direct help for your OPT problem,
but a general thought about the misconception
it has been caused by:

Much too often we speak - with our
vendors and bosses and customers as well -
about backup requirements.

NOBODY HAS ANY BACKUP REQUIREMENTS;
WE ALL ONLY HAVE RESTORE REQUIREMENTS.

Backup is only a way to accomplish them,
backup is only a tool and maybe a method,
but not our target.

If you happen to make your vendor understand this,
the chances are better that he will not mind
what kind of backup - incremental or selective - you use.

regards
Juraj Salak


-----Original Message-----
From: Alan Davenport [mailto:[EMAIL PROTECTED]]
Sent: Thursday, 26 April 2001 21:03
To: [EMAIL PROTECTED]
Subject: DSM.OPT override question.

One of our vendors insists that their product cannot be backed up
incrementally and insists a selective backup is required. We've
explained that a point in time restore is possible from incremental
backups however they still insist that is no good.

We've included their directory in our include/exclude list for the node,
bound to the management class that has been set up especially for this
product's backups. The problem I'm having is that we have to use
preschedule/postschedule commands to shut down and restart the database
before/after the backups. We do not want to shut down the database for
the normal nightly incremental backup of the node.

I swear I read someplace that it is possible to override the default
DSM.OPT file for a backup schedule from the command line, but I cannot
remember where or how to do this. Can anyone help please?

I would like to have the dir in question excluded from the normal
incremental backup and for the selective backup include the needed
pre/post schedule commands and the directory in question with an include
statement.

Al



AW: exporting nodes

2001-04-26 Thread sal Salak Juraj

Yes.
If you happen to have free disk space for your largest node,
it is a breeze.
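
Export to disk uses a FILE device class (a sketch; device class, directory
and node names are placeholders):

    define devclass exportclass devtype=file directory=/export maxcapacity=2g
    export node joe_node filedata=all devclass=exportclass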
regards
juraj salak

-----Original Message-----
From: Joseph Dawes [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 25 April 2001 22:47
To: [EMAIL PROTECTED]
Subject: exporting nodes

Has anyone ever exported a node to a file(disk) instead of tape? I'm moving
about 700 gig of data.




joe



AW: Incremental DBbackups

2001-04-25 Thread sal Salak Juraj

No.
That's why I perform incremental db backups
to disk (on another server).
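
An incremental database backup to disk might look like this (a sketch;
names are placeholders, and the directory is assumed to be a mount of the
other server):

    define devclass dbfileclass devtype=file directory=/otherserver/tsmdb -
       maxcapacity=1g
    backup db type=incremental devclass=dbfileclass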
regards
juraj salak

-----Original Message-----
From: Gill, Geoffrey L. [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, 25 April 2001 22:15
To: [EMAIL PROTECTED]
Subject: Incremental DBbackups

Hello all,

A question about incremental DBbackups. If I run 10 incremental backups on
the database per day will they all be written to the same tape, provided
there is room, as long as the tape remains in the library?

Thanks for the info,

Geoff Gill
TSM Administrator
NT Systems Support Engineer
SAIC
E-Mail:   [EMAIL PROTECTED]
Phone:  (858) 826-4062
Pager:   (888) 997-9614



AW: LTO Performance

2001-04-25 Thread sal Salak Juraj

Hello,

from what I know about the currently available tape technologies,
I guess that AIT-2 technology might be best for TSM,
because TSM's performance strongly relies on short search times
and on short tape load/unload cycles.
This technology seems to be reliable and durable enough as well.

Can anybody either confirm or disprove that for me?

best regards
Juraj Salak
Austria


-----Original Message-----
From: Richard Sims [mailto:[EMAIL PROTECTED]]
Sent: Friday, 20 April 2001 19:40
To: [EMAIL PROTECTED]
Subject: Re: LTO Performance

>We recently upgraded our 3466 model A01 with two 3575 tape libraries to a
>3466 model C01 with a 3584 tape library.  We also upgraded from ADSM 3.1.1
>to TSM 4.1.  I have begun to notice that our tape operations are taking
>significantly longer than they did before. ...

Debbie - I think many of us have been wondering what the reality of 3583
 LTO Ultrium tape drives would be, and your posting may incite
more exploration.  As I understand the LTO initiative, Ultrium was
intended to sacrifice performance for capacity, while the Accelis flavor
was to favor performance but offer limited capacity.  My reading says that
Accelis development was abandoned in favor of the higher capacity that
the marketplace would want with Ultrium.  But your question makes me
wonder how much performance may have been compromised.

Perform the tar test that Paul recommended and let us know what you
are able to determine.

  thanks,  Richard Sims, BU



AW: Reducing/compressing the database

2001-04-25 Thread sal Salak Juraj

Shawn,

with a database of this size
you might even consider splitting it across two TSM instances.
There are advantages and drawbacks coupled with that,
but it works.

>> - Migrate all the client nodes one by one to other machines with
>> import/export.
I did it when I moved from OS/2 to NT Server,
and it worked flawlessly.
It is even easier if you have enough tape resources,
or can afford temporary disk space to export/import single clients.

regards
Juraj salak

-----Original Message-----
From: Shawn Drew [mailto:[EMAIL PROTECTED]]
Sent: Sunday, 22 April 2001 08:30
To: [EMAIL PROTECTED]
Subject: Reducing/compressing the database

TSM v3.7.3

I have read a lot on this list about reducing the database because my
situation is pretty bad.  We have a 103 GB database that was 97% used!  I
finally was permitted to fix the outrageous retention settings, and got it
down to 50% utilization, but the Maximum Reduction value is still 0.  Now I
want to get the database's assigned capacity to the IBM "recommended" max
of 70 GB. (This is from the ISSS last year in San Diego; I was the guy that
had an 80 GB db in one of the seminars.) From reading this list, it seems I
have a few options, but could not determine the best route.  Down time IS a
factor for this.   The performance on this machine, for restores, is
dramatically slower than on other machines, and since it seems all else is
almost equal, I am assuming its the db size.
First of all, my reason for doing this is to get better performance on my
restores.

So, will defragging the database really improve my restore times?  It seems
pointless otherwise.

It seems my options are:

- dumpdb/loaddb - I have read some horror stories about this and really
hesitate to use it.  Also, the loaddb seems to take very long, judging from
other people's experience (2 days for a 40gb db, I think I read?).

- unloaddb/loaddb - The only difference I can find between this and the
previous one is that it defrags the database.  And it seems that the loaddb
portion is the same, and is subject to the same unreliability and time
problems.  (This is the one described in the manuals to solve my situation.)

- Richard recommended the backup db/restore db options over the
dumpdb/loaddb because it is more reliable and faster.  Does this also offer
the defrag benefit?  And how long would it take?

- Migrate all the client nodes one by one to other machines with
import/export.  Kill and rebuild TSM and the db, then move everything back.
This seems like the one with the least downtime, which may make it the best
and least risky option.  But it will take a LONG time and strain my other boxes.

thanks
shawn

___
Shawn Drew
Tivoli IT - ADSM/TSM Systems Administrator
[EMAIL PROTECTED]



AW: RAID-5 Vs Mirroring

2001-04-25 Thread sal Salak Juraj

Hi Mahes,

just few thoughts:


>> 1. With RAID 5 at the AIX level, is a second, synchronized copy
 of DB volumes still required?

It has been argued on this forum that TSM mirroring has
some advantages over an OS-based RAID system.
Even so, OS-based RAID with only single database volumes
would definitely still protect against a single disk failure.

>> 2. If I do not keep a second copy and take daily full backups, would
 it be RISKY?
A database backup is mandatory whether you use RAID 5 or anything else.
I, for example, make a full backup each day and about 10 incremental
backups in between.
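That daily-full-plus-incrementals pattern is just two forms of the same command (the device-class names below are invented; substitute your own):

```
backup db devclass=TAPECLASS type=full            (once a day)
backup db devclass=DBBACKFILE type=incremental    (several times in between)
```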

Be sure you can perform a database restore - do test it!
And be sure you can afford the downtime in case of need.

>>  IBM's version is that since a second copy MUST be kept
That is news to me. I think it is only recommended.

RAID 5 and an 80 GB database: do not forget that database updates (such as a
restore from backup) will take much longer with RAID 5!

regards
Juraj Salak




-Original Message-
From: Mahesh Babbar [mailto:[EMAIL PROTECTED]]
Sent: Sunday, April 22, 2001 09:53
To: [EMAIL PROTECTED]
Subject: RAID-5 Vs Mirroring

 Hi all,

 Environment:

 NSM 3466, RS 6000, AIX 4.3.2 , TSM version 3.7.4,

 12 x 18.2 GB disk ( for DB and Diskpool Volumes)

 My current DB is of 80 GB size and is alarmingly utilized ( 95
 %). The DB volumes are mirrored inside TSM ( Second Copy).

 Therefore another 80 GB space is being used for the second copies. In
 order to have more usable space for DB, a suggestion has been mooted
 to go for RAID-5 at the hardware level.

 IBM's version is that since a second copy MUST be kept, going for
 RAID 5 will require more disk space. Now my question is:

 1. With RAID 5 at the AIX level, is a second, synchronized copy
 of DB volumes still required?

 2. If I do not keep a second copy and take daily full backups, would it
 be RISKY?

 3. Does anybody see any pitfall ahead in configuring RAID 5 at this
 juncture?

 Any comments are welcome!

 Regards

 MAhesh



AW: ANR0102E

2001-04-20 Thread sal Salak Juraj

.. looks like a problem ...

Find the oldest database backup you have
and try to recall whether
it predates your ANR0102E problem.
If so, keep that database backup safe.

Back up your DB and related configuration files,
take your DB offline,
and perform either AUDIT DB FIX=YES DETAIL=YES
or DUMP DB / LOAD DB,
and hope that it helps.
If not, do exactly what the manual says -
"contact your service representative" -
and organise some cash.
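As a sketch only (the audit runs with the server halted, and the exact utility options vary by server level, so check your platform's Administrator's Guide first):

```
halt                                  (from an administrative client)
dsmserv auditdb fix=yes detail=yes    (from the server directory, server down)
dsmserv                               (restart the server when done)
```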

Better to do it today than tomorrow.

Regards

Juraj Salak

-Original Message-
From: Forgosh, Seth [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 20, 2001 14:25
To: [EMAIL PROTECTED]
Subject: Re: ANR0102E

Thanks. I read that in the messages manual also. Does that mean I need to
take the server offline to do a DB audit, or is something a little less
vexing in order?
Seth Forgosh
Seth Forgosh

-Original Message-
From: Gisbert Schneider [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 20, 2001 9:19 AM
To: [EMAIL PROTECTED]
Subject: Re: ANR0102E


ANR0102E Source file(line number): Error error code inserting row in table
"table name".

Explanation: An internal error has occurred in an attempt to add data to a
server database table. This message always accompanies another error message
and provides more detail about that error.

System Action: The activity that generated this error fails.

User Response: Contact your service representative.







AW: ? BUFPOOLSIZE limit under TSM4.1 / NT

2001-04-20 Thread sal Salak Juraj

Hello,

The result from this thread and some offline responses:

there seems to be no limit on BufPoolSize
for the *SM server on NT starting with v3.7,
except for the real free RAM size (practical limit)
or the free virtual memory size (theoretical limit).

So all you NT folks with a low cache hit ratio:
go ahead and purchase RAM
to speed up your DB.
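For reference, the buffer pool is set in dsmserv.opt, with the value given in kilobytes; the figure below is only an example, not a recommendation:

```
* dsmserv.opt - example value only, in KB (here 256 MB)
BUFPOOLSIZE 262144
```

Afterwards, check the effect on the Cache Hit Pct. with Q DB F=D.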

thanks
Juraj Salak, Linz, Austria


-Original Message-
From: Volovsek, Jay [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 19, 2001 17:23
To: [EMAIL PROTECTED]
Subject: Re: ? BUFPOOLSIZE limit under TSM4.1 / NT

I am running my TSM box on W2K with a buffpool size of 196MB and it is
running fine.  That is the number that TSM chose on install.

-Original Message-
From: sal Salak Juraj [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 19, 2001 7:24 AM
To: [EMAIL PROTECTED]
Subject: Re: ? BUFPOOLSIZE limit under TSM4.1 / NT


Hi and thanks,
 
the very same sentence can be found in the v3 manual,
but there it is simply not true.
Using values larger than 64 MB with 3.1.7 will
cause the *SM server to start slowly, more slowly, or not at all (depending
on the BufPoolSize value),
and the space declared above 64 MB will apparently never be used at all.

Is anyone running TSM 4.x on NT 4.x with BufPoolSize > 64 MB?
Or at least on Windows 2000 - maybe the buffer pool works better there?
 
regards
Juraj Salak
 
 

-Original Message-
From: Ray Pratts [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 19, 2001 14:01
To: [EMAIL PROTECTED]
Subject: Re: ? BUFPOOLSIZE limit under TSM4.1 / NT
According to the 4.1 Admin. Reference, "The maximum value is limited by
available virtual memory size." 

sal Salak Juraj wrote: 


hello, 

I have purchased an upgrade from TSM Server 3.1.7 under NT 4 
to the newest TSM server version. 


There is a limit on BufPoolSize of max. 65536 in my current system. 


I hope this limit is much larger with TSM 4.x - I would like 
to purchase some RAM and speed up my system. 
Can anyone tell me how large the limit is, if any? 


Another question: 
are there any late server versions I should avoid because of known 
problems? 
(NT4, DLT4, LTO) 


regards 


Salak Juraj 


KEBA AG 
Softwareentwicklung Bankautomation 
Gewerbepark Urfahr 14 - 16 
Postfach 111 
A-4041 Linz 
Österreich 


Tel. ++43/732/7090-7461 
Fax ++43/732/730910 
e-mail: [EMAIL PROTECTED] 
www.keba.co.at



Re: ? BUFPOOLSIZE limit under TSM4.1 / NT

2001-04-19 Thread sal Salak Juraj

Hi and thanks,
 
the very same sentence can be found in the v3 manual,
but there it is simply not true.
Using values larger than 64 MB with 3.1.7 will
cause the *SM server to start slowly, more slowly, or not at all (depending
on the BufPoolSize value),
and the space declared above 64 MB will apparently never be used at all.

Is anyone running TSM 4.x on NT 4.x with BufPoolSize > 64 MB?
Or at least on Windows 2000 - maybe the buffer pool works better there?
 
regards
Juraj Salak
 
 

-Original Message-
From: Ray Pratts [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 19, 2001 14:01
To: [EMAIL PROTECTED]
Subject: Re: ? BUFPOOLSIZE limit under TSM4.1 / NT
According to the 4.1 Admin. Reference, "The maximum value is limited by
available virtual memory size." 

sal Salak Juraj wrote: 


hello, 

I have purchased an upgrade from TSM Server 3.1.7 under NT 4 
to the newest TSM server version. 


There is a limit on BufPoolSize of max. 65536 in my current system. 


I hope this limit is much larger with TSM 4.x - I would like 
to purchase some RAM and speed up my system. 
Can anyone tell me how large the limit is, if any? 


Another question: 
are there any late server versions I should avoid because of known 
problems? 
(NT4, DLT4, LTO) 


regards 


Salak Juraj 


KEBA AG 
Softwareentwicklung Bankautomation 
Gewerbepark Urfahr 14 - 16 
Postfach 111 
A-4041 Linz 
Österreich 


Tel. ++43/732/7090-7461 
Fax ++43/732/730910 
e-mail: [EMAIL PROTECTED] 
www.keba.co.at



? BUFPOOLSIZE limit under TSM4.1 / NT

2001-04-19 Thread sal Salak Juraj

hello,

I have purchased an upgrade from TSM Server 3.1.7 under NT 4
to the newest TSM server version.

There is a limit on BufPoolSize of max. 65536 in my current system.

I hope this limit is much larger with TSM 4.x - I would like
to purchase some RAM and speed up my system.
Can anyone tell me how large the limit is, if any?

Another question: 
are there any late server versions I should avoid because of known
problems?
(NT4, DLT4, LTO)

regards

Salak Juraj

KEBA AG
Softwareentwicklung Bankautomation
Gewerbepark Urfahr 14 - 16
Postfach 111
A-4041 Linz
Österreich

Tel. ++43/732/7090-7461
Fax ++43/732/730910
e-mail: [EMAIL PROTECTED]
www.keba.co.at



AW: More LOADDB woes!!!

2001-04-18 Thread sal Salak Juraj

Hi,

>> Ended up restoring from the day before's database backup. 
>> Gonna be some upset people in the A.M.

For the future, you might want to
either change the log mode to rollforward
or simply perform incremental database backups
between the full ones.
Limitation: much larger log volumes are necessary.

I have scheduled incremental database backups every 2 hours,
with disk files as the target device. This will not help
in case of a disaster, but it is perfect for
software problems like yours.
These backups tend to be very small and quick - look at
"Changed Since Last Backup" in Q DB F=D
to see how small this sort of backup would be at your site.
Limitation: only 32 incremental backups are allowed between full backups,
so you will not be able to schedule them arbitrarily often.
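The pieces above can be sketched roughly like this (the device-class name, directory, and times are invented; adjust to your site):

```
set logmode rollforward
define devclass DBBACKFILE devtype=file directory=d:\tsmdbbak
define schedule DBINCR type=administrative active=yes -
   cmd="backup db devclass=DBBACKFILE type=incremental" -
   starttime=01:00 period=2 perunits=hours
```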

best regards
Juraj Salak



-Original Message-
From: William Boyer [mailto:[EMAIL PROTECTED]]
Sent: Monday, April 16, 2001 04:30
To: [EMAIL PROTECTED]
Subject: More LOADDB woes!!!

After running for almost 23-hours, the LOADDB process terminated with:

ANRD PVRNTP(2536): RC 0x0 Reading cartridge block 00026776 to ddname
SYS00040.
ANR0664E LOADDB: Media not accessible in accessing data storage.
ANR1364I Input volume M00750 closed.
ANR5209I Dismounting volume M00750 (read-only access).
ANRD DLLOAD(1249): Premature End of Dump - Missing Volumes ?
VCS0201E   DATA LOST - VCSS NOT OPERATIVE 
ANR4032I LOADDB: Copied 2236928 database records.
ANR4033I LOADDB: Copied 351 bit vectors.
ANR4035I LOADDB: Encountered 0 bad database records.
ANR4036I LOADDB: Copied 78859100 database entries.
ANR4037I LOADDB: 4919 Megabytes   copied.
ANR4005E LOADDB: Database load process terminated due to error (-1).
ANR2106I : Quiescing database update activity.
ANR2107I : Database update activity is now quiesced.

Ended up restoring from the day before's database backup. Gonna be some
upset people in the A.M.

Bill Boyer
"Some days you are the bug, some days you are the windshield." - ??



AW: Should LOADDB take this long?

2001-04-18 Thread sal Salak Juraj

-Original Message-
From: William Boyer [mailto:[EMAIL PROTECTED]]
Sent: Sunday, April 15, 2001 23:50
To: [EMAIL PROTECTED]
Subject: Should LOADDB take this long?

TSM 3.7.3.0 running on OS/390 2.9

Last night our automated processes to shut the system down didn't take into
account that TSM would take a few extra minutes to halt due to reclamation
running. The automation script ended up taking TCPIP and CA-TLMS and
DFSMSRmm (in warn mode) down while TSM was still up and trying to close out
his tape processing. TSM ended up abending with a EC6. After our downtime,
TSM wouldn't come back up. It would sit there in a CPU loop with no I/O. The
last message in the joblog was "ANR0306I Recovery log volume mount in
progress."  It wouldn't come up any farther.  I managed to get a DUMPDB to run;
it took only 1/2 hour and dumped over 78 million database entries for a total
of 6.9MB.  I then did a FORMAT for all the db/log volumes and started the
LOADDB last night at 20:15.  It is still running, now 22 hours later,
and has only processed 70 million of those database entries.

I searched the archives, but there wasn't much on LOADDB. Should LOADDB take
this long when the DUMPDB only took 1/2hour? Good thing this is a holiday
weekend or the users and managers would be more upset than they
are. I tell them, Hey it wasn't my shutdown script that corrupted the
system!!!

Also, if anyone has any ideas on how I could have averted having to do the
DUMP/LOADDB processes I would be more than happy to hear them. I just
couldn't think of any way to bypass the recovery log processing during
startup, or to have the load cleared by itself.

TIA,
Bill Boyer
"Some days you are the bug, some days you are the windshield." - ??



AW: Should LOADDB take this long?

2001-04-18 Thread sal Salak Juraj

Hi,

It looks like your DB is well optimized for read operations,
less so for write/change operations.

A load takes long, but not necessarily that long.

My DB, with some 6 GB of used space, took more than an hour to dump,
but more than a day to load.
NT4, two 333 MHz Pentium CPUs.

I believe that an ADSM DB spread across more (16) DB volumes
over 2 RAID 1 arrays, implemented on
an 80 MB/s SCSI controller with a fair amount
of well-managed on-board RAM cache
and at least 2 independent SCSI channels,
speeds up writes to the ADSM database significantly.
(Please note: the same number of DB volumes could
be fatal in another HW environment.)

More useful in your current situation:
if you are using ADSM's software mirroring,
switch it off during the load or restore operation.
This will speed things up, and you can add the
redundancy back any time later.
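A hedged sketch of dropping and re-adding the TSM mirror copies around the load (the volume names are invented; check Q DBVOLUME for the real ones at your site):

```
q dbvolume                                  (note the copy volume names)
delete dbcopy d:\tsmdb\dbcopy1.dsm          (before the load/restore)
   ... run the loaddb or restore db ...
define dbcopy d:\tsmdb\db1.dsm d:\tsmdb\dbcopy1.dsm    (re-mirror afterwards)
```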


regards
Juraj Salak


-Original Message-
From: William Boyer [mailto:[EMAIL PROTECTED]]
Sent: Sunday, April 15, 2001 23:50
To: [EMAIL PROTECTED]
Subject: Should LOADDB take this long?

TSM 3.7.3.0 running on OS/390 2.9

Last night our automated processes to shut the system down didn't take into
account that TSM would take a few extra minutes to halt due to reclamation
running. The automation script ended up taking TCPIP and CA-TLMS and
DFSMSRmm (in warn mode) down while TSM was still up and trying to close out
his tape processing. TSM ended up abending with a EC6. After our downtime,
TSM wouldn't come back up. It would sit there in a CPU loop with no I/O. The
last message in the joblog was "ANR0306I Recovery log volume mount in
progress."  It wouldn't come up any farther.  I managed to get a DUMPDB to run;
it took only 1/2 hour and dumped over 78 million database entries for a total
of 6.9MB.  I then did a FORMAT for all the db/log volumes and started the
LOADDB last night at 20:15.  It is still running, now 22 hours later,
and has only processed 70 million of those database entries.

I searched the archives, but there wasn't much on LOADDB. Should LOADDB take
this long when the DUMPDB only took 1/2hour? Good thing this is a holiday
weekend or the users and managers would be more upset than they
are. I tell them, Hey it wasn't my shutdown script that corrupted the
system!!!

Also, if anyone has any ideas on how I could have averted having to do the
DUMP/LOADDB processes I would be more than happy to hear them. I just
couldn't think of any way to bypass the recovery log processing during
startup, or to have the load cleared by itself.

TIA,
Bill Boyer
"Some days you are the bug, some days you are the windshield." - ??



AW: file name too long when?

2001-04-15 Thread sal Salak Juraj

To my knowledge,
TSM respects the limits of the underlying file system.

A year ago I had a similar problem;
it turned out to be caused by names that really were too long,
too long in terms of NTFS.

However, we were able to create such over-long names in NT with the
following scenario:
mount a directory, e.g. c:\tmp\, under a drive letter (D:).
Now create files under d:. The limit applies to the sum of the directory
path length plus the file name length, so create a file with the maximum
allowed name length.
Now access the same file using the c:\tmp path.
In this case, the effective file name grew by length("c:\tmp"),
and thus went beyond the allowed limit.

I forget the exact limit,
and I do not know exactly which versions of NT show this behaviour
or whether it changed in later versions,
so you can only check whether this might be your problem.
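Purely as an illustration of the arithmetic (the exact NT limit is uncertain, as noted above; the classic Win32 MAX_PATH of 260 characters is assumed here):

```python
# Illustration only: the name length that matters is the full path length,
# so the same file can be within the limit via a short mount point (d:\)
# yet exceed it via the longer original path (c:\tmp\).
MAX_PATH = 260  # assumed limit (classic Win32 MAX_PATH); the real NT limit may differ

def effective_length(prefix: str, name: str) -> int:
    """Length of the file name as seen by the OS: path prefix plus name."""
    return len(prefix) + len(name)

name = "a" * 255  # a near-maximal file name

print(effective_length("d:\\", name))       # 3 + 255 = 258, within the assumed limit
print(effective_length("c:\\tmp\\", name))  # 7 + 255 = 262, over the assumed limit
```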

regards
Juraj Salak
Linz, Austria

-Original Message-
From: Glass, Peter [mailto:[EMAIL PROTECTED]]
Sent: Thursday, April 12, 2001 22:43
To: [EMAIL PROTECTED]
Subject: file name too long when?

One of our NT clients is getting a large number of 'ANE4018E ...: file name
too long' errors in their backups. The file names in question are indeed
quite long.
Does anybody know the TSM standard on file name lengths?
Can somebody point me to which TSM docs discuss this specifically?
Thank you!

Peter Glass
Distributed Storage Management (DSM)
Wells Fargo Services Company
> * 612-667-0086  * 866-249-8568
> * [EMAIL PROTECTED]
>



AW: RAID5 or 0+1

2001-04-05 Thread sal Salak Juraj

Hi,

As others have already pointed out, there are more factors to consider.

As for RAID 5 vs. RAID 1+0,
the latter will almost surely improve overall performance, at a higher cost.

Changing to either RAID 1 or RAID 1+0 will likely
speed up tasks that change the database enormously,
such as client backups or the restoration of the DB itself.

For the database I would consider 2 RAID 1 arrays over a single RAID 1+0
array, with a separate DB volume on each RAID 1 array.

This could considerably speed up your database reads and writes.
Allowing 2 separate DB volumes per RAID 1 array
would speed up reads even further,
as there would be up to 4 parallel I/Os - one per spindle.
But writing would cause up to 8 parallel I/Os - 2 per spindle,
which could slow things down.

But if your disk controller has a fair amount of cache,
this would not matter much,
and you could define even more DB volumes.
With my controller, with 128 MB of RAM and good
access optimisation,
4 DB volumes per RAID 1 array are better than 2,
but - it depends.
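If you go that way, adding DB volumes is straightforward (the paths and sizes below are invented; FORMATSIZE needs server 3.7 or later, otherwise format the volume with dsmfmt first):

```
define dbvolume e:\tsmdb\db01.dsm formatsize=2048
define dbvolume f:\tsmdb\db02.dsm formatsize=2048
extend db 4096
```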

RAID 1+0 would be nice for backup storage pools,
if you can afford it.



Regards
Juraj Salak



-Original Message-
From: Dearman, Richard [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 4, 2001 22:02
To: [EMAIL PROTECTED]
Subject: RAID5 or 0+1

I have been seeing high memory utilization and high disk-busy problems on
the disks that my database sits on when I do backups.  I'm running AIX
4.3.3 on an H70 with 2 CPUs and 1 GB of memory.  I was thinking of
moving my TSM database from a RAID 5 setup to a RAID 0+1 setup to see if I
gain any performance.  Does anyone have any suggestions on database
performance issues?

Thanks



AW: Backup of Remote Sites

2001-04-05 Thread sal Salak Juraj

Hi,
just my two cents:

- yes, compression should be turned on in such a configuration,
  and the *SM client is better suited for it than NTBackup.

- do think in terms of restore, not only in terms of backup:
while you will probably succeed with the backups,
will you still be able to perform well enough on restores,
which are non-incremental by their very nature?
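On the client side that is just an option-file setting (dsm.opt on NT); COMPRESSALWAYS is optional and shown only as a common companion option:

```
* dsm.opt (client) - enable client-side compression
COMPRESSION     YES
COMPRESSALWAYS  YES
```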

regards
Juraj Salak
[EMAIL PROTECTED]


-Original Message-
From: David Nash [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, April 4, 2001 17:50
To: [EMAIL PROTECTED]
Subject: Backup of Remote Sites

I have a question for all of the *SM/Network experts
out there.  We have a central office that we just
started using TSM at.  We also have several remote
offices that are connected to central office via
dedicated lines.  These sites are currently running
their own backups via NTBackup.  We are concerned that
these backups are unreliable/not offsite/not being done.
The dedicated lines are mostly 256Kbs lines but a few
are smaller.  Is it a good idea to try to back up these
sites across the WAN using *SM?  We realize that the first
backup would take a while, but after we suffer through that,
the amount of changed data would be small.  Is it a good
idea in this case to turn on client compression?  Any
suggestions would be appreciated.

Thanks,

--David Nash
  Systems Administrator
  The GSI Group



AW: Comparison of Backup Products

2001-03-28 Thread sal Salak Juraj

Hi,

I use TSM while my colleagues use HP OmniBack.

It looks like everybody just cooks with water - no product is magic.

I have had a couple of problems with my ADSM/TSM in the last 4 years,
one of them serious (it turned out to be a HW problem,
but only after sleepless weeks).

The colleagues have had problems as well.

I feel TSM has had a larger number of problems and annoyances overall,
while OmniBack had maybe fewer problems, but more of them were serious.

I have heard similar stories about Legato.

Base your decision on your needs; the
products are simply different and suit
different requirements differently.

Be sure to have some external support
and the ability to report errors.

Only apply new software versions when you really need them.
Always look for new patches and check their readmes
against your problems.

Use quality hardware.

Do not hope for bug-free software;
all the products are complex and have been under
rapid development for years.
Thus they are doomed to have bugs.

regards
Juraj Salak
Linz, Austria

-Original Message-
From: Tim Melly [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, March 28, 2001 15:14
To: [EMAIL PROTECTED]
Subject: Comparison of Backup Products

To All,

Due to the poor quality of the *SM code Tivoli has been releasing, and the
resultant fallout at my site, I've been asked to investigate other backup
solutions.  I'm familiar with Veritas NetBackup but have not worked with
any other backup products.  Does anyone have any experiences / information
on non-*SM products that they can share?

Thx, Tim Melly
  Bayer Corp.