It is very interesting watching all of the comments on this very emotive
topic.
The answer to the question 'are you taking monthly backups?' is always
'Yes - of course I am'. A monthly backup is a logical thing, not a physical
thing - does the user need to know the difference? You could even foll
Yes, it is all working ok now.
One gotcha we found was that our NT guy set up the option file for the local
drive clients with
clusterno
This was a mistake, as you don't want that option at all on the local
clients.
Specifying the option just caused duplicate backups of the cluster drive.
Hi,
What is the purpose of ODBC files like IP22519_ODBC.EXE in the same folder
as clients like IP22519.EXE?
Doesn't the client installation set include everything we need? (I mean, why is
it separate?)
Regards,
Burak
Hi *sm'ers
in the statistics of my archive job I found a value of 6800 objects
inspected and 5900 objects archived. This is a difference of about 900
objects. Does anyone have an idea what the reason for this big difference may
be? The server runs TSM version 4.2.2 and the client is version 3.7.
Hello Burak
An ODBC driver allows you to use a relational database tool that speaks SQL
to query the TSM database and display the results. The client installation
set does not include everything we need; you have to install the API and the
ODBC driver separately.
regards...
My organization will be implementing Windows XP
workstations. These workstations could be used by
multiple users. Is there any way for individual users
to backup and restore their own filespaces?
Hi TSM Guys,
I have a customer who moves volumes from a primary storage pool to an
offsite location by setting the volumes to "UNAVAILABLE". The storage
pool's migration threshold is set to 100% so that space reclamation is
prevented. What do I have to consider if I want to manually start space
rec
Hello TSM,
Then why is the API included in the client installation set? (it gives us the
option to install it or not)
Thanks
Burak
[EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
09.07.2002 14:06
Please respond to ADSM-L
Hi!
Does anyone have any tips on how they migrated their archived or 'long term
retention' backup data from other backup applications? We have multiple
years worth of data and on various tape formats.
Thanks in advance for any information you can provide.
Brenda Collins,
ING
612-342-3839
Hi *sm'ers
the amount of data pertaining to our Lotus Domino servers is growing
enormously. The Domino guys tell me that they can restore very old databases
although the Domino DBs were deleted a long time ago. When we try to
restore such an old database which on the Domino server has been deleted a
The ODBC driver is a completely separate entity from the rest of the
backup-archive client. It has no dependencies on the b-a client, and the
b-a client has no dependencies on the ODBC driver. Therefore, in order to
help reduce the already substantial size of the client package, the ODBC
drive
Sorry folks, I had a lot of problems with the list accepting large messages. My
attempts to break up the document only seem to have partly worked. If you would like
the full document, email me and I will send the whole thing to you.
Miles
-
Yes, you may change the access to readwrite and then reclaim, but you should
normally use 'offsite', not 'unavailable', access.
regards,
burak
[EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
09.07.2002 16:23
Please respond to ADSM-L
Hi Holger,
with TDP for Lotus Domino every database is treated as if it were one filespace. On
your TSM server you can see this using "q files".
Like with the BA client, TSM updates the filespace when you perform an incremental
backup and otherwise leaves the filespace as it is. So when you stop
Restore the data, move it to a system on which a TSM client
is installed and start a TSM backup.
(assuming you still have access to the hardware/software
which was used to backup the data in the past)
Regards,
Alexander
> Hi!
>
> Does anyone have any tips on how they migrated their archived or
Please include me in the email with the whole document. Thanks for sharing your work.
Mahesh
>>> [EMAIL PROTECTED] 07/09/02 09:06AM >>>
Sorry folks, I had a lot of problems with the list accepting large messages. My
attempts to break up the document only seems to have partly worked. If you would l
Hi Michael,
when we do a 'q files nodename' the result looks like this:

Node Name    Filespace    Platform    Filespace    Capacity    Pct
             Name                     Type         (MB)        Util
---------    ---------    --------    ---------    --------    ----
Hello
The ODBC driver is available only on Windows platforms, not on UNIX platforms.
You have to install the ODBC driver separately on Windows. A typical setup on
Windows gives you the backup-archive client, the API files, and the web
client.
Regards,
Enver Genc
Have you tried "AUDIT VOL 000987 FIX=YES"? That should fix your problem.
Al
-Original Message-
From: Steve Bennett [mailto:[EMAIL PROTECTED]]
Sent: Monday, July 08, 2002 4:19 PM
To: [EMAIL PROTECTED]
Subject: offsite tape will not reclaim, delete or move
I have an offsi
I would like to have a copy of the document
Thanks,
Gary Wallace
Storage Engineer
Email:[EMAIL PROTECTED]
Office:(916) 356-6465
Pager: (888) 787-9831
Cellular:(916) 799-4479
-Original Message-
From: Miles Purdy [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, July 0
Hopefully a simple question:
I have a select script that finds the tapes I want to be processed by
MOVE DRMEDIA. Is there a way to pipe the results from the select back into
that command? Here's the select:
select volume_name,state from drmedia where (volume_name in (select
volume_name from volumes w
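One common pattern (a sketch only, not from this thread: the SELECT is run through the dsmadmc admin client with -dataonly=yes, and each returned volume name is turned into a MOVE DRMEDIA command; the volume names, states, target state, and credentials below are all hypothetical):

```shell
#!/bin/sh
# Turn query output ("VOLNAME STATE", one per line) into MOVE DRMEDIA
# commands. In real use the input would come from something like:
#   dsmadmc -id=admin -password=secret -dataonly=yes "select ..."
gen_move_drmedia() {
    while read -r vol state; do
        # each emitted line could be fed back through dsmadmc
        echo "move drmedia $vol wherestate=$state tostate=vault"
    done
}

# Example with hypothetical volumes:
gen_move_drmedia <<EOF
000123 MOUNTABLE
000456 MOUNTABLE
EOF
```

The generated commands can then be piped into a second dsmadmc invocation, or written to a macro file and run with the MACRO command.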
On Wednesday, June 26, 2002, at 08:05 AM, Remeta, Mark wrote:
> Actually you can reconstruct the aggregates during a move data. I forget
> what version it started with but there is a command line option for move
> data called Reconstruct=yes that will reconstruct the aggregates
> during a
> move
The only way I know to fix it is to start over. In my shop it's usually caused
by the LAN group changing switches and such without telling me.
First check to see if you can ping IP addresses from the host out to the
client and the client to the host. Also, check if your cards are ok at the
clients as
I would like to look at the whole document.
Thanks!
Orville L. Lantto
Datatrend Technologies, Inc. (http://www.datatrend.com)
IBM Premier Business Partner
121 Cheshire Lane, Suite 700
Minnetonka, MN 55305
Email: [EMAIL PROTECTED]
V: 952-931-1203
F: 952-931-1293
C: 612-770-9166
Miles Purdy <
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
Brenda Collins
> Does anyone have any tips on how they migrated their archived or
> 'long term
> retention' backup data from other backup applications? We have multiple
> years worth of data and on various tape formats.
You're on
Yes, absolutely. Have them run the dsm program and they will be able to
back up and restore data on their own.
--
Joshua S. Bassi
IBM Certified - AIX 4/5L, SAN, Shark
eServer Systems Expert -pSeries HACMP
Tivoli Certified Consultant - ADSM/TSM
Sr. Solutions Architect @ rs-unix.com
An IBM Premier
In order to find the list of tapes to remove from my LTO every morning, and update their
DRM status, I use a script. Here is the relevant code. The actual script is much
longer and does several things. My script creates another KSH script.
Code:
CHECKOUT=/tmp/update_checkout_status.`date +%
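The "script that writes a script" pattern described above can be sketched like this; everything below is illustrative, not the poster's actual code, the volume names are made up, and the checkout command is only echoed rather than executed:

```shell
#!/bin/sh
# Sketch of a script that generates (and then runs) a second script.
# A real version would get the volume names from a dsmadmc query.
make_checkout_script() {
    # $1 = path of the script to generate
    {
        echo "#!/bin/sh"
        for vol in 000123 000456; do
            # echo only; in real use this line would invoke dsmadmc
            echo "echo checkout libvolume lto $vol remove=bulk"
        done
    } > "$1"
    chmod +x "$1"
}

CHECKOUT=/tmp/update_checkout_status.$(date +%Y%m%d)
make_checkout_script "$CHECKOUT"
sh "$CHECKOUT"      # prints the generated checkout commands
rm -f "$CHECKOUT"   # clean up the temporary script
```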
Hi, has anyone experienced a similar problem to this
BKI2017I: Blocksize is set to 131072 bytes
BKI0027I: Time: 07/07/2002 00:02:48 Object: 1 of 64 in process:
/oracle/P05/sapdata1/temp2_1/temp2.data1 Size: 2040.008 MB,
MGMNT-CLASS: BRBACKUPMC1, TSM-Server: AOTSM2_SAP.
BKI0027I: Time: 07/
The below schedule was scheduled for 1 PM. The client scheduler is
polling, and at 1:05 PM it updated and said the schedule would start in 33
minutes. What causes a schedule to execute at a time later than scheduled? I
am aware of staggered starts for scheduled events, but this is the only
The API is an entry point into the application (i.e., to write custom backup
clients, or use a TDP). ODBC is a view of the TSM database.
- Kai.
"Live in such a way that you would not be ashamed to sell your parrot to the
town gossip." -- Will Rogers
> -Original Message-
> From: Burak Demircan [mailto:[EMA
Hi Gisbert,
you're right, things seem to have changed since my last contact with the Connect Agent
for Lotus Notes...
With TDP Oracle I remember there's a tool called tdposync to synchronize backups
between rman (oracle) and the TSM server. Hopefully
there's a similar tool for TDP Domino so that
When I do a Query Mount command I get two ANR8376I messages about mount
points reserved in device class: status: RESERVED. We are running TSM
version 4.2.1.7 and using a 3494 that we are sharing between two TSM
servers. I was curious if somebody knows what that message means. We
have 6 3590 ta
Does anyone know if there is a manual or redbook about upgrading from TSM V4.1 to
V4.2? Are there any horror stories or is it pretty much cut and dried?
We just got a TSM server up and running (still in a testing phase but
it's running).
I recently tried to backup a new node I created and got this error -
ANS1329S Server out of data storage space
Now, the server only has an 80GB partition for data storage at the
moment. That is split into 7 10GB
Check your randomize setting with 'q stat'. By default it is 25%.
You can set it with set randomize.
bob
On Tue, Jul 09, 2002 at 01:33:40PM -0400, Martin, Jon R. wrote:
> The below schedule was scheduled for 1 PM. The client scheduler is
> polling and at 1:05 PM it updated, and said th
Hi,
It really depends on what platform you are using. We have setup
such a script on AIX and Linux (and it should work on any other unixes).
Here is an example:
#!/usr/bin/ksh
/usr/tivoli/tsm/client/ba/bin/dsmadmc -id=admin -password=admin
-outfile=/usr/tivoli/tsm/server/bin/scripts/tmp
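A filled-out version of that fragment might look like the following (a sketch only: the admin ID, password, file names, and the query are all placeholders, not the original poster's values, and the dsmadmc call is left commented out because it needs a live TSM server):

```shell
#!/bin/sh
# First, write the query results to a file (placeholders throughout):
#
#   /usr/tivoli/tsm/client/ba/bin/dsmadmc -id=admin -password=admin \
#       -dataonly=yes -outfile=/tmp/tsm_vols.txt \
#       "select volume_name from volumes"
#
# Then post-process the output file, one volume name per line:
process_volumes() {
    while read -r vol; do
        echo "processing volume $vol"
    done
}

# Example with canned data standing in for the real output file:
process_volumes <<EOF
000123
000456
EOF
```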
What's your Schedule Randomization percentage and what is the backup
window for this schedule? If I recall correctly randomization is for
every schedule even if it only runs on one node. And the randomization
starts from the moment the nodes connect to the server, not from the start
of the window
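As a rough illustration of the arithmetic (assuming the start offset is bounded by the randomization percentage of the schedule window; the 120-minute window here is made up, not from the thread):

```shell
#!/bin/sh
# Hypothetical numbers: a 120-minute window with 25% randomization
# bounds the start offset at 30 minutes.
window_minutes=120
randomize_pct=25
max_offset=$(( window_minutes * randomize_pct / 100 ))
echo "start may be delayed up to $max_offset minutes"
```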
pretty seamless
server is Version 4, Release 2, Level 1.9 on os390
clients are nt 4.0/2000 at Version 4 Release 2, Level 1.20
and netware 5.1 at Version 4 Release 2, Level 2
no major problems
even restored novell server afterwards
Tim Brown
Systems Specialist
Central Hudson Gas & Electric
284
Two quick suggestions.
1. Your pool seems to be named STORAGE. Run this command:
q stg STORAGE f=d, and look at the field "Maximum Size Threshold".
If it says "No Limit" then OK there. Otherwise this is the maximum size
of a file/object that can be sent to this pool.
2. What is the node you are tr
It's set to "No Limit". The nodes in question are a Linux/x86 and I
tried a Win2k just to test it.
I tried backing up maybe 200MB off the Win2k node and that failed. The
Linux box is probably 2GB total.
Nothing big at all.
sim
David Longo wrote:
>Two quick suggestions.
>
>1. Your pool seem
look at your recovery logs
- Original Message -
From: "Simeon Johnston" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Tuesday, July 09, 2002 2:35 PM
Subject: TSM storage problem
> We just got a TSM server up and running (still in a testing phase but
> it's running).
> I recently tr
Do a "q vol f=d" and check the "Access" field on your disk volumes. They
should all be "ReadWrite". If they are "ReadOnly" then this would explain
why TSM isn't writing to them and telling you the storage pool is full even
though at first glance it appears it isn't.
TSM can change the access auto
Did all that. Everything looks fine. It's all Read/Write.
Here's a silly question. Would this problem occur if, say, the license
check failed?
I just noticed that it says
Server License Compliance: FAILED
Is this the problem? I'd think this would be a different error.
sim
Gerald Wichmann
Ensure that the Copy destination of your Backup Copy Group is spelled
correctly or that it is pointing to a stgpool that exists.
I have seen this error message when doing an Archive to a passthru disk
storage pool that had been removed as part of a cleanup process.
Later
Log looks fine. Able to increase to 100MB. 1,284,608 usable pages (231
used).
Doesn't look like a problem.
sim
Tim Brown wrote:
>look at your recovery logs
>
>
No that just shows you what the highest percentage utilized was since the
last time you did a "reset" command on that statistic.
Gerald Wichmann
Senior Systems Development Engineer
Zantaz, Inc.
925.598.3099 (w)
-Original Message-
From: Simeon Johnston [mailto:[EMAIL PROTECTED]]
Sent: Tue
No, that's not it; it would work regardless.
Doublecheck what domain the node is assigned to and verify it is backing up
with the management class and therefore stgpool you are thinking it should
be.
I'd recommend looking through your activity log on or about where the
problem started and see if
If I'm doing a select from volumes and only want to display the first 25
volumes that meet my criteria, does anyone know how I'd do that?
Thanks,
Julie
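If your server's SQL dialect has no row-limit clause, one workaround (a sketch; the query, pool name, and credentials are placeholders) is to trim the rows client-side after the admin client returns them:

```shell
#!/bin/sh
# Keep only the first N rows of already-produced query output.
first_n_rows() {
    head -n "$1"
}

# Real use would look something like (placeholders throughout):
#   dsmadmc -id=admin -password=secret -dataonly=yes \
#       "select volume_name from volumes where stgpool_name='TAPEPOOL'" \
#       | first_n_rows 25

# Example with canned data:
first_n_rows 2 <<EOF
VOL001
VOL002
VOL003
EOF
```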
Have you defined COPYSTGpool and COPYContinue=No on the primary pool?
If copypool is out of space and COPYC=N the whole write will fail.
Zlatko Krastev
IT Consultant
Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
Sent by:"ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
To
Lars,
I do not know what the idea is behind sending primary tapes off-site (DR
??), but this does not help at all in case a tape gets somehow damaged. You
will learn it the hard way - when you need to restore from the broken
tape.
Migration threshold set to anything between 0 and 100% will
Does the "Max Pct Util" have anything to do with it? Is this a max
allowed setting?
Available Space (MB): 5,120
Assigned Capacity (MB): 5,020
Maximum Extension (MB): 100
Maximum Reduction (MB): 5,016
Page Size (bytes): 4,096
Total Usable Pages: 1,284,608
Ports 1500 and 1501 do *not* need to be bi-directional.
Connection on port 1500 is always initiated from the node to the server.
Connection on port 1501 is always from the server to the node. Your
firewall admin just needs to set them up correctly.
Zlatko Krastev
IT Consultant
Please respond to "ADSM: Di
Hello John,
In the following error message, I can see that you are using 'backup_type'
as 'file' (-t file). To my knowledge, there is no parameter value 'file' for
backup_type. Can you tell me the reason why you are using
backup_type as file?
"BR272E Execution of program '/usr/sap/P05/SYS/