Hi Jerry, have they tried making more "volumes" to hold the data and then
jacking up the resourceutilization setting to allow multiple processes to
tackle the task?
I had something like that on a unix box... half million files in a file
system that was only a few GB's... took hours to back up. I seem to rec
OK backups are kept by versions & versions alone !
if you keep 30 versions and the data changes daily, on the 31st day the 1st
copy goes away !
Copy pools only copy what is in the primary pool, when it expires from the
primary pool, so goes the copypool copy.
If people really want data for a year,
little harder... just the
way we do it.
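
The version-based expiry described above comes straight from the backup copy
group settings; a minimal sketch, with every name (domain, policy set, pool)
invented for illustration:

```
/* invented names; the point is VEREXISTS: keep 30 versions and, with  */
/* daily changes, day 31's backup pushes the oldest version out        */
def copygroup mydomain mypolicy standard type=backup destination=diskpool1 -
    verexists=30 verdeleted=1 retextra=nolimit retonly=60
activate policyset mydomain mypolicy
```

And since copy pools only mirror the primary pool, the copy pool entry
expires right along with the primary one.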
Dwight
-Original Message-
From: Richard L. Rhodes [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, March 20, 2001 7:23 AM
To: Cook, Dwight E; [EMAIL PROTECTED]
Subject: Re: SAP and 3494 performance.
Thanks for great info!
q) What kind of a client system are
Another thing you might wish to do is set your virtual volume size to either
something really big (like 1 TB) or as big as a tape on your remote
server... (I've used 10 GB and 1024 GB and sent the data compressed) and
actually I tried to keep reclamation from running on virtual volumes...
Now some
alter to go to diskpool first
use tsm client compression
don't multiplex with tdp
run about 10-20 concurrent client sessions (based on processor ability)
you should see about 4/1 compression
you should see about 11.6 MB/sec data transfer (if things are really OK &
processor can keep up)
you should
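
Those client-side settings would live in the client options file; a sketch of
a dsm.sys stanza (server name and values are illustrative, not a
recommendation):

```
* dsm.sys stanza on the AIX client -- illustrative values
SErvername  TSMPROD
  COMMmethod           tcpip
  TCPServeraddress     tsmprod.example.com
  COMPRESSIon          yes
  RESOURceutilization  10
```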
Don't you have jcl to perform batch TSO session(s) ?
Just use that and have it issue a call to your dsmadmc
argh... we moved off all our MVS adsm 3 or so years ago BUT I seem to recall
I was doing that...
Dwight
-Original Message-
From: Lee, Gary D. [mailto:[EMAIL PROTECTED]]
Sent: Tuesd
And note !
My current environment, if a tape is FF00 in the atl and is listed as a tsm
private tape... an "audit libr" within tsm WILL NOT update the FF00 to the
proper private status for that TSM server.
I believe in the past it did, and might again in the future but currently
my environments do
Do you mean a 3494 ATL with 3590 tape drives ?
what you can do is
on ATL perform "full inventory update" which makes the atl scan all
volumes and update its internal db
you can then (from the attached aix host) do a "mtlib -l/dev/lmcpx
-qI > somefile" to get a list of internal volu
In general...
data tapes will be associated with the device class within an atl
in other words you may checkout all libvols, delete all drives, & delete the
library and all is still well !
Then if you define a new library, define your drives back to that new
library, (update your deviceclass for t
Oh might want to double check that statement...
last I checked it will not use to determine if the actual "archive" will
occur but it WILL (or used to) bind that file to any management class
specified...
example (last time I checked)
if you have the line in your inclexcl.list of
exclud
I might add, now and again I'd label lots of tapes 400 or more... naturally
I want to get it done as fast as possible...
fastest way is to split them up across 4 "dsmlabel" jobs, at that rate the
gripper never rests
some of those times I'd classify as a pretty good stress test and the atl's
here h
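
A sketch of that four-way split, assuming the volser list is one volser per
line (file and device names invented; the single-job form of the dsmlabel
call appears in a later post):

```
# 400 volsers -> four files of 100, one dsmlabel job per tape drive
split -l 100 volsers.txt part.
i=1
for f in part.*; do
  nohup dsmlabel -library=/dev/lmcp0 -drive=/dev/rmt$i -keep < "$f" &
  i=$((i + 1))
done
```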
been running with six 3494's for 5+ years...
Love'em !
had them spread out across 4 data centers in 3 states...
as environments have changed I've been able to pull resources (cabinets &
drives) from one ATL and put them on another
very flexible
the six I deal with were bought new 5+ years ago and
mtlib -l/dev/lmcp# -qL | more
that will have all the library slot/tape info you need...(also cleaner
cycles)
the tape use "thing" has been a thorn in my rear for a long time now (going
on 6 years)
just use accounting records, take all the inbound traffic (arch, bkup, hsm)
add them up and divide b
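
A toy version of that accounting-record arithmetic; real dsmaccnt.log field
positions vary by server level, so the node (field 2) and bytes (field 4)
columns here only match this made-up sample:

```shell
# Sum inbound traffic per node from comma-delimited accounting records
cat > accnt_sample.csv <<'EOF'
1,NODEA,BACKUP,1024
2,NODEB,ARCHIVE,2048
3,NODEA,BACKUP,512
EOF
awk -F, '{ sum[$2] += $4 } END { for (n in sum) print n, sum[n] }' \
    accnt_sample.csv | sort
```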
what you will probably notice is that when you tried to put new labels on
already labeled tapes and the system figured out something was wrong 'cause
you didn't tell it to expect a label already there...
it ejected the tapes from the library (took them out of libvols, something
along those lines)
All this is why (if you do charge back) you simply charge the client by what
the "auditocc" shows...
Then if "THEY" take their processor time to compress their data they get a
little bit of a charge break
but if "THEY" don't take the time to compress their data, they end up paying
more !
Or should
It all boils down to where your bottle neck is...
If your client(s) have huge amounts of data and big enough engines to
compress it down and it compresses nicely (such as oracle DB's) then you
will find that you can compress the data AND send it in less time than you
can send the data uncompressed
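
A back-of-envelope check of that trade-off, using the 4:1 compression and
11.6 MB/sec figures from the earlier post and assuming the pipe is the only
limit:

```shell
# 2 TB of client data, 4:1 compression, 11.6 MB/s sustained transfer
# (integer math, so the hours are rough)
raw_mb=$((2 * 1024 * 1024))
unc_h=$((raw_mb * 10 / 116 / 3600))       # hours uncompressed
cmp_h=$((raw_mb / 4 * 10 / 116 / 3600))   # hours at 4:1
echo "uncompressed: ${unc_h} h, compressed: ${cmp_h} h"
```

In practice the client CPU becomes the new limit, which is why the real-world
run quoted later in this thread took longer than the ideal number here.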
But I don't think they have a standard start or stop code/char...
We ran into the same problem about a year or two ago.
Good luck in finding something that will read it 'cause we couldn't
Dwight
-Original Message-
From: Richard Sims [mailto:[EMAIL PROTECTED]]
Sent: Thursday, March 08, 200
If your management classes don't allow for "holding on" to data for the
duration you wish to be able to perform a "point in time restore" for,
then... you are out of luck :-(((
I've attached an excel spread sheet that shows the results of management
class settings on backed up data over time and
I've been off the list for a while so this might have already been asked...
Has there been any improvement in migration process limits in versions past
3.7.2.0 ? ? ?
In other words, if there is only one client's data in a pool, will there
still only be a single migration process ? ? ?
I'm back
To see information you need to be as specific as at least the file system...
So /sdvscratch2 is a file system (I would bet)
Dwight
-Original Message-
From: Joel Fuhrman [mailto:[EMAIL PROTECTED]]
Sent: Monday, December 04, 2000 3:04 PM
To: [EMAIL PROTECTED]
Subject: query archive /*
TSM
has anyone heard of, or experienced any problems with consecutive diskpools,
each having cache turned on ? ? ?
that is
diskpool1 (cached) --next--> diskpool2 (cached) --next--> tapepool
I think something is getting dorked up with the pointers to my cached files
in the first disk
ps -ef | grep sched
then look for the process number
kill dsmc_sched_process_number
I set up in inittab
root>lsitab -a | grep sched
adsm:2:respawn:/usr/tivoli/tsm/client/ba/bin/dsmc sched >/dev/null 2>&1
now to start it from the aix prompt
nohup /usr/tivoli/tsm/client/ba/bin/dsmc sched >/dev/nul
is this on a server or client ?
could it be linked with accounting data ?
is accounting on ?
do you have space for the file ?
just off the top of my head
later,
Dwight
-Original Message-
From: Rosa Leung/Toronto/IBM [mailto:[EMAIL PROTECTED]]
Sent: Monday, October 30, 2000 2:02 PM
To: [E
There will only be a single copy sent if your "changingretries" is 0
now if that copy is kept depends on the serialization of the copygroup...
and shrdyn is the one that says "yes, keep it"
Dwight
-Original Message-
From: Rick Marshall [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October
Yep, working as designed... this is why you might want to alter your
changingretries or start excluding the changing files from backups (since
they probably in the end aren't getting backed up anyway)
Dwight
-Original Message-
From: Zoltan Forray/AC/VCU [mailto:[EMAIL PROTECTED]]
Sent: We
define a management class, make it not the default, set the archive copy
group to point towards a new storage pool (and I wouldn't even put in a
backup copy group) now you could make this new storage pool, disk, then a
next to tape OR straight to tape... just depends on how many concurrent
client
There is the manual Tivoli Storage Manager Trace Facility Guide (mine is
V3R3 SH26-4104-00)
you can find it out on the website with other manuals, try
http://www.tivoli.com/support/storage_mgr/pubs/admanual.htm
it has all the good poop in it.
Dwight
-Original Message-
From: Selva
OK, I basically keep a diskpool big enough to contain an entire night's
backups.
Now this is compressed...
Things to remember:
Clients, even though using compression, allocate enough disk space
in the diskpool to hold the file uncompressed, once the transfer is
completed, unused space is f
I love this one...
I don't guess anyone has seen this before.
Dwight
Storage Management Server for Solaris 2.6/7 - Version 3, Release 7, Level
3.0
Last License Audit: 10/19/00 22:29:24
Registered Client Nodes: 1
Right, but the new drives can still read the old tapes, and if you drain the
data off the old tapes they can be written to by the E1A's... but if you
have all new tapes to fully populate your new atl, more power to you !
You don't really have to mess with setting the tapes to an overflow
location,
I see the following devices are supported by tsm 3.7 but is anyone using a
L280 ?
will the drivers work with a L280 ? ? ?
any thoughts ?
Dwight
SUN StorEdge L1000 1.05 TB Advanced
SUN StorEdge L1800 960 GB Advanced
SUN StorEdge L11000 11.4 TB Advanced
SUN StorEdge L3500 11.4 TB Advanced
SUN
I'll just touch #1
just set up a management class that points to an isolated storage pool for
the archives...
call it offsitelongtermarchives or something like that...
when people archive for long periods of time make them point to this via the
archive -archmc=blah /filesystem/dirpath/fil
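
Spelled out, the client invocation would look something like this (the
management class name and path are invented):

```
dsmc archive -archmc=offsitelongtermarchives -subdir=yes \
     -description="long term offsite" "/data/projectx/*"
```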
drop the - in front of the todate... that is a client type format... not an
admin session format !
just
del volhist tod=-30 t=dbb
or t=all or what ever type you want to get rid of
Dwight
-Original Message-
From: Shekhar Dhotre [mailto:[EMAIL PROTECTED]]
Se
Ok, how many people in their typing don't hit the "q" key hard enough and
rather than a "q pr" they issue a "pr" and the #@$%^ PREPARE initiates ! !
!
sorry about that, I'll go take my thorazine now...
Dwight
Do you have collocation "yes" or "file" ? ? ?
if yes you will have as many filling tapes as clients
if file you will have as many filling tapes as total client file space count
here is an example from one of my tsm servers...
CHIDSM01 Tue Oct 10 06:05:43 CDT 2000 You have 281 SCRATCH and 518 PRIVA
and you checked that the disks are also readwrite ! ?
I like a general
q vol acc=reado,destroyed,unavail
to let me know all is well, or not
OK, sure all client code levels work with all server code levels BUT...
the only time I've seen weird things like that happen is with server to
serv
double quotes around the whole name
"\sys-4:home\shared\release 2.0\user"
-Original Message-
From: Selva, Perpetua [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, October 04, 2000 12:51 PM
To: [EMAIL PROTECTED]
Subject: Restoring directory with spaces in between
Can someone let me know how
You are correct !
look at it as a safety feature... if you could extend it, that would imply
being able to alter it in general which could lead to early deletion with a
reduction in duration.
Sooo you have to retrieve it and re-archive it with the retention they
really wanted to begin with :-(
lat
See the attached Excel spread sheet for info on the "show version" along
with other nifty commands
later,
Dwight
-Original Message-
From: Ricardo Negrete [mailto:[EMAIL PROTECTED]]
Sent: Tuesday, October 03, 2000 10:00 AM
To: [EMAIL PROTECTED]
Subject: Show Version
Anyone help me with
If the data is not available from a primary storage pool volume and the copy
pool volume with the data on it is available...
without any intervention, the data will be retrieved from the copy pool
volume (if/when the client requests it)
to get the data back into the primary storage pool, you woul
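
Getting the data back into the primary pool is typically a RESTORE STGPOOL
job; a hedged sketch with invented pool names, preview first, then the real
run:

```
restore stgpool backuppool copystgpool=offsitecopy preview=yes
restore stgpool backuppool copystgpool=offsitecopy
```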
With the object of an archive you have to be at least as specific as a "file
system" and by that I mean if you do a "q file" you will see what tsm refers
to as "file systems" on your client... to get a complete archive you would
need to list each of those ~objects~ (as the filespec) with a -subdir
make sure accounting is set on and then just gather the statistics from your
accounting records...
later ,
Dwight
-Original Message-
From: Selva, Perpetua [mailto:[EMAIL PROTECTED]]
Sent: Monday, September 25, 2000 2:43 PM
To: [EMAIL PROTECTED]
Subject: Daily backup
Does someone know of
maybe
tsm: TDCDSM01> select cast((current_timestamp-last_backup_date) as
varchar(10)) from adsm.db
Unnamed[1]
--
0 02:25:42
ANR2900W The character string '0 02:25:42.00' was truncated during
assignment.
where the 1st zero is day, then the 02:25:42 is hour difference...
'bout the only quirk I can see is you have your "TO" file in quotes...
by the exact syntax specifications, double quotes around the "TO" file isn't
allowed.
uh and I believe the main reason for quotes around the "FROM" file is if
you use wildcards...
try without the quotes on the "TO" file jus
On your second note, yes I've performed tests on a smaller scale that show
your retain only would be kept for that long...
BUT what we do here is anything required to be kept 5 years or longer we
force them to archive it into special management class that we put into a
special storage pool to ensu
first, to look at a file under unix just use the "more" command
more filename
If there is a real weird object in the include/exclude list that would be
impacting these files you would be more likely to see something about and
invalid management class, default being used...
This indicates to me th
s.
>
> Overnight would you
>
> a) Run an incremental image backup or
>
> b) Run an incremental image backup and an incremental backup
>
>
>
>
>
>
>
>
> > -Original Message-
> > From: Cook, Dwight E [SMTP:[EMAIL PROTECTED]]
> > Sent: 25 August 2000 14:01
>
do a
show deadlock
just to see if one exists...
may also do a
show locks
just to see the volume of locks that exist... multiple "show locks" can
indicate if things are getting better or worse and how fast things are
changing...
some processes that are waiting on resources won't can
well...
v Improve the speed with which TSM restores file systems containing many
small files.
v Conserve resource on the server during backups since only one entry is
required for the image.
v Provide a point-in-time picture of your file system which may be useful if
your enterprise needs to recal
Uh so did you have things set to require a backup prior to a file being
migrated ?
If so in your pinch you could probably delete the stubs and restore the file
(or restore with the replace option)
Dwight
> --
> From: John Miley[SMTP:[EMAIL PROTECTED]]
> Reply To: ADSM: Di
that is one valid way but you run into a lot of network traffic, sending
that across the net again...
you could do an export of the node(s) active data only, which would give you
a snapshot of the nodes but based on how big they are you might need to
ensure they go to a storage pool with collocati
Allocate a new/additional recovery log file...
Find some file space and lay down a file with the dsmfmt command... like
Unix> dsmfmt -m -log /opt/IBMadsm-C/newlog 16
then do a
Unix> DSMSERV EXTEND LOG /opt/IBMadsm-C/newlog 16
After that you should be good to go, just start the dsmserv again...
NOPE ! (if you are saying you want to convert backed up data into archived
data)
they have to restore the files, archive them, then remove them again...
(hey, it wasn't you that removed them prematurely !)
and if they complain and say "well, tsm isn't very flexible if it can't do
that"
point out
that is a little quirk/bug...I think it might only be with raw logical
volumes, we ran into it a while back (that version sounds maybe about right)
Oh, just checked, I've still got one of these volumes and still can't get
rid of it at 3.7.2.0
yet another reason NOT to use raw logical volumes (have
Wednesday, August 23, 2000 7:35 AM
> To: [EMAIL PROTECTED]
> Subject: Re: Database sizing question
> Sensitivity: Private
>
> Are there any recommended DB filesizes ? For a 4gig Database would you
> use
> 4*1gb on different spindles ? 2*2gb files or 1 4.5gb file ?
Ah I just use an entire 4.5 GB or 9.2 GB drive for each DB file and
just add another one once I run low (hit about 85-90%util)
I've seen cache problems (low hit %) when you have an abundance of space
with nothing in it.
Dwight
> --
> From: Coles, Peter[SMTP:[EMAIL PROTECTED]]
The HSM client (if you have HSM licensed on your tsm server) will take files
from your unix client and replace the actual file with a stub file to reduce
the amount of space occupied by files on your client box. A space
management tool.
Dwight
that was the short-n-sweet condensed version
> -
Ok, maybe my environments are odd balls but I don't see where you are
allowed to use the "vollist=file:x" option with checkout libvol.
and I never do it that way... I just do a bunch of normal checkouts in the
form of a macro
Dwight
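
The macro approach looks something like this (library name and volsers
invented):

```
/* checkout.mac -- one normal checkout per volser */
checkout libvolume my3494 A00001 checklabel=no remove=yes
checkout libvolume my3494 A00002 checklabel=no remove=yes
```

then from an admin session just issue: macro checkout.mac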
> --
> From: Shekhar Dhotre[SMTP:[EMAIL PR
> www.storserver.com
>
>
> -Original Message-----
> From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED]]On Behalf Of
> Cook, Dwight E
> Sent: Wednesday, August 16, 2000 9:40 AM
> To: [EMAIL PROTECTED]
> Subject: Re: multiple label libvols?
>
>
> just brea
just break down the tape volser list into 5 different files then perform from
aix
nohup dsmlabel -library=/dev/lmcp# -drive=/dev/rmt1 -keep --
> From: Snyder.John[SMTP:[EMAIL PROTECTED]]
> Reply To: ADSM: Dist Stor Manager
> Sent: Wednesday, August 16, 2000 10:11 AM
>
full : end of volume has been reached... %util is low because files have
expired... must run reclamation to get back
pending : check your "reusedelay" setting... tape has gone empty but
reusedelay is preventing it from rolling scratch again (for x number of
days)
Dwight
Uh try admin guide fo
use something along the lines of this
select * from adsm.actlog where
(cast((current_timestamp-date_time)day as decimal(18,0))=0)
> --
> From: Winfried Heilmann[SMTP:[EMAIL PROTECTED]]
> Reply To: ADSM: Dist Stor Manager
> Sent: Wednesday, August 16, 2000 9:16
tsm server Server Version 3, Release 7, Level 2.0
Client Version: Version 3, Release 1, Level 0.8 Platform: SUN SOLARIS
Client OS Level: 5.6
well, I see what part of my problem might be now...
but anyway if you query the file systems of the client from an admin session
on the server you see
And yes, sometimes the values might seem WAY out of line but here is what
I've had to set mine at to get things to work. But remember the idletimeout
has more to do with the brain-child client node rather than anything to do
with the tsm server. I've had to set the 240 minute idletimeout because
Well, technically they are in sync 'cause the file doesn't exist in either
pool !
Granted the damaged file is a problem...
Being a .dbf I'm guessing that was an archived file... if it were a "backed
up file" TSM, now that the file has been identified as damaged, would back
it up during the next
Ok,
1st) is it within a "domain"
2nd) is it valid when run through include/exclude processing
3rd) a whole bunch of things... ;-)
(I seem to remember way back in V2 that multiple things were
checked... just about along the lines of what virus checkers do... size,
checksum, etc...)
I'd che
Long, check the availability of your db & log files from your operating
system...
looks like you probably had a volume group go unavailable...
also you might have to add to and extend your log to get back up...
DSMSERV EXTEND LOG (Emergency Log Extension)
Use this command to extend the size of the
uhmmm need a "backup db" in there somewhere...
> --
> From: Daniel Swan/TM[SMTP:[EMAIL PROTECTED]]
> Reply To: ADSM: Dist Stor Manager
> Sent: Tuesday, August 08, 2000 4:37 PM
> To: [EMAIL PROTECTED]
> Subject: Help with scheduling ADSM tasks into blocks of tim
for an archive this shows up when you query the archived file
for backups, it all depends on how many versions you keep, how often the
file changes, and how little the file changes... there is some of the
"stuff" logged in the server's activity log but I tend to avoid even going
there and it kind'
check for volumes that might be unavailable...
q vol acc=unavail
also it might be that the volume is currently being used as output or input
elsewhere... say in a migration or reclamation...
if you look in the server act log you'll see the volume it was needing and
just check on its status
I got kind'a miffed 'cause I had a 3GB log & wanted to add another logfile
(a 4 GB) so I could delete the 3GB
What I had to do was allocate a 1 GB log, add it, delete my 3GB log, add my
4 GB log, then delete my 1GB log
AAA
'caus
You may also just update the node and change its domain...
upd node x domain=y
> --
> From: Shekhar Dhotre[SMTP:[EMAIL PROTECTED]]
> Reply To: ADSM: Dist Stor Manager
> Sent: Thursday, August 03, 2000 8:30 AM
> To: [EMAIL PROTECTED]
> Subject: De
Are the devices available at the library? that is under service pull down
menues
Does a "q drive" show them ?
Do you see them in a "show libr" list ?
maybe something like...
Library TTC3494LIB2 (type 349X):
refs=0, online=1, borrows=0, create=0, update=0
driveListBusy=0
libvol_lock_
and if you are going for max capacity (large number of physical devices) in
order to NOT run out of slots in your processor for device controller cards,
you probably want to be looking at SSA disks ;-)
Dwight
> --
> From: Roger C Cook[SMTP:[EMAIL PROTECTED]]
> Reply To: ADSM:
Well, if you have a nice fresh RS/6000 with nothing but the rootvg and a
bunch of disks & tape drives you do a fresh install of TSM server code
You then use the "database restore tapes" to perform a
dsmserv restore db
now you have your TSM environment up and running but all your primary
s
ight?? (Find the archiving somewhat
> tricky to set up, but I'm a newbie :-)) )
>
> Or am totaly out in the blue here??
>
> Henrik Hansson
> Nomafa AB
> Web; www.nomafa.com
> Mail; [EMAIL PROTECTED]
>
>
>
> "Cook, Dwight
>
Now on a different subnet I'm guessing...
a different interface to the same subnet won't get you much 'cause TSM can
flood the network !
so say you have a client with interfaces
1.2.3.6
and
1.2.4.6
and a tsm server with interface 1.2.4.8
just make sure your tcp/ip is configured to route .4 traffic
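
On the client side that only takes pointing dsm.sys at the 1.2.4.8 interface
and letting the OS routing table pick the matching local interface (stanza
name invented):

```
SErvername  TSMPROD
  COMMmethod        tcpip
  TCPServeraddress  1.2.4.8
```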
I'm doing that...
Lots of different situations...
If they say the data is REALLY REALLY REALLY important I leave it in the tsm
server, have it in an isolated storage pool with a copypool copy of it AND I
export the node, twice, and give the end user one of the copies.
If it is REALLY important (li
I'm sure TSM 4.1 is a "for a price" upgrade and you'll probably need the CD
to do the base code install...
I haven't looked into 4.1 yet though...
Dwight
> --
> From: Shekhar Dhotre[SMTP:[EMAIL PROTECTED]]
> Reply To: ADSM: Dist Stor Manager
> Sent: Tuesday, August 0
Sure you can...!
just put them in different domains, point the mgmt class (copy groups) to
different tape pools and boom...done
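
A hedged sketch of that split, with every name invented:

```
def domain windomain
def policyset windomain winpolicy
def mgmtclass windomain winpolicy standard
def copygroup windomain winpolicy standard type=backup destination=wintapepool
assign defmgmtclass windomain winpolicy standard
activate policyset windomain winpolicy
upd node someclient domain=windomain
```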
full backups ? are you talking in the old terms of "full backups" ? that
doesn't apply here in the new world...
every time an "incremental" runs it builds a "full" comple
Now from what I remember (and I haven't double checked the 3.7 docs) is that
if you don't own a file, you CAN NOT back it up because backups
(incremental/selective) actually causes data to go away (rolling off
inactive versions).
I'm surprised that 3.2 let you... when I tested things, you could
Just a plain old "QUERY ARCHIVE blah" shows that...
tsm> q archive /home/zdec23/*
   Size  Archive Date - Time    File - Expires on - Description
   ----  -------------------    -------------------------------
  1,536  07/14/00 10:49:07      /home/zdec23/bin  07/14/01
What do all the other sessions show during the backups...
"MediaWait" ? ? ?
if so, if you do a "q sess f=d" what volser is listed as being waited on? ?
? the one currently mounted ?
now you are backing up this application by shutting down the DB and letting
incremental processing back it up ? ?
Well, in a "def stg po=co" the default collocation is "no" but it kinda
sounds like you might be seeing collocation=yes in your copy pool...
how many nodes in the primary pool ?
I take it there are no anre messages around the time of the backup stg
I'm also guessing the tapes are still "r
OUCH !... that is really what the include/exclude list is all about... (and
domain statements and the dsm.sys and/or dsm.opt file(s)) to set what you
want to process, then any backup is as easy as
inc
OK, besides that... you have to remember to backup a file you HAVE TO BE THE
OWNER OF THE
I have to agree here.
I try to make it a habit to focus on good and not dwell on bad...
with that said, let me say I really like all my experiences with ADSM/TSM on
RS/6000's running AIX with 3494's, 3590-B1A's, & SSA dasd.
And as pointed out below, ADSM & AIX are both IBM... (yes, tsm is
technic
Ooops... I would beg to differ with the "expert" !
Now as load fluctuates this will shift from being important to not so
important but...
TSM really works well at spreading its load across all defined logical
devices.
If you have each 8 GB physical device split into 4 - 2GB logical devices
then sa
Well, I'm guessing you are using actual files for tsm volumes...
just define additional space under a file system under aix within the shark
box...
do a
dsmfmt -m -data /somepath/somefile <size>  (where <size> is the size of
the file in MB)
to see the format of dsmfmt just issue "dsmfmt" from
We did that with 3.1 & still do it with TSM 3.7
2+ TB of client db file space
compresses down to 560-ish GB transmitted data
currently runs in about 22 hours
Large E1 client with more CPU's than I'd care to count (runs at about
90+% during backups)
S70 TSM server with a couple CPU's (runs at a
did you do the dsmserv with the option to "install" all the newly defined
volumes, listing the quantity of logs and their names and the quantity of
dbvols and their names???
I believe it is
dsmserv install 1 logname 1 dbfilename
double check the manual... you have to do this to init the vo
Yep, they've screwed up... at least in my 3.7.2.0 aix server code running
with a 3494.
I've set my mountretention to 0 or max of 1
Dwight
> --
> From: Rupp Thomas (Illwerke)[SMTP:[EMAIL PROTECTED]]
> Reply To: ADSM: Dist Stor Manager
> Sent: Tuesday, July 25, 2000 2:12
Well, if you look in your listing below you'll see a
> Delay Period for Volume Reuse 5
this indicates that those volumes will remain empty for 5 days before
rolling back scratch...
Dwight
> --
> From: Shekhar Dhotre[SMTP:[EMAIL PROTECTED]]
> Reply To: ADSM: Dist Stor Manager
1st try a
q stg tapepool f=d
and look for the reuse delay... see if it is greater than zero, that would
explain the empty state without rolling back scratch
Ok, you checked in the libvols to the library using device stk9710 as
scratch, correct ?
For each storage pool (such as tapepool) the
does
mtlib -l/dev/lmcp# -qV -VXXX###
show the volser ?
if so do a
mtlib -l/dev/lmcp# -C -s[sourcecategory] -t[targetcategory]
-VXXX###
use the source category as is listed in the results of the query volume
and use a target category of "purge" which is (help me out here people, I
send mail to "[EMAIL PROTECTED]"
make the text
unsubscribe adsm-l
that will do it
> --
> From: Sergio Leyva Diosdado[SMTP:[EMAIL PROTECTED]]
> Reply To: ADSM: Dist Stor Manager
> Sent: Monday, July 24, 2000 11:08 AM
> To: [EMAIL PROTECTED]
> Subject: unsubscribe
Joyce, you're not anywhere different than everyone else at one time or
another...
If you wish you may set up a new domain... if it makes sense to group all
those differently across a domain.
I've set my domains by operating systems and/or major applications (so I
have stuff like UNIX domains but a