Peter, 

If I'm running compression on all of my clients, why would I turn it on at
the devclass level? The only thing I can see it doing is preventing me from
streaming, because the drive is first going to try to compress anything I
send it. Hence, an overall performance hit.

Regards, Joe

-----Original Message-----
From: Peter Sattler [mailto:[EMAIL PROTECTED]]
Sent: Thursday, January 31, 2002 12:17 PM
To: [EMAIL PROTECTED]
Subject: Reply: Re: low bandwitdth and big files


Hi Joe,

I strongly agree with Matt. I've done the same thing, that is, client-side
compression plus hardware compression on tape - with TSM and with NetWorker -
and I've never seen problems. On the server side all you lose is a bit of tape
capacity. The compression there is hardware (sic!), not software. The well-known
advice applies to software compression - there it is necessary to avoid
double compression because it takes away resources.

So in your case, try client-side compression - you might well gain more than
a doubled data rate.

Cheers Peter






"Wholey, Joseph (TGA\\MLOL)" <[EMAIL PROTECTED]>@VM.MARIST.EDU> on
30.01.2002 22:07:19

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:  "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:    [EMAIL PROTECTED]
cc:
Subject: Re: low bandwitdth and big files

Matt,

Don't be so sure...  my MVS folks are telling me that there is no way TSM
is overriding their microcode-level hardware compression (but I will
double-check with them again, as well as with Tivoli).

With regard to compressing data twice, I disagree - there's something very
wrong with it. That's why it is strongly recommended not to do it (not
just with TSM, but with all data). Some data that goes through multiple
compression routines can "blow up" to twice the size the file started
out as.
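The growth from re-compressing is easy to demonstrate, though with common general-purpose compressors each extra pass on already-compressed (high-entropy) data typically adds only framing overhead rather than doubling the size; a small sketch using Python's zlib (illustrative only, not TSM's compression algorithm):

```python
import os
import zlib

# Already-compressed data is high-entropy, so random bytes stand in for it.
random_data = os.urandom(1_000_000)

once = zlib.compress(random_data)   # first pass: cannot shrink random input
twice = zlib.compress(once)         # second pass: only adds more overhead

print(len(random_data), len(once), len(twice))
# Each pass over incompressible input makes the data slightly larger.
assert len(once) > len(random_data)
assert len(twice) > len(once)
```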

And finally, the reason we turn compression on at the client is to
compress the data before it rides our very slow network.  Otherwise, I wouldn't.


Regards, Joe




-----Original Message-----
From: Matthew Glanville [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 30, 2002 3:42 PM
To: [EMAIL PROTECTED]
Subject: Re: low bandwitdth and big files


From: Matthew Glanville

Joe,

I do not believe there is such a thing as 'server'-level compression.  My
understanding is that the device class compression settings reflect the
hardware-level compression settings; they can override whatever 'default'
the microcode may have set.

We have no problems at all with clients that compress with the TSM client
and then compress again on the tape drive. You lose just a little bit of
space, and yes, the occupancy information does not know that the data has
already been compressed.  There is really nothing wrong with compressing
data more than once: the files get a bit bigger, but it can be worth the
time and bandwidth saved.  Also, don't forget that lots of data is already
stored compressed, in zipped files or compressed images like JPEG and MPEG.

I would not touch the compression settings on the device class - keep
them on at the highest level - and just turn TSM client compression on
or off as needed.  Check and see if that helps your low-bandwidth
backups.

Matthew Glanville
[EMAIL PROTECTED]





"Wholey, Joseph (TGA\\MLOL)" <[EMAIL PROTECTED]>@VM.MARIST.EDU> on
01/30/2002 01:39:10 PM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:  "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED]
cc:
Subject:  Re: low bandwitdth and big files


Matt,

Are you running two, maybe three, compression routines?  I.e., once at the
client, once at the server level (you'll see it if you run q devclass f=d on a
sequential storage device), and once at the hardware level (microcode)?

If so, have you kept an eye on the amount of data in your stgpool?  I say
this because a q occ on a filespace is not going to give you an accurate
indication of the amount of data that node/filespace has dumped into your
stgpool.

Although the IBMers and the manuals say "don't run multiple compression
routines", they've yet to advise on what to do if you have to run client-side
compression due to a slow network.  I can shut off server-side/devclass
compression, but what about hardware compression?  Can you shut off
compression on a 3590 tape device, or is that a microcode issue, i.e. you
can't shut it off?

Regards, Joe
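On the 3590 question: hardware compaction is usually controlled through the device class FORMAT parameter rather than a separate compression switch, so it can be turned off from the TSM server. A hedged sketch of the administrative commands (the device class name is illustrative; FORMAT values as recalled from the 3590 device class documentation):

```
q devclass 3590class f=d                 /* show the current format setting */
update devclass 3590class format=3590B   /* basic recording, no compaction  */
update devclass 3590class format=3590C   /* compacted (hardware) recording  */
```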

-----Original Message-----
From: Matthew Glanville [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, January 30, 2002 12:43 PM
To: [EMAIL PROTECTED]
Subject: Re: low bandwitdth and big files


From: Matthew Glanville

You might want to turn on TSM client-side compression...
In my experience Notes databases compress by at least 50%.
Your backups will most likely drop to around 2 hours, maybe less.

TSM:> update node node_name compress=yes

Give it a try.  For low-bandwidth lines I always prefer to let TSM compress
the data first.
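The node-level setting above forces compression from the server side; the same behavior can also be selected in the client options file. A sketch of a dsm.opt fragment (option names as recalled from the standard backup-archive client documentation; values illustrative):

```
COMPRESSION      YES
COMPRESSALWAYS   NO    * resend uncompressed if an object grows when compressed
```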

Of course, we no longer back up Notes as normal files but use the TDP for
Domino agent (though we still use TSM client compression).

Matthew Glanville
[EMAIL PROTECTED]






Burak Demircan <[EMAIL PROTECTED]>@VM.MARIST.EDU> on
01/30/2002 10:48:46 AM

Please respond to "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>

Sent by:  "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>


To:   [EMAIL PROTECTED]
cc:
Subject:  Re: low bandwitdth and big files


I have one full backup. What could be the solution? The files change only
slightly every day, but the 1.8 GB file is backed up from scratch every day.
Regards,
Burak




[EMAIL PROTECTED]
Sent by: [EMAIL PROTECTED]
30.01.2002 16:39
Please respond to ADSM-L

To:      [EMAIL PROTECTED]
cc:
Subject: Re: low bandwitdth and big files

Depending on circumstances, this might be a candidate for adaptive
differencing, TSM's version of a block-level incremental.  You will still
have to do a complete backup of the big files at least once, though.
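Adaptive differencing (adaptive subfile backup) is configured mainly on the client; a hedged sketch of the relevant dsm.opt options (option names as recalled from the TSM 4.x client documentation; the cache path and size are illustrative, and the server must also permit subfile backup for the node):

```
SUBFILEBACKUP     YES
SUBFILECACHEPATH  c:\tsmcache
SUBFILECACHESIZE  10     * cache size in MB
```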



_____________________________
William Mansfield
Senior Consultant
Solution Technology, Inc





Burak Demircan <burak.demircan@DAIMLERCHRYSLER.COM>
Sent by: "ADSM: Dist Stor Manager" <[EMAIL PROTECTED]>
01/30/2002 12:01 AM
Please respond to "ADSM: Dist Stor Manager"

To:      [EMAIL PROTECTED]
cc:
Subject: low bandwitdth and big files

Hi,
I am trying to back up big files (~1 GB) over a low-bandwidth line.
Some of the clients fail the schedule very frequently. Could you recommend
some options to increase timeouts (any kind of timeout)? I pasted the
schedlog of one of the clients below; it starts at 19:00.
Regards
Burak
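For what it's worth, the timeouts that usually matter for slow sessions are server options; a hedged sketch of a dsmserv.opt fragment (option names as recalled from the TSM server documentation - note COMMTIMEOUT is in seconds and IDLETIMEOUT in minutes; the values here are illustrative):

```
COMMTIMEOUT   3600   /* seconds to wait for expected client data   */
IDLETIMEOUT   60     /* minutes before an idle session is dropped  */
```

On some server levels these can also be changed at runtime with the SETOPT administrative command instead of editing the options file.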



29-01-2002 10:26:47 --- SCHEDULEREC QUERY BEGIN
29-01-2002 10:26:47 --- SCHEDULEREC QUERY END
29-01-2002 10:26:47 Next operation scheduled:
29-01-2002 10:26:47 ------------------------------------------------------------
29-01-2002 10:26:47 Schedule Name:         NTDAILY
29-01-2002 10:26:47 Action:                Incremental
29-01-2002 10:26:47 Objects:
29-01-2002 10:26:47 Options:
29-01-2002 10:26:47 Server Window Start:   19:00:00 on 29-01-2002
29-01-2002 10:26:47 ------------------------------------------------------------
29-01-2002 10:26:47 Waiting to be contacted by the server.
29-01-2002 19:03:59 TCP/IP accepted connection from server.
29-01-2002 19:03:59 Querying server for next scheduled event.
29-01-2002 19:03:59 Node Name: DCS_NOTESSRV1
29-01-2002 19:04:04 Session established with server TSM01.MBT: AIX-RS/6000
29-01-2002 19:04:04   Server Version 4, Release 2, Level 1.9
29-01-2002 19:04:04   Server date/time: 29-01-2002 19:00:10  Last access: 29-01-2002 10:22:53

29-01-2002 19:04:04 --- SCHEDULEREC QUERY BEGIN
29-01-2002 19:04:04 --- SCHEDULEREC QUERY END
29-01-2002 19:04:04 Next operation scheduled:
29-01-2002 19:04:04 ------------------------------------------------------------
29-01-2002 19:04:04 Schedule Name:         NTDAILY
29-01-2002 19:04:04 Action:                Incremental
29-01-2002 19:04:04 Objects:
29-01-2002 19:04:04 Options:
29-01-2002 19:04:04 Server Window Start:   19:00:00 on 29-01-2002
29-01-2002 19:04:04 ------------------------------------------------------------
29-01-2002 19:04:04 Executing scheduled command now.
29-01-2002 19:04:04 --- SCHEDULEREC OBJECT BEGIN NTDAILY 29-01-2002 19:00:00
29-01-2002 19:04:05 Incremental backup of volume '\\DCS_NOTESSRV1\D$'
29-01-2002 19:04:26 ANS1898I ***** Processed     500 files *****
29-01-2002 19:04:30 Directory-->               0 \\dcs_notessrv1\d$\Lotus\Domino [Sent]
29-01-2002 19:04:44 ANS1898I ***** Processed   1 000 files *****
29-01-2002 19:05:05 Normal File-->     3 932 160 \\dcs_notessrv1\d$\Lotus\Domino\Notesdata\adressbooks\drosteadres.nsf [Sent]
29-01-2002 19:06:17 Normal File-->     8 126 464 \\dcs_notessrv1\d$\Lotus\Domino\Notesdata\adressbooks\kolleradres1.nsf [Sent]
29-01-2002 19:07:35 Normal File-->     8 912 896 \\dcs_notessrv1\d$\Lotus\Domino\Notesdata\adressbooks\kolleradres2.nsf [Sent]
29-01-2002 19:09:11 Normal File-->    10 223 616 \\dcs_notessrv1\d$\Lotus\Domino\Notesdata\adressbooks\kolleradres3.nsf [Sent]
29-01-2002 19:10:43 Normal File-->    10 485 760 \\dcs_notessrv1\d$\Lotus\Domino\Notesdata\adressbooks\kolleradres4.nsf [Sent]
29-01-2002 19:11:55 Normal File-->     8 126 464 \\dcs_notessrv1\d$\Lotus\Domino\Notesdata\adressbooks\names.nsf [Sent]
29-01-2002 19:12:32 Normal File-->     4 194 304 \\dcs_notessrv1\d$\Lotus\Domino\Notesdata\adressbooks\oaksoyadres.nsf [Sent]
29-01-2002 19:12:35 Directory-->               0 \\dcs_notessrv1\d$\Lotus\Domino\Notesdata\Mail\Bnemutlu.ft [Sent]
29-01-2002 20:36:19 Normal File-->   497 811 456 \\dcs_notessrv1\d$\Lotus\Domino\Notesdata\Mail\agoktas.nsf [Sent]
29-01-2002 20:36:38 ANS1809E Session is lost; initializing session reopen procedure.
29-01-2002 20:37:04 Normal File--> 1 078 460 416 \\dcs_notessrv1\d$\Lotus\Domino\Notesdata\Mail\bbatumlu.nsf ** Unsuccessful **
29-01-2002 20:37:04 ANS1114I Waiting for mount of offline media.
29-01-2002 20:37:24 ... failed
29-01-2002 23:16:18 Retry # 1  Normal File--> 1 078 460 416 \\dcs_notessrv1\d$\Lotus\Domino\Notesdata\Mail\bbatumlu.nsf [Sent]
29-01-2002 23:16:18 ANS1809E Session is lost; initializing session reopen procedure.
29-01-2002 23:17:06 ... successful
29-01-2002 23:17:07 --- SCHEDULEREC STATUS BEGIN
29-01-2002 23:17:07 Total number of objects inspected:    1 300
29-01-2002 23:17:07 Total number of objects backed up:       11
29-01-2002 23:17:07 Total number of objects updated:          0
29-01-2002 23:17:07 Total number of objects rebound:          0
29-01-2002 23:17:07 Total number of objects deleted:          0
29-01-2002 23:17:07 Total number of objects expired:          0
29-01-2002 23:17:07 Total number of objects failed:           0
29-01-2002 23:17:07 Total number of bytes transferred:     1.52 GB
29-01-2002 23:17:07 Data transfer time:               14 652.27 sec
29-01-2002 23:17:07 Network data transfer rate:        109.37 KB/sec
29-01-2002 23:17:07 Aggregate data transfer rate:      105.55 KB/sec
29-01-2002 23:17:07 Objects compressed by:                    0%
29-01-2002 23:17:07 Elapsed processing time:           04:13:02
29-01-2002 23:17:07 --- SCHEDULEREC STATUS END
29-01-2002 23:17:07 ANS1369W Session Rejected: The session was canceled by the server administrator.
29-01-2002 23:17:07 --- SCHEDULEREC OBJECT END NTDAILY 29-01-2002 19:00:00
29-01-2002 23:17:07 ANS1512E Scheduled event 'NTDAILY' failed.  Return code = 4.
29-01-2002 23:17:07 Sending results for scheduled event 'NTDAILY'.
29-01-2002 23:17:08 Results sent to server for scheduled event 'NTDAILY'.
29-01-2002 23:17:08 ANS1483I Schedule log pruning started.
29-01-2002 23:17:08 Schedule Log Prune: 268 lines processed.  0 lines pruned.
29-01-2002 23:17:08 ANS1484I Schedule log pruning finished successfully