why retries...
Hi List,

Can someone explain this:

11/16/10 23:58:48 Normal File--     14,186,959 /usr/IBMIHS/logs/error_log [Sent]
11/16/10 23:58:48 Normal File--              8 /usr/IBMIHS/logs/httpd.pid [Sent]
11/16/10 23:58:48 Normal File--         73,728 /usr/lib/objrepos/PDiagAtt [Sent]
11/16/10 23:58:48 Normal File--         20,480 /usr/lib/objrepos/PDiagAtt.vc [Sent]
11/16/10 23:58:48 Normal File--          8,192 /usr/lib/objrepos/PDiagDev [Sent]
11/16/10 23:58:48 Normal File--          4,096 /usr/lib/objrepos/PDiagDev.vc [Sent]
11/16/10 23:58:48 Normal File--         49,152 /usr/lib/objrepos/PDiagRes [Sent]
11/16/10 23:58:48 Normal File--          8,192 /usr/lib/objrepos/PDiagRes.vc [Sent]
11/16/10 23:58:48 Normal File--        851,968 /usr/lib/objrepos/PdAt Changed
11/16/10 23:58:49 Retry # 1 Normal File--     14,186,959 /usr/IBMIHS/logs/error_log [Sent]
11/16/10 23:58:49 Retry # 1 Normal File--              8 /usr/IBMIHS/logs/httpd.pid [Sent]
11/16/10 23:58:49 Retry # 1 Normal File--         73,728 /usr/lib/objrepos/PDiagAtt [Sent]
11/16/10 23:58:49 Retry # 1 Normal File--         20,480 /usr/lib/objrepos/PDiagAtt.vc [Sent]
11/16/10 23:58:49 Retry # 1 Normal File--          8,192 /usr/lib/objrepos/PDiagDev [Sent]
11/16/10 23:58:49 Retry # 1 Normal File--          4,096 /usr/lib/objrepos/PDiagDev.vc [Sent]
11/16/10 23:58:49 Retry # 1 Normal File--         49,152 /usr/lib/objrepos/PDiagRes [Sent]
11/16/10 23:58:49 Retry # 1 Normal File--          8,192 /usr/lib/objrepos/PDiagRes.vc [Sent]
11/16/10 23:58:50 Retry # 1 Normal File--        851,968 /usr/lib/objrepos/PdAt [Sent]

Why does TSM retry files that are already '[Sent]'? I would expect only /usr/lib/objrepos/PdAt to be retried, as it apparently has been changed.

-Marcel

--
== Marcel J.E. Mol                 MESA Consulting B.V.
===- ph. +31-(0)6-54724868         P.O. Box 112
===- mar...@mesa.nl                2630 AC Nootdorp
__ www.mesa.nl                     The Netherlands
---U_n_i_x__I_n_t_e_r_n_e_t
Linux user 1148 -- counter.li.org
They couldn't think of a number, so they gave me a name!
    -- Rupert Hine -- www.ruperthine.com
Re: why retries...
The line with the Changed tells the story. Remember that TSM client-server interactions are *transaction* based, not file-by-file. If a constituent element of the transaction changes, the transaction is void and has to be repeated, according to your Changingretries choice. This relates to Aggregate-based storage in the TSM server.

Richard Sims
http://people.bu.edu/rbs
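Richard's pointer to Changingretries corresponds to a client option in dsm.sys (or dsm.opt on Windows). A minimal sketch of the relevant stanza follows; the option names are real TSM client options, but the values are illustrative, not tuning recommendations:

```
* Hypothetical dsm.sys fragment -- values are illustrative only.

* CHANGINGRETRIES: how many times the client retries an operation
* when a file changes during backup (range 0-4, default 4).
* Lowering it reduces re-sends at the cost of possibly skipping
* a changing file.
CHANGINGRETRIES   2

* TXNBYTELIMIT: upper bound, in kilobytes, on the data batched
* into one client-server transaction. Smaller transactions mean
* that one changed file invalidates less already-sent data, at
* some cost in throughput.
TXNBYTELIMIT      25600
```

With a smaller TXNBYTELIMIT, the eight '[Sent]' files in Marcel's log would be less likely to share a transaction with the changing PdAt file, so fewer of them would be resent on retry.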
Re: why retries...
On Wed, Nov 17, 2010 at 08:24:51AM -0500, Richard Sims wrote:
> The line with the Changed tells the story. Remember that TSM client-server
> interactions are *transaction* based, not file-by-file. If a constituent
> element of the transaction changes, the transaction is void and has to be
> repeated, according to your Changingretries choice. This relates to
> Aggregate-based storage in the TSM server.

Yes, I expected that much... But it is just a waste of bandwidth to send the whole aggregate again because maybe one (sometimes small) file in it has changed. I saw a lot of such retries, so I am a bit worried about it. I am sure this could be implemented in a much more optimal way.

-Marcel
Re: why retries...
Heey Marcel,

Long time ago ;-)

The maximum size of the aggregates can be set in the options by MoveSizeThresh and MoveBatchSize. But normally the best choice is a high value, as this improves the backup speed a lot.

Mail or call me directly if you think you have a problem. I guess you don't, but we can take a look at it together.

Regards,
Maurice van 't Loo

2010/11/17 Marcel J.E. Mol mar...@mesa.nl:
> On Wed, Nov 17, 2010 at 08:24:51AM -0500, Richard Sims wrote:
> > The line with the Changed tells the story. Remember that TSM client-server
> > interactions are *transaction* based, not file-by-file. If a constituent
> > element of the transaction changes, the transaction is void and has to be
> > repeated, according to your Changingretries choice. This relates to
> > Aggregate-based storage in the TSM server.
>
> Yes, I expected that much... But it is just a waste of bandwidth to send
> the whole aggregate again because maybe one (sometimes small) file in it
> has changed. I saw a lot of such retries, so I am a bit worried about it.
> I am sure this could be implemented in a much more optimal way.
>
> -Marcel
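The options Maurice names are TSM *server* options, set in dsmserv.opt. A hedged sketch of the relevant lines; the option names are real, but the values shown are illustrative defaults-range examples, not recommendations. Note these options primarily govern server data-movement batches (migration, reclamation), while the files-per-transaction limit for client backups is bounded by the server option TXNGROUPMAX:

```
* Hypothetical dsmserv.opt fragment -- values are illustrative only.

* MOVEBATCHSIZE: maximum number of files grouped into one batch
* during server data movement.
MOVEBATCHSIZE   1000

* MOVESIZETHRESH: threshold, in megabytes, on the amount of data
* moved in one server transaction.
MOVESIZETHRESH  2048

* TXNGROUPMAX: maximum number of files per client-server
* transaction; together with the client's TXNBYTELIMIT it bounds
* how much already-sent data a single changed file can invalidate.
TXNGROUPMAX     256
```

As Maurice says, larger values generally improve backup throughput; smaller values trade some speed for less re-sent data when files change mid-transaction.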