On Tue, 2006-04-11 at 08:26, Casper.Dik at Sun.COM wrote:

> I think it isn't too difficult to bunzip2 on the server and then
> gzip them and see how that makes a difference.

So I did: recompress none.bz2 to none.gz and rewrite i.none
to use gzcat, having first made sure that the netboot
filesystem has gzcat on it.
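
(For the record, the recompression itself is just a pipeline
along these lines, with the gzip level a matter of taste:

    bunzip2 -c none.bz2 | gzip -9 > none.gz

and the change to i.none is simply to point the extraction
at gzcat instead of its bzip2 counterpart.)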

Installing the packages originally took 22m 15s; this
version took 15m 57s.
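(In seconds, that's 1335 down to 957, a saving of 378.)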

So that's almost a 30% speedup. Not as much as I was hoping
for, but pretty good, and a closer look turns up a couple of
further points:

The slowdown during the install is now more obvious: the
split between the first and second halves of the install is
now 6min/10min rather than 10min/12min.

Even at the beginning of the install it's slow. Comparing
the network traffic with that of my previous install, the
data is clearly moving faster when it moves - the spikes are
larger - but it isn't sustained: there are gaps where nothing
moves at all, and the transfer rate is often low.

For example, this is the first 30s of the package
installation as seen from the NFS server:

bge0:      145 k/s out,        57 k/s in
bge0:     1469 k/s out,        55 k/s in
bge0:     4602 k/s out,        77 k/s in
bge0:      362 k/s out,        76 k/s in
bge0:     1890 k/s out,       313 k/s in
bge0:      308 k/s out,       173 k/s in
bge0:      784 k/s out,       160 k/s in
bge0:      128 k/s out,        71 k/s in
bge0:        0 k/s out,         1 k/s in
bge0:        0 k/s out,         1 k/s in
bge0:     5818 k/s out,       124 k/s in
bge0:     2093 k/s out,       157 k/s in
bge0:     1771 k/s out,       113 k/s in
bge0:      939 k/s out,        38 k/s in
bge0:      102 k/s out,        33 k/s in
bge0:      122 k/s out,        50 k/s in
bge0:     4934 k/s out,       131 k/s in
bge0:     3621 k/s out,       180 k/s in
bge0:      354 k/s out,       151 k/s in
bge0:      143 k/s out,       125 k/s in
bge0:      136 k/s out,       122 k/s in
bge0:      116 k/s out,       115 k/s in
bge0:      114 k/s out,       113 k/s in
bge0:      106 k/s out,       106 k/s in
bge0:      115 k/s out,       115 k/s in
bge0:      105 k/s out,       104 k/s in
bge0:       94 k/s out,        94 k/s in
bge0:       98 k/s out,        97 k/s in
bge0:       89 k/s out,        89 k/s in
bge0:       95 k/s out,        95 k/s in
bge0:       78 k/s out,        79 k/s in
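
If anyone wants to watch this sort of thing themselves,
sampling the bge0 byte counters once a second is enough;
a rough sketch (the kstat statistic names are from memory,
so check them first):

  #!/bin/sh
  # Rough per-second in/out throughput for bge0, taken from the
  # interface byte counters.  The statistic names (rbytes64 and
  # obytes64) are what I'd expect for bge - check the output of
  # "kstat -p -m bge -i 0" if they don't match.
  old_out=0
  old_in=0
  while :
  do
      out=`kstat -p -m bge -i 0 -s obytes64 | nawk '{print $2; exit}'`
      in=`kstat -p -m bge -i 0 -s rbytes64 | nawk '{print $2; exit}'`
      if [ "$old_out" != 0 ]; then
          # convert the byte deltas to k/s figures like those above
          dout=`echo $out $old_out | nawk '{printf "%d", ($1-$2)/1024}'`
          din=`echo $in $old_in | nawk '{printf "%d", ($1-$2)/1024}'`
          echo "bge0: $dout k/s out, $din k/s in"
      fi
      old_out=$out
      old_in=$in
      sleep 1
  done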

So, some big spikes, but it's not pushing the
100M ethernet at all. In fact, you only see
the network close to saturation during one or
two of the biggest packages.
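(For scale, 100Mbit ethernet tops out at something like 11-12
Mbytes/s of payload, so even the 5818 k/s spike above is only
around half of what the wire can carry.)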

This tells me that the pkgadd process itself
is inefficient or has some other overhead that
needs to be identified.

-- 
-Peter Tribble
L.I.S., University of Hertfordshire - http://www.herts.ac.uk/
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
