> I disagree.  The trouble is that filing system calls, whether global
> or local, provide a lot more functionality than reading in zip files.
> For example, a filing system has to be able to cope with multiple
> processes reading and writing different files in a directory
> simultaneously.  My recollection with MLj is that while you save quite
> a lot of time by putting all files on local disk, you don't save as
> much as you do by writing archive files.

You're right: the saving from using archived/compressed files across an NFS
connection would be pretty big.  But on a local disk it would be negligible
(I'm watching some compiles go by with about 95% CPU usage, which isn't
great, but I think most of the drop is due to page faults).

The point I was making is that GHC takes a great deal of time to parse all
the interface files on startup.  This cost could be eliminated, and the size
of the .hi files reduced at the same time, by using a binary format.  That
would be a win on both NFS-based and local-disk installations.
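To make the idea concrete, here's a rough sketch of what binary
(de)serialisation of interface files could look like, using the "binary"
package; the Iface type is entirely made up, standing in for whatever the
real .hi contents are:

{-# LANGUAGE DeriveGeneric #-}

import Data.Binary (Binary, encodeFile, decodeFile)
import GHC.Generics (Generic)

-- Hypothetical stand-in for the contents of a .hi file.
data Iface = Iface
  { ifaceModule  :: String    -- module name
  , ifaceExports :: [String]  -- exported names
  , ifaceDeps    :: [String]  -- imported modules
  } deriving (Show, Generic)

-- Generically derived Binary instance: compact on disk, no textual
-- parsing needed to read it back.
instance Binary Iface

-- The compiler writes the interface once at compile time...
writeIface :: FilePath -> Iface -> IO ()
writeIface = encodeFile

-- ...and later reads it straight back into the in-memory structure.
readIface :: FilePath -> IO Iface
readIface = decodeFile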

None of this is to discourage the use of compression (which is clearly an
easier route than the binary format I was suggesting), and some kind of
generic interface to zip files would be a bonus on top of that.
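For illustration, a read-only interface over a zip archive might look
something like the following.  This is sketched against the zip-archive
package, which I'm assuming here purely for the example; the idea is just
that one archive read replaces one filesystem open per interface file:

import qualified Data.ByteString.Lazy as BL
import Codec.Archive.Zip (toArchive, findEntryByPath, fromEntry)

-- Fetch one member (e.g. a single .hi file) out of an archive.
-- Returns Nothing if the path isn't present in the archive.
readFromArchive :: FilePath -> FilePath -> IO (Maybe BL.ByteString)
readFromArchive archive member = do
  bytes <- BL.readFile archive
  return (fromEntry <$> findEntryByPath member (toArchive bytes))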

Cheers,
        Simon
