On Fri, Aug 19, 2011 at 6:40 PM, Tom Tromey <tro...@redhat.com> wrote:
>>>>>> "Dimitrios" == Dimitrios Apostolou <ji...@gmx.net> writes:
>
> Richard> Note that sparsely populated hashes come at the cost of increased
> Richard> cache footprint. Not sure what is more important here though, memory
> Richard> access or hash computation.
>
> Tom> I was only approving the change to the dumping.
> Tom> I am undecided about making the hash tables more sparse.
>
> Dimitrios> Since my Core Quad processor has large caches and the i386
> Dimitrios> has small pointer size, the few extra empty buckets impose
> Dimitrios> small overhead, which as it seems is minor in comparison to
> Dimitrios> gains due to less rehashes.
>
> Dimitrios> Maybe that's not true on older or alternate equipment. I'd be
> Dimitrios> very interested to hear about runtime measurements on various
> Dimitrios> equipment, please let me know if you do any.
>
> I think you are the most likely person to do this sort of testing.
> You can use machines on the GCC compile farm for this.
>
> Your patch to change the symbol table's load factor is fine technically.
> I think the argument for putting it in is lacking; what I would like to
> see is either some rationale explaining that the increased memory use is
> not important, or some numbers showing that it still performs well on
> more than a single machine. My reason for wanting this is just that,
> historically, GCC has been very sensitive to increases in memory use.
> Alternatively, comments from more active maintainers indicating that
> they don't care about this would also help your case.
>
> I can't approve or reject the libiberty change, just the libcpp one.
Yes, memory usage is as important as compile time. We still have
testcases that show a vast imbalance between the two. I don't know
whether the symbol table hash is ever the problem, but changing the
generic load factor in libiberty doesn't sound like a good idea -
maybe instead have a way of specifying that factor per hashtable
instance. Or, as usual, improve the hash function to reduce the
collision rate and/or make re-hashing cheaper.

Richard.

> Tom
>
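P.S. To make the per-hashtable load-factor idea concrete, here is a
rough, untested sketch. This is not the actual libiberty API: the
fields max_load_num/max_load_den and the functions below are invented
for illustration, and hashtab.c instead hard-codes its expansion
threshold (the equivalent of 3/4, if I read htab_find_slot_with_hash
correctly):

  #include <stddef.h>

  /* Sketch only: an htab-like table whose maximum load factor is a
     per-instance fraction rather than a library-wide constant.  The
     fraction stays in integer arithmetic, so the existing style of
     threshold check carries over unchanged.  */
  struct htab_sketch
  {
    void **entries;
    size_t size;                /* Number of slots.  */
    size_t n_elements;          /* Live plus deleted entries.  */
    unsigned int max_load_num;  /* Expand once n_elements / size      */
    unsigned int max_load_den;  /* exceeds max_load_num / max_load_den.  */
  };

  /* Integer form of n_elements / size > max_load_num / max_load_den.  */
  static int
  htab_needs_expand (const struct htab_sketch *h)
  {
    return h->n_elements * h->max_load_den > h->size * h->max_load_num;
  }

  /* A cache-conscious client could ask for 9/10 here; the symbol
     table could ask for 1/2 without touching anybody else's tables.  */
  static void
  htab_set_load_factor (struct htab_sketch *h,
                        unsigned int num, unsigned int den)
  {
    h->max_load_num = num;
    h->max_load_den = den;
  }

That way the symbol table gets its sparser buckets while the default,
and thus the memory footprint of every other hashtable user, stays put.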