I committed 867dd2dc87, which means my use case for a fast GUC hash table (quickly setting proconfigs) is now solved.
Andres mentioned that it could still be useful to reduce overhead in a
few other places:

https://postgr.es/m/20231117220830.t6sb7di6h6am4...@awork3.anarazel.de

How should we evaluate GUC hash table performance optimizations? Just
microbenchmarks, or are there end-to-end tests where the costs are
showing up?

(As I said in another email, I think the hash function APIs justify
themselves regardless of improvements to the GUC hash table.)

On Wed, 2023-12-06 at 07:39 +0700, John Naylor wrote:
> > There's already a patch to use simplehash, and the API is a bit
> > cleaner, and there's a minor performance improvement. It seems
> > fairly non-controversial -- should I just proceed with that patch?
>
> I won't object if you want to commit that piece now, but I hesitate
> to call it a performance improvement on its own.
>
> - The runtime measurements I saw reported were well within the noise
>   level.
> - The memory usage starts out better, but with more entries is worse.

I suppose I'll wait until there's a reason to commit it, then.

Regards,
	Jeff Davis