On Mon, Apr 24, 2017 at 12:13 PM, Tomas Vondra <tomas.von...@2ndquadrant.com> wrote:
> On 04/24/2017 08:52 PM, Andres Freund wrote:
>> On 2017-04-24 11:42:12 -0700, Jeff Janes wrote:
>>> The explain analyze of the hash step of a hash join reports something
>>> like this:
>>>
>>>   ->  Hash  (cost=458287.68..458287.68 rows=24995368 width=37) (actual rows=24995353 loops=1)
>>>         Buckets: 33554432  Batches: 1  Memory Usage: 2019630kB
>>>
>>> Should the HashAggregate node also report on Buckets and Memory Usage? I
>>> would have found that useful several times. Is there some reason this is
>>> not wanted, or not possible?
>>
>> I've wanted that too. It's not impossible at all.
>
> Why wouldn't that be possible? We probably can't use exactly the same
> approach as Hash, because hashjoins use a custom hash table while hashagg
> uses dynahash IIRC. But why couldn't we measure the amount of memory by
> looking at the memory context, for example?

He said "not impossible", meaning it is possible.

I've added it to the wiki Todo page. (Hopefully that has not doomed it to
be forgotten about.)

Cheers,

Jeff
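For comparison, here is a sketch of what such instrumentation might look like on a HashAggregate node, mirroring the format the Hash node already uses. The query, table name, and all numbers below are invented for illustration; this is not output from any existing PostgreSQL release.

```sql
-- Hypothetical example: a grouping query that builds a large hash table.
EXPLAIN (ANALYZE) SELECT customer_id, count(*)
FROM orders
GROUP BY customer_id;

-- A HashAggregate node reporting memory the way Hash does might read
-- (all figures invented):
--
--   HashAggregate  (cost=... rows=... width=...) (actual rows=... loops=1)
--     Group Key: customer_id
--     Buckets: 1048576  Memory Usage: 81920kB
```

Measuring via the node's memory context, as suggested above, would capture the total allocated for the aggregate's hash table and transition values rather than requiring hooks inside dynahash itself.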