There was a question on dba.stackexchange recently:

   http://dba.stackexchange.com/a/162117/1822

That question (and the answer) deals with the performance difference of a query 
caused by the _declared_ length of a VARCHAR column in SQL Server (everything 
else being equal - especially the actual data length).

For the curious: it does make a (big) difference in performance whether you 
declare varchar(100) or varchar(2000) in SQL Server - something that really 
surprised me.

The difference seems to be caused by SQL Server's optimizer, which uses the 
_declared_ length of a column to estimate the memory needed for the 
aggregation (or sorting).
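
Roughly the scenario I have in mind (table and column names are made up, this 
is just a sketch of where the declared length comes into play):

   -- Two tables with identical data, differing only in the declared
   -- length of the varchar column.
   CREATE TABLE t_narrow (val varchar(100));
   CREATE TABLE t_wide   (val varchar(2000));

   -- According to the answer, the same aggregation gets a much larger
   -- memory grant (visible in the actual execution plan) for t_wide
   -- than for t_narrow, even though the stored values are identical.
   SELECT val, count(*) FROM t_narrow GROUP BY val;
   SELECT val, count(*) FROM t_wide   GROUP BY val;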

Now, we all know that in Postgres there is no performance difference 
whatsoever for varchar columns, regardless of the declared length.
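
The kind of check I would do in Postgres (again, names are made up):

   -- Identical data, only the declared length differs.
   CREATE TABLE t_narrow (val varchar(100));
   CREATE TABLE t_wide   (val varchar(2000));
   INSERT INTO t_narrow SELECT md5(g::text) FROM generate_series(1, 1000000) g;
   INSERT INTO t_wide   SELECT val FROM t_narrow;
   ANALYZE t_narrow;
   ANALYZE t_wide;

   -- Per the above, both aggregations should behave the same despite
   -- the different declared lengths.
   EXPLAIN (ANALYZE, BUFFERS) SELECT val, count(*) FROM t_narrow GROUP BY val;
   EXPLAIN (ANALYZE, BUFFERS) SELECT val, count(*) FROM t_wide   GROUP BY val;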

In one of the comments to that answer, the question was asked how Postgres 
knows how much memory it needs to allocate to do the aggregation.

I guess this is based on the column statistics stored in pg_stats, but I am 
not sure.
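
For reference, this is the kind of entry I had in mind (table name made up):

   -- avg_width is the average width in bytes of the values actually
   -- stored, so it does not depend on the declared varchar length.
   SELECT attname, avg_width, n_distinct
   FROM pg_stats
   WHERE schemaname = 'public'
     AND tablename = 'orders';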

So here is my question: how does Postgres estimate/know the memory needed for 
the aggregation? Or does it dynamically resize the memory if the initial 
assumption was wrong?

Thomas



