Ivan Sergio Borgonovo wrote:
> I'm more interested in understanding when I should use materialized
> mode, e.g. should I be more concerned about memory or CPU cycles, and
> what should be taken as a reference to consider memory needs "large"?
> If, for example, I were going to split a large TEXT into a set of
> records (let's say I'm processing CSV that has been loaded into a
> text field), I'd consider the CPU use "light" but the memory needs
> "large". Would this task be suited for materialized mode?
Currently, there's no difference in terms of memory needs. The backend
always materializes the result of a SRF into a tuplestore anyway, if the
function didn't do it itself. There has been discussion of optimizing
away that materialization step, but no-one has come up with an acceptable
patch for that yet.

There probably isn't much difference in CPU usage either.

-- 
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
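
For concreteness, here is a minimal sketch of a materialize-mode SRF in C
using the standard funcapi.h interface; the function name
demo_materialize_srf and its one-column result are made up for
illustration, and the code is untested. The point is that the function
fills a tuplestore itself, which is the same structure the backend would
otherwise build around a value-per-call SRF:

    #include "postgres.h"
    #include "fmgr.h"
    #include "funcapi.h"
    #include "miscadmin.h"
    #include "utils/tuplestore.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(demo_materialize_srf);

    /*
     * Toy materialize-mode SRF: returns ten single-column rows.  It could
     * be declared in SQL as, say:
     *   CREATE FUNCTION demo_materialize_srf() RETURNS TABLE (n int)
     *     AS 'MODULE_PATHNAME' LANGUAGE C STRICT;
     */
    Datum
    demo_materialize_srf(PG_FUNCTION_ARGS)
    {
        ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
        TupleDesc       tupdesc;
        Tuplestorestate *tupstore;
        MemoryContext   per_query_ctx;
        MemoryContext   oldcontext;
        int             i;

        /* The caller must be able to accept a materialized result */
        if (rsinfo == NULL || !IsA(rsinfo, ReturnSetInfo) ||
            (rsinfo->allowedModes & SFRM_Materialize) == 0)
            ereport(ERROR,
                    (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                     errmsg("materialize mode required, but it is not allowed in this context")));

        /* Build a tuple descriptor from the declared result type */
        if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
            elog(ERROR, "return type must be a row type");

        /* The tuplestore and descriptor must live in per-query memory */
        per_query_ctx = rsinfo->econtext->ecxt_per_query_memory;
        oldcontext = MemoryContextSwitchTo(per_query_ctx);

        tupstore = tuplestore_begin_heap(true, false, work_mem);
        rsinfo->returnMode = SFRM_Materialize;
        rsinfo->setResult = tupstore;
        rsinfo->setDesc = CreateTupleDescCopy(tupdesc);

        MemoryContextSwitchTo(oldcontext);

        /* Fill the store; it spills to disk once it grows past work_mem */
        for (i = 0; i < 10; i++)
        {
            Datum   values[1];
            bool    nulls[1] = {false};

            values[0] = Int32GetDatum(i);
            tuplestore_putvalues(tupstore, rsinfo->setDesc, values, nulls);
        }

        /* No SRF_RETURN_NEXT loop: the executor pulls rows from setResult */
        return (Datum) 0;
    }

Because the tuplestore spills to disk once it exceeds work_mem, filling it
yourself costs roughly the same memory as letting the backend materialize a
value-per-call SRF, which is why the two modes don't differ much in memory
needs today.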