On 1/9/14, 1:18 PM, knizhnik wrote:
> So it's clear why we need shared memory for parallel query execution. But
> why does it have to be dynamic? Why can't it be preallocated at start time
> like most other resources used by PostgreSQL?

That would limit us to something like allocating a fixed maximum number of 
parallel processes (which might be workable) and only a very small amount of 
memory for IPC -- small enough that it could only hold a handful of tuples at 
a time. That sounds like a really inefficient way to shuffle data to and from 
parallel processes, especially because one or both sides would probably have 
to actually copy the data if we did it that way.

With DSM, if you want to do something like a parallel sort, each worker can 
put its results into memory that the parent process can access directly. A 
rough sketch of what that looks like is below.
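To make that concrete, here's a minimal sketch using the dsm_create / 
dsm_attach / dsm_segment_address / dsm_segment_handle calls from the DSM API 
being worked on for 9.4. The WorkerSortResult layout and both function names 
are invented purely for illustration; this isn't committed code, just the 
shape of the idea:

/*
 * Sketch only: a worker publishes its sorted run in a DSM segment and the
 * parent maps it and reads the tuples in place, with no copy through a
 * small preallocated IPC buffer.
 */
#include "postgres.h"

#include "storage/dsm.h"

/* Hypothetical header a worker writes at the start of its segment. */
typedef struct WorkerSortResult
{
    uint32      ntuples;        /* number of sorted tuples that follow */
    uint32      data_bytes;     /* total size of the tuple data */
    /* tuple data follows immediately after this header */
} WorkerSortResult;

/*
 * Worker side: size the segment to the result set, write into it, and hand
 * the segment handle back to the parent (e.g. through a small fixed-size
 * slot in the main shared memory area).  In practice the worker would sort
 * directly into the segment; the memcpy just keeps the sketch short.
 */
static dsm_handle
worker_publish_results(char *tuple_data, uint32 ntuples, uint32 data_bytes)
{
    dsm_segment *seg;
    WorkerSortResult *result;

    /* In the 9.4-era API dsm_create() takes just a size. */
    seg = dsm_create(sizeof(WorkerSortResult) + data_bytes);

    result = (WorkerSortResult *) dsm_segment_address(seg);
    result->ntuples = ntuples;
    result->data_bytes = data_bytes;
    memcpy((char *) result + sizeof(WorkerSortResult), tuple_data, data_bytes);

    return dsm_segment_handle(seg);
}

/*
 * Parent side: map the worker's segment and read the sorted tuples where
 * they already sit.
 */
static void
parent_consume_results(dsm_handle handle)
{
    dsm_segment *seg = dsm_attach(handle);
    WorkerSortResult *result = (WorkerSortResult *) dsm_segment_address(seg);
    char       *tuples = (char *) result + sizeof(WorkerSortResult);

    elog(DEBUG1, "worker returned %u sorted tuples (%u bytes) at %p",
         result->ntuples, result->data_bytes, tuples);

    /* ... merge this worker's run with the others here ... */

    dsm_detach(seg);
}
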

Of course the other enormous win for DSM is that it's the foundation for 
finally being able to resize things without a restart. For large-dollar sites 
that ability would be hugely beneficial.
--
Jim C. Nasby, Data Architect                       j...@nasby.net
512.569.9461 (cell)                         http://jim.nasby.net

