It is an interesting idea. For me, the significant information from the comparison is that we are doing something significantly wrong. A memory engine should naturally be faster, but I don't think it can be 1000x faster.
Yesterday we did some tests that show that for large tables (5 GB) our hashing is not effective. Disabling hash join and using merge join increased speed 2x.

On 9 Dec 2013 20:41, "knizhnik" <knizh...@garret.ru> wrote:
>
> Hello!
>
> I want to announce my implementation of an In-Memory Columnar Store extension for PostgreSQL:
>
> Documentation: http://www.garret.ru/imcs/user_guide.html
> Sources: http://www.garret.ru/imcs-1.01.tar.gz
>
> Any feedback, bug reports and suggestions are welcome.
>
> The vertical representation of the data is stored in PostgreSQL shared memory.
> This is why it is important to be able to utilize all available physical memory.
> These days servers with a terabyte or more of RAM are not exotic, especially in the financial world.
> But there is a limitation in Linux with standard 4kb pages on the maximal size of a mapped memory segment: 256GB.
> It is possible to overcome this limitation either by creating multiple segments - but that requires too many changes in the PostgreSQL memory manager -
> or by simply setting the MAP_HUGETLB flag (assuming that huge pages were allocated in the system).
>
> I found several messages related to the MAP_HUGETLB flag, the most recent one from 21 November:
> http://www.postgresql.org/message-id/20131125032920.ga23...@toroid.org
>
> I wonder what is the current status of this patch?
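Regarding the MAP_HUGETLB suggestion in the quoted message: below is a minimal, self-contained sketch of that approach (my own illustrative code, not taken from the patch or from IMCS). It tries to map an anonymous shared segment backed by huge pages and falls back to normal 4kb pages if that fails; the segment size and the fallback policy are assumptions for the example only, and a real patch would of course integrate this into PostgreSQL's shared-memory setup instead.

#define _GNU_SOURCE             /* MAP_ANONYMOUS / MAP_HUGETLB on some glibc versions */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define SEGMENT_SIZE ((size_t) 1 << 30)     /* 1 GB, illustrative only */

int
main(void)
{
    /* Try to back the segment with huge pages first. */
    void   *addr = mmap(NULL, SEGMENT_SIZE,
                        PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB,
                        -1, 0);

    if (addr == MAP_FAILED)
    {
        /* Huge pages not reserved/available: fall back to normal 4kb pages. */
        fprintf(stderr, "MAP_HUGETLB mmap failed (%s), falling back to normal pages\n",
                strerror(errno));
        addr = mmap(NULL, SEGMENT_SIZE,
                    PROT_READ | PROT_WRITE,
                    MAP_SHARED | MAP_ANONYMOUS,
                    -1, 0);
        if (addr == MAP_FAILED)
        {
            perror("mmap");
            return 1;
        }
    }

    /* ... the segment would now be handed over to the shared-memory code ... */

    munmap(addr, SEGMENT_SIZE);
    return 0;
}

Note that MAP_HUGETLB only succeeds if huge pages have actually been reserved in the system (e.g. via vm.nr_hugepages) and the mapping size is a multiple of the huge page size, which is why some fallback to regular pages seems necessary in practice.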