(2013/09/05 3:50), Pavel Stehule wrote:
     > we very successfully use a tmpfs volume for pgstat files (we use a
     > backport of the multiple stat files feature from 9.3 to 9.1)

    It works quite well as long as you have the objects (tables, indexes,
    functions) spread across multiple databases. Once you have one database
    with a very large number of objects, tmpfs is not as effective.

    It's going to help with stats I/O, but it's not going to help with high
    CPU usage (you're reading and parsing the stat files over and over), and
    every rewrite creates a copy of the file. So if you have 400MB of stats,
    you need an 800MB tmpfs plus some slack (say, 200MB). That means you'll
    use a ~1GB tmpfs although 400MB would be just fine, and that extra 600MB
    won't be available for the page cache etc.

    OTOH, it's true that if you have that many objects, 600MB of RAM is not
    going to help you anyway.
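
For reference, a minimal sketch of that sizing (the mount point here is
made up; stats_temp_directory is the standard GUC for relocating the
temporary stats files):

    # size the tmpfs at ~2x the stats files plus slack (400MB -> 1GB)
    mount -t tmpfs -o size=1g tmpfs /mnt/pgstat_tmp

    # postgresql.conf - point the temporary stats files at the tmpfs
    stats_temp_directory = '/mnt/pgstat_tmp'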


And just an idea - could we use a database for storing these stats? They
could be kept in unlogged tables. A second idea - run a single bgworker as
a persistent in-memory key/value store and keep the data in memory, with
some optimizations - using anti-caching and similar in-memory database
features.
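
A rough sketch of the unlogged-table variant (all names here are
hypothetical, loosely mirroring the per-table counters the collector
keeps today):

    -- hypothetical per-table counters; UNLOGGED means no WAL, so the
    -- stats can be lost on a crash, much like the stat files today
    CREATE UNLOGGED TABLE pgstat_tabentry (
        dboid      oid    NOT NULL,
        reloid     oid    NOT NULL,
        numscans   bigint NOT NULL DEFAULT 0,
        tuples_ins bigint NOT NULL DEFAULT 0,
        tuples_upd bigint NOT NULL DEFAULT 0,
        tuples_del bigint NOT NULL DEFAULT 0,
        PRIMARY KEY (dboid, reloid)
    );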

Yeah, I'm interested in this idea too.

If the stats collector had a dedicated connection to a backend in
order to store statistics in dedicated tables, we could easily take
advantage of indexes (btree, or hash?) and heap storage.
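
For example, with the hypothetical pgstat_tabentry table sketched above,
the access patterns might look like this (the OIDs are made up):

    -- a collector flush becomes an indexed update instead of a
    -- full rewrite of the stats file
    UPDATE pgstat_tabentry
       SET numscans = numscans + 10, tuples_ins = tuples_ins + 100
     WHERE dboid = 16384 AND reloid = 16385;

    -- and a backend asking for one table's stats no longer has to
    -- read and parse the whole file
    SELECT numscans, tuples_ins
      FROM pgstat_tabentry
     WHERE dboid = 16384 AND reloid = 16385;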

Is this worth trying?

Regards,
--
Satoshi Nagayasu <sn...@uptime.jp>
Uptime Technologies, LLC. http://www.uptime.jp

