@Merlin Moncure, I got those values from pg_tune, and I set shared_buffers = 24GB and effective_cache_size = 64GB.
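
I also realize that shared_buffers is allocated as one big shared-memory segment and that (at least on Linux) its pages only count as resident once they have actually been touched, so the 4-5 GB I see in RSS does not necessarily mean the 24GB setting is being ignored. To check how much of the 24GB is really populated, I plan to run something like the sketch below, assuming the pg_buffercache contrib module can be installed on the Postgres-XL datanode (8192 is the default block size):

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Buffers that have never held a page have relfilenode = NULL,
-- so count(relfilenode) counts only the buffers actually in use.
SELECT count(relfilenode) * 8192 / (1024 * 1024) AS buffers_in_use_mb,
       count(*)           * 8192 / (1024 * 1024) AS shared_buffers_mb
FROM   pg_buffercache;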
@Igor Neyman, Yes, I have a performance problem: the exact same query sometimes takes 11 ms and sometimes 100 ms, and the response time fluctuates seemingly at random even though the query does not change. Any idea how I should configure Postgres to use this hardware effectively and bring the response time down? (RAM = 128 GB, CPU = 24 cores, RAID-1+0 on SSD)

Thanks,
FattahRozzaq
*looking for answer*

On 06/10/2015, Igor Neyman <[email protected]> wrote:
>
>
> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On Behalf Of Igor Neyman
> Sent: Monday, October 05, 2015 2:25 PM
> To: FattahRozzaq <[email protected]>; [email protected]
> Subject: Re: [PERFORM] shared-buffers set to 24GB but the RAM only use 4-5
> GB average
>
>
>
> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On Behalf Of FattahRozzaq
> Sent: Monday, October 05, 2015 10:51 AM
> To: [email protected]
> Subject: [PERFORM] shared-buffers set to 24GB but the RAM only use 4-5 GB
> average
>
> I have configured postgresql.conf with the parameters below:
>
> log_destination = 'stderr'
> logging_collector = on
> log_directory = 'pg_log'
> listen_addresses = '*'
> log_destination = 'stderr'
> logging_collector = on
> log_directory = 'pg_log'
> log_rotation_age = 1d
> log_rotation_size = 1024MB
> listen_addresses = '*'
> checkpoint_segments = 64
> wal_keep_segments = 128
> max_connections = 9999
> max_prepared_transactions = 9999
> checkpoint_completion_target = 0.9
> default_statistics_target = 10
> maintenance_work_mem = 1GB
> effective_cache_size = 64GB
> shared_buffers = 24GB
> work_mem = 5MB
> wal_buffers = 8MB
> port = 40003
> pooler_port = 40053
> gtm_host = 'node03'
> gtm_port = 10053
>
> As you can see, I have set shared_buffers to 24GB, but my server still
> only uses 4-5 GB of RAM on average.
> I have 128GB of RAM in a single server.
> My database has 2 tables:
> - room (3GB when pg_dump'ed)
> - message (17GB when pg_dump'ed)
>
> The backend application is a messaging server; on average there are
> 40-180 connections to the Postgres server.
> The traffic is fairly heavy.
>
> How do I make Postgres-XL use the available RAM effectively with
> max_connections = 9999?
>
>
> Thanks,
> FattahRozzaq
> ____________________________________
>
> Why are you looking at memory consumption?
> Are you experiencing performance problems?
>
> Regards,
> Igor Neyman
>
> _______________________
>
> Also,
> Postgres-XL has its own mailing lists:
> http://sourceforge.net/p/postgres-xl/mailman/
>
> Regards,
> Igor Neyman
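
PS: To narrow down why the same query is sometimes 11 ms and sometimes 100 ms, I am going to capture one fast run and one slow run with EXPLAIN (ANALYZE, BUFFERS) and compare the "shared hit" vs "read" buffer counters; if the slow runs show reads, those pages simply were not in shared_buffers at that moment. A rough sketch of what I mean (the room_id filter is only a placeholder, not our real query):

EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM   message          -- the 17 GB table mentioned above
WHERE  room_id = 42;    -- placeholder predicate for illustration

If the buffer counters look the same for both fast and slow runs, I will look at connection churn next; with max_connections = 9999 it is probably more usual to rely on a connection pooler and keep max_connections much lower.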
