Re: [PERFORM] bad planning with 75% effective_cache_size

2012-04-18 Thread Josh Berkus
On 4/17/12 2:49 AM, Istvan Endredy wrote:
> Hi,
>
> thanks for the suggestion, but it didn't help. We have tried it earlier.
>
> 7500ms
> http://explain.depesz.com/s/ctn

This plan seems very odd -- doing individual index lookups on 2.8m rows is not standard planner behavior. Can you confirm that ...
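
A minimal sketch (not from this thread) of how one might confirm the planner settings in play and capture a fresh plan for comparison; "mydb" and the query are placeholders:

    # Inspect the cost settings that drive index-scan vs. sequential-scan
    # choices, then re-run the slow query with EXPLAIN (ANALYZE, BUFFERS)
    # so its plan can be compared against the posted one.
    psql -d mydb -c "SHOW effective_cache_size;"
    psql -d mydb -c "SHOW random_page_cost;"
    psql -d mydb -c "EXPLAIN (ANALYZE, BUFFERS) SELECT 1;"  # replace SELECT 1 with the slow query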

Re: [PERFORM] Linux machine aggressively clearing cache

2012-04-18 Thread Josh Berkus
On 4/12/12 8:47 AM, Steve Crawford wrote:
> On 03/30/2012 05:51 PM, Josh Berkus wrote:
>>
>> So this turned out to be a Linux kernel issue. Will document it on
>> www.databasesoup.com.
> Anytime soon? About to build two PostgreSQL servers and wondering if you
> have uncovered a kernel version or s...

Re: [PERFORM] scale up (postgresql vs mssql)

2012-04-18 Thread Merlin Moncure
On Wed, Apr 18, 2012 at 2:32 AM, Eyal Wilde wrote:
> hi all,
>
> i ran vmstat during the test :
>
> [yb@centos08 ~]$ vmstat 1 15
> procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st

Re: [PERFORM] scale up (postgresql vs mssql)

2012-04-18 Thread Andy Colson
On 4/18/2012 2:32 AM, Eyal Wilde wrote:
hi all,

i ran vmstat during the test :

[yb@centos08 ~]$ vmstat 1 15
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  0      0 6118620 1605...

Re: [PERFORM] scale up (postgresql vs mssql)

2012-04-18 Thread Eyal Wilde
hi all,

i ran vmstat during the test :

[yb@centos08 ~]$ vmstat 1 15
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 6131400 160556 111579200 112 22 17 ...
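
For anyone reproducing this: a sketch, not from the thread, of capturing vmstat output for the whole run rather than a fixed 15 samples; "./run_benchmark.sh" is a hypothetical stand-in for the actual test driver.

    # Sample system stats once per second into a log for the full test duration.
    vmstat 1 > vmstat.log &
    VMSTAT_PID=$!
    ./run_benchmark.sh            # hypothetical placeholder for the workload under test
    kill "$VMSTAT_PID"            # stop sampling when the test finishes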

Re: [PERFORM] Random performance hit, unknown cause.

2012-04-18 Thread Strange, John W
Check your pagecache settings: when doing heavy IO writes of a large file you can basically force a Linux box to completely stall. At some point, once the pagecache has reached its limit, it will force all IO to go synchronous -- from my understanding, at least. We are still fighting with this, but lots of ...
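
The settings the poster is most likely referring to are the kernel's dirty-page writeback limits; a sketch below, with illustrative numbers that are not recommendations from the thread.

    # Inspect the limits that bound how much dirty pagecache may accumulate
    # before background writeback starts and before writers are forced to block.
    sysctl vm.dirty_background_ratio vm.dirty_ratio
    # Example of lowering them so writeback starts earlier and stalls stay shorter;
    # the numbers are placeholders, not values taken from the thread.
    sudo sysctl -w vm.dirty_background_ratio=5
    sudo sysctl -w vm.dirty_ratio=10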