Hi Kevin,

I have a couple more questions; please help me out.

Question 1. How does our memory configuration correlate with the kernel
parameters? That is, is there any connection between shared_buffers and the
kernel's SHMMAX? For example, if I set shared_buffers higher than my current
SHMMAX value, would PostgreSQL refuse to use it (or vice versa)? Please shed
some light on this.
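
For example, this is roughly what I mean (the values here are hypothetical,
not my real settings). On Linux:

$ cat /proc/sys/kernel/shmmax     # kernel limit on one shared memory segment, in bytes
2147483648

$ grep shared_buffers postgresql.conf
shared_buffers = 4GB              # larger than the SHMMAX above

My understanding is that if shared_buffers (plus the server's other shared
memory) needs a segment larger than SHMMAX, PostgreSQL fails to start with an
error like "FATAL: could not create shared memory segment: Invalid argument".
Is that right?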

Question 2. I want to show the output of the same query before and after
changing the parameters; comparing the two, I found performance was actually
slightly degraded (Total runtime went from 142.812 ms to 145.127 ms).

I used EXPLAIN ANALYZE as you suggested. Before changing the parameters:

radius=# explain analyze select * from subsexpired where subsno between 5911 and 50911 and subsno not in (select subsno from subs where subsno between 5911 and 50911);
                                                                QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using ind_sub_new on subsexpired  (cost=1943.39..6943.84 rows=30743 width=69) (actual time=124.628..142.203 rows=430 loops=1)
   Index Cond: ((subsno >= 5911) AND (subsno <= 50911))
   Filter: (NOT (hashed SubPlan 1))
   Rows Removed by Filter: 62079
   SubPlan 1
     ->  Index Only Scan using subs_pkey on subs  (cost=0.00..1876.77 rows=26647 width=4) (actual time=0.030..44.743 rows=27397 loops=1)
           Index Cond: ((subsno >= 5911) AND (subsno <= 50911))
           Heap Fetches: 27397
 Total runtime: 142.812 ms
(9 rows)

After: with the parameters you suggested.
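(For reference, these are the three settings from your mail; I set them for
my session like this:)

radius=# SET seq_page_cost = 0.1;
SET
radius=# SET random_page_cost = 0.1;
SET
radius=# SET cpu_tuple_cost = 0.5;
SET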

radius=# explain analyze select * from subsexpired where subsno between 5911 and 50911 and subsno not in (select subsno from subs where subsno between 5911 and 50911);
                                                                QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using ind_sub_new on subsexpired  (cost=1943.39..6943.84 rows=30743 width=69) (actual time=128.351..144.532 rows=430 loops=1)
   Index Cond: ((subsno >= 5911) AND (subsno <= 50911))
   Filter: (NOT (hashed SubPlan 1))
   Rows Removed by Filter: 62079
   SubPlan 1
     ->  Index Only Scan using subs_pkey on subs  (cost=0.00..1876.77 rows=26647 width=4) (actual time=0.030..47.848 rows=27397 loops=1)
           Index Cond: ((subsno >= 5911) AND (subsno <= 50911))
           Heap Fetches: 27397
 Total runtime: 145.127 ms
(9 rows)
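
Also, just to check that I follow the work_mem rule of thumb you mention
below (work_mem < RAM * 0.25 / max_connections), with hypothetical numbers of
16 GB RAM and 40 max_connections:

    16384 MB * 0.25 / 40 = 102.4 MB

so a work_mem of 100 MB stays just under that ceiling. Is that how you
arrived at it?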


Thanks



On Sat, Dec 15, 2012 at 1:50 AM, Kevin Grittner <kgri...@mail.com> wrote:

> Shams Khan wrote:
>
> > *PERFORMANCE WAS BOOSTED UP DRASTICALLY* ---when I edited the
> > work_mem to 100 MB---just look at the difference;
>
> You only showed EXPLAIN output, which only shows estimated costs.
> As already suggested, try running both ways with EXPLAIN ANALYZE --
> which will show both estimates and actual.
>
> > One more thing Kevin, could you please help me out to understand
> > how you calculated those parameters?
>
> My own experience and reading about the experiences of others. If
> you follow the pgsql-performance list, you will get a better "gut
> feel" on these issues as well as picking up techniques for problem
> solving. Speaking of which, that would have been a better list to
> post this on. The one actual calculation I did was to make sure
> work_mem was less than RAM * 0.25 / max_connections. I didn't go
> all the way to that number because 100MB is enough for most
> purposes and your database isn't very much smaller than your RAM.
> You know, the melding of a routine calculation with gut feel.  :-)
>
> > Without more info, there's a bit of guesswork, but...
> > What extra info is required... please let me know...
>
> The main things I felt I was missing were a description of your
> overall workload and EXPLAIN ANALYZE output from a "typical" slow
> query.
>
> There's a page about useful information to post, though:
>
> http://wiki.postgresql.org/wiki/SlowQueryQuestions
>
> Now that you have somewhat reasonable tuning for the overall
> server, you can look at the EXPLAIN ANALYZE output of queries which
> don't run as fast as you think they should be able to do, and see
> what adjustments to cost factors you might need to make. With the
> numbers you previously gave, a wild guess would be that you'll get
> generally faster run-times with these settings:
>
> seq_page_cost = 0.1
> random_page_cost = 0.1
> cpu_tuple_cost = 0.5
>
> Be sure to look at actual run times, not EXPLAIN cost estimates.
>
> -Kevin
>
