Re: [PERFORM] Forcing use of specific index

2005-06-03 Thread Junaili Lie
Hi all,
I would also like to know if there is a way to force the use of a
specific index for a specific query. I am currently using PostgreSQL
7.4.6.

In my case I have a relatively big table (several million records)
that is frequently joined with other tables (explicitly or through a view).
The table has several indexes; some are single-column and some are multi-column.
Some queries are faster using a single-column index, while others are
faster using a multi-column index.
I have played around with SET STATISTICS, but it doesn't seem to make
any difference (I tried setting it to 1000 once, but got the same
result). I did ANALYZE and VACUUM after SET STATISTICS.
Any pointers on how to do this are greatly appreciated.
Thank you in advance,


J
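For what it's worth, 7.4 has no per-query index hints; the closest thing is toggling the planner's session-level enable_* settings to see what a plan restricted to a particular access path would look like. A diagnostic sketch (these settings exist in 7.4; use them for EXPLAIN experiments, not as a permanent fix):

```sql
-- Discourage sequential scans for this session only, then compare plans.
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT ...;   -- your query here
SET enable_seqscan = on;      -- restore the default

-- Related planner toggles: enable_indexscan, enable_nestloop,
-- enable_mergejoin, enable_hashjoin.
```

Note that "off" is implemented as a very large cost penalty rather than a hard prohibition, so the planner will still fall back to a seqscan when no index path exists at all.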



On 6/1/05, Tobias Brox <[EMAIL PROTECTED]> wrote:
> Is there any way to force the planner to use some specific index
> while creating the plan?  Other than eventually dropping all the other
> indices (which is obviously not a solution in a production setting anyway)?
> 
> I have one case where I have added 16 indices to a table, many of them
> being partial indices.  The table itself has only 50k rows, but it is
> frequently used in heavy joins.  I imagine the number of alternative paths
> the planner must examine can grow exponentially with the number of indices?
> 
> It seems to me that the planner quite often does not choose the "best"
> index, so I wonder if there is any easy way for me to check out what the
> planner thinks about a specific index :-)
> 
> --
> Tobias Brox, Beijing
> 

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[PERFORM] Help with rewriting query

2005-06-08 Thread Junaili Lie
Hi,
I have the following tables:
person - primary key id, and some attributes
food - primary key id, foreign key p_id referencing table person.

The food table stores all the food that a person is eating. More recent
food is indicated by a higher food.id.

I need to find the most recent food every person ate.
The query:
select f.p_id, max(f.id) from person p, food f where p.id=f.p_id group
by f.p_id
will work.
But I understand this is not the most efficient way. Is there another
way to rewrite this query? (Maybe one that involves order by ... desc
limit 1?)

Thank you in advance.
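The aggregate form above and the ORDER BY ... LIMIT 1 rewrite discussed later in the thread can be compared side by side. A self-contained sketch, with Python's sqlite3 standing in for Postgres (table names from the message; the data is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY);
    CREATE TABLE food   (id INTEGER PRIMARY KEY,
                         p_id INTEGER REFERENCES person(id));
    INSERT INTO person VALUES (1), (2);
    INSERT INTO food VALUES (10, 1), (11, 1), (12, 2);
""")

# Aggregate form from the original post: latest food.id per person.
agg = con.execute("""
    SELECT f.p_id, MAX(f.id)
    FROM person p JOIN food f ON p.id = f.p_id
    GROUP BY f.p_id
""").fetchall()

# Correlated-subquery rewrite: ORDER BY ... DESC LIMIT 1 per person.
sub = con.execute("""
    SELECT p.id,
           (SELECT f.id FROM food f
            WHERE f.p_id = p.id
            ORDER BY f.id DESC LIMIT 1)
    FROM person p
""").fetchall()

print(sorted(agg))  # [(1, 11), (2, 12)]
print(sorted(sub))  # [(1, 11), (2, 12)]
```

Both return the same rows; which one is faster depends on the plan, which is exactly what the rest of the thread is about.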



Re: [PERFORM] Help with rewriting query

2005-06-08 Thread Junaili Lie
Hi,
The suggested query below took forever when I tried it.
In addition, as suggested by Tobias, I also tried creating an index on
food(p_id, id), but still no luck (same query plan).
Here is the explain:
TEST1=# explain select f.p_id, max(f.id) from Food f, Person p where
(f.p_id = p.id) group by p.id;
                                          QUERY PLAN
----------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=0.00..214585.51 rows=569 width=16)
   ->  Merge Join  (cost=0.00..200163.50 rows=2884117 width=16)
         Merge Cond: ("outer".id = "inner".p_id)
         ->  Index Scan using person_pkey on person p  (cost=0.00..25.17 rows=569 width=8)
         ->  Index Scan using person_id_food_index on food f  (cost=0.00..164085.54 rows=2884117 width=16)
(5 rows)




TEST1=# explain select p.id, (Select f.id from food f where
f.p_id=p.id order by f.id desc limit 1) from person p;
                                          QUERY PLAN
----------------------------------------------------------------------------------------------------
 Seq Scan on person p  (cost=1.00..17015.24 rows=569 width=8)
   SubPlan
     ->  Limit  (cost=0.00..12.31 rows=1 width=8)
           ->  Index Scan Backward using food_pkey on food f  (cost=0.00..111261.90 rows=9042 width=8)
                 Filter: (p_id = $0)
(5 rows)

Any ideas or suggestions are appreciated.


On 6/8/05, Tobias Brox <[EMAIL PROTECTED]> wrote:
> [Junaili Lie - Wed at 12:34:32PM -0700]
> > select f.p_id, max(f.id) from person p, food f where p.id=f.p_id group
> > by f.p_id will work.
> > But I understand this is not the most efficient way. Is there another
> > way to rewrite this query? (maybe one that involves order by desc
> > limit 1)
> 
> perhaps try something like
> 
>  select p.id,(select f.id from food f where f.p_id=p.id order by f.id desc 
> limit 1)
>  from person p
> 
> not tested, no warranties.
> 
> Since subqueries can be inefficient, use "explain analyze" to see which one
> is actually better.
> 
> This issue will be solved in future versions of postgresql.
> 
> --
> Tobias Brox, +47-91700050
> Tallinn
>



Re: [PERFORM] Help with rewriting query

2005-06-09 Thread Junaili Lie
Hi Bruno,
I followed your suggestion.
The query plan shows that it uses the (id, person_id) index. However,
execution is still slow; I had to press Ctrl-C to stop it.
Maybe something is wrong with my postgresql config.
It's running Solaris on a dual Opteron with 4GB of RAM.
I allocated around 128MB for sorting, more than 80% of memory for
effective_cache_size, and shared_buffers = 32768.
Any further ideas are much appreciated.




On 6/8/05, Bruno Wolff III <[EMAIL PROTECTED]> wrote:
> On Wed, Jun 08, 2005 at 15:48:27 -0700,
>  Junaili Lie <[EMAIL PROTECTED]> wrote:
> > Hi,
> > The suggested query below took forever when I tried it.
> > In addition, as suggested by Tobias, I also tried to create index on
> > food(p_id, id), but still no goal (same query plan).
> > Here is the explain:
> > TEST1=# explain select f.p_id, max(f.id) from Food f, Person p where
> > (f.p_id = p.id) group by p.id;
> 
> The above is going to require reading all the food table (assuming no
> orphaned records), so the plan below seems reasonable.
> 
> >   QUERY PLAN
> > 
> >  GroupAggregate  (cost=0.00..214585.51 rows=569 width=16)
> >   ->  Merge Join  (cost=0.00..200163.50 rows=2884117 width=16)
> > Merge Cond: ("outer".id = "inner".p_id)
> > ->  Index Scan using person_pkey on person p
> > (cost=0.00..25.17 rows=569 width=8)
> > ->  Index Scan using person_id_food_index on food f
> > (cost=0.00..164085.54 rows=2884117 width=16)
> > (5 rows)
> >
> >
> >
> >
> > TEST1=# explain select p.id, (Select f.id from food f where
> > f.p_id=p.id order by f.id desc limit 1) from person p;
> 
> Using a subselect seems to be the best hope of getting better performance.
> I think you almost got it right, but in order to use the index on
> (p_id, id) you need to order by f.p_id desc, f.id desc. Postgres won't
> deduce this index can be used because f.p_id is constant in the subselect,
> you need to give it some help.
> 
> >QUERY PLAN
> > ---
> >  Seq Scan on Person p  (cost=1.00..17015.24 rows=569 width=8)
> >   SubPlan
> > ->  Limit  (cost=0.00..12.31 rows=1 width=8)
> >   ->  Index Scan Backward using food_pkey on food f
> > (cost=0.00..111261.90 rows=9042 width=8)
> > Filter: (p_id = $0)
> > (5 rows)
> >
> > any ideas or suggestions is appreciate.
>
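Bruno's suggestion above — putting the leading index column into the ORDER BY so the subselect can use the (p_id, id) index — can be sketched as follows. sqlite3 stands in for Postgres to keep the example self-contained; both forms return the same rows, and the point in 7.4 is only which index the subplan can use:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY);
    CREATE TABLE food   (id INTEGER PRIMARY KEY, p_id INTEGER);
    CREATE INDEX food_pid_id ON food (p_id, id);
    INSERT INTO person VALUES (1), (2);
    INSERT INTO food VALUES (10, 1), (11, 1), (12, 2);
""")

# Original subselect: ORDER BY f.id DESC only.  In Postgres 7.4 this
# matched food_pkey (a backward scan with a filter on p_id).
naive = con.execute("""
    SELECT p.id, (SELECT f.id FROM food f WHERE f.p_id = p.id
                  ORDER BY f.id DESC LIMIT 1)
    FROM person p
""").fetchall()

# Bruno's form: the leading index column appears in the ORDER BY,
# so the (p_id, id) index can satisfy the sort directly.
hinted = con.execute("""
    SELECT p.id, (SELECT f.id FROM food f WHERE f.p_id = p.id
                  ORDER BY f.p_id DESC, f.id DESC LIMIT 1)
    FROM person p
""").fetchall()

print(naive == hinted)  # True: same answer, different access path
```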



Re: [PERFORM] Help with rewriting query

2005-06-09 Thread Junaili Lie
Hi Kevin,
Thanks for the reply.
I tried that query. It is definitely faster, but not fast enough (it took
around 50 seconds to complete).
I have around 2.5 million rows in food and 1,000 in person.
Here is the query plan:
                                          QUERY PLAN
----------------------------------------------------------------------------------------------------
 Merge Join  (cost=0.00..11662257.52 rows=1441579 width=16)
   Merge Cond: ("outer".id = "inner".p_id)
   ->  Index Scan using person_pkey on person p  (cost=0.00..25.17 rows=569 width=8)
   ->  Index Scan using p_id_food_index on food f  (cost=0.00..11644211.28 rows=1441579 width=16)
         Filter: (NOT (subplan))
         SubPlan
           ->  Index Scan using p_id_food_index on food f2  (cost=0.00..11288.47 rows=2835 width=177)
                 Index Cond: (p_id = $0)
                 Filter: (id > $1)
(9 rows)

I appreciate if you have further ideas to troubleshoot this issue.
Thank you!

On 6/8/05, Kevin Grittner <[EMAIL PROTECTED]> wrote:
> This is a pattern which I've seen many of times.  I call it a "best
> choice" query -- you can easily match a row from one table against any
> of a number of rows in another, the trick is to pick the one that
> matters most.  I've generally found that I want the query results to
> show more than the columns used for making the choice (and there can be
> many), which rules out the min/max technique.  What works in a pretty
> straightforward way, and generally optimizes at least as well as the
> alternatives, is to join to the set of candidate rows and add a "not
> exists" test to eliminate all but the best choice.
> 
> For your example, I've taken some liberties and added hypothetical
> columns from both tables to the result set, to demonstrate how that
> works.  Feel free to drop them or substitute actual columns as you see
> fit.  This will work best if there is an index for the food table on
> p_id and id.  Please let me know whether this works for you.
> 
> select p.id as p_id, p.fullname, f.id, f.foodtype, f.ts
> from food f join person p
> on f.p_id = p.id
> and not exists (select * from food f2 where f2.p_id = f.p_id and f2.id >
> f.id)
> order by p_id
> 
> Note that this construct works for inner or outer joins and works
> regardless of how complex the logic for picking the best choice is.  I
> think one reason this tends to optimize well is that an EXISTS test can
> finish as soon as it finds one matching row.
> 
> -Kevin
> 
> 
> >>> Junaili Lie <[EMAIL PROTECTED]> 06/08/05 2:34 PM >>>
> Hi,
> I have the following table:
> person - primary key id, and some attributes
> food - primary key id, foreign key p_id reference to table person.
> 
> table food store all the food that a person is eating. The more recent
> food is indicated by the higher food.id.
> 
> I need to find what is the most recent food a person ate for every
> person.
> The query:
> select f.p_id, max(f.id) from person p, food f where p.id=f.p_id group
> by f.p_id will work.
> But I understand this is not the most efficient way. Is there another
> way to rewrite this query? (maybe one that involves order by desc
> limit 1)
> 
> Thank you in advance.
> 
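Kevin's "best choice" pattern above can be exercised end to end. A self-contained sketch with sqlite3; fullname and foodtype are the hypothetical columns from his example, and the NOT EXISTS is written as a WHERE clause, which is equivalent for an inner join:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY, fullname TEXT);
    CREATE TABLE food   (id INTEGER PRIMARY KEY, p_id INTEGER, foodtype TEXT);
    INSERT INTO person VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO food VALUES (10, 1, 'rice'), (11, 1, 'soup'), (12, 2, 'tofu');
""")

# Join each person to their food rows, then keep only the row for which
# no later food row exists -- the "best choice" per person.
rows = con.execute("""
    SELECT p.id AS p_id, p.fullname, f.id, f.foodtype
    FROM food f JOIN person p ON f.p_id = p.id
    WHERE NOT EXISTS (SELECT 1 FROM food f2
                      WHERE f2.p_id = f.p_id AND f2.id > f.id)
    ORDER BY p_id
""").fetchall()

print(rows)  # [(1, 'Ann', 11, 'soup'), (2, 'Bob', 12, 'tofu')]
```

Unlike the MAX() form, the full chosen row (not just its id) comes back, which is the advantage Kevin describes.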



[PERFORM] slow deletes on pgsql 7.4

2006-04-25 Thread Junaili Lie
Hi all,
we encounter issues when deleting from a table based on id (the primary
key). For certain values of 'id', the delete takes forever and the i/o is
100% busy.
Table scenario has around 1400 entries. It is the parent of 3 other tables.

                            Table "public.scenario"
     Column      |         Type          |                   Modifiers
-----------------+-----------------------+------------------------------------------------
 id              | bigint                | not null default nextval('scenario_seq'::text)
 name            | character varying(50) |
 description     | text                  |
 subscriber_id   | bigint                |
 organization_id | bigint                |
 schedule_id     | bigint                |
Indexes:
    "scenario_pkey" primary key, btree (id)
    "org_ind_scenario_index" btree (organization_id)
    "sch_ind_scenario_index" btree (schedule_id)
    "sub_ind_scenario_index" btree (subscriber_id)
Check constraints:
    "$3" CHECK (schedule_id >= 0)
    "$2" CHECK (organization_id >= 0)
    "$1" CHECK (subscriber_id >= 0)
Foreign-key constraints:
    "0_4774" FOREIGN KEY (schedule_id) REFERENCES schedule(id) ON DELETE CASCADE
    "0_4773" FOREIGN KEY (organization_id) REFERENCES organization(id) ON DELETE CASCADE
    "0_4772" FOREIGN KEY (subscriber_id) REFERENCES subscriber(id) ON DELETE CASCADE

In all the child tables, the foreign key has the same data type and is indexed.
When I do "delete from scenario where id='1023'", it takes less than 200 ms.
But when I do "delete from scenario where id='1099'", it takes forever (more than 10 minutes; I decided to cancel it).
I can't do explain analyze, but here is the explain:
MONSOON=# begin;
BEGIN
MONSOON=# explain delete from scenario where id='1099';
 
QUERY PLAN
--
 Index Scan using scenario_pkey on scenario  (cost=0.00..3.14 rows=1 width=6)
   Index Cond: (id = 1099::bigint)
(2 rows)

MONSOON=# explain delete from scenario where id='1023';
 
QUERY PLAN
--
 Index Scan using scenario_pkey on scenario  (cost=0.00..3.14 rows=1 width=6)
   Index Cond: (id = 1023::bigint)
(2 rows)

MONSOON=# explain analyze delete from scenario where id='1023';
  
QUERY PLAN

 Index Scan using scenario_pkey on scenario  (cost=0.00..3.14
rows=1 width=6) (actual time=0.028..0.030 rows=1 loops=1)
   Index Cond: (id = 1023::bigint)
 Total runtime: 0.174 ms
(3 rows)

I have also tried increasing the statistics target on both parent and child
tables to 100 and running VACUUM ANALYZE on the parent and all child tables.
But it is still just as slow.
The OS is Solaris 10, with fsync = true.
Any ideas what's going on?
Thanks in advance,

J


Re: [PERFORM] slow deletes on pgsql 7.4

2006-04-25 Thread Junaili Lie
I should also mention that select ... for update is fast:
MONSOON=# begin;explain analyze select * from SCENARIO WHERE id = '1099' FOR UPDATE;
BEGIN
  
QUERY PLAN
-
 Index Scan using scenario_pkey on scenario  (cost=0.00..3.17
rows=1 width=64) (actual time=0.016..0.017 rows=1 loops=1)
   Index Cond: (id = 1099::bigint)
 Total runtime: 0.072 ms
(3 rows)






Re: [PERFORM] slow deletes on pgsql 7.4

2006-04-25 Thread Junaili Lie
Hi,
Thanks for the answer.
I have double-checked that all the foreign keys referencing "id" on scenario are indexed.
I have even run VACUUM ANALYZE on the scenario table and all the tables that reference it.
Something interesting: it only happens for certain values.
i.e. delete from scenario where id='1023' is very fast, but delete from scenario where id='1099' runs forever.

Any ideas?

J


On 4/25/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Junaili Lie" <[EMAIL PROTECTED]> writes:
> we encounter issues when deleting from a table based on id (primary key). On
> certain 'id', it took forever to delete and the i/o is 100% busy.

Almost always, if delete is slow when selecting the same rows is fast,
it's because you've got a trigger performance problem --- most commonly,
there are foreign keys referencing this table from other tables and you
don't have the referencing columns indexed.

regards, tom lane
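Tom's point is that the EXPLAIN output only shows the index scan that locates the parent row; the per-row cost of the delete is in the foreign-key triggers that fire afterwards. A toy sketch of the cascade behavior (sqlite3, with a single made-up child table standing in for the three real ones):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # FK enforcement is off by default in sqlite
con.executescript("""
    CREATE TABLE scenario (id INTEGER PRIMARY KEY);
    CREATE TABLE child (
        id INTEGER PRIMARY KEY,
        scenario_id INTEGER REFERENCES scenario(id) ON DELETE CASCADE
    );
    -- Without an index like this, every parent DELETE scans the whole
    -- child table looking for referencing rows -- the classic slow-delete trap.
    CREATE INDEX child_scenario_idx ON child (scenario_id);
    INSERT INTO scenario VALUES (1023), (1099);
""")
con.executemany("INSERT INTO child VALUES (?, ?)",
                [(i, 1099) for i in range(1000)])

con.execute("DELETE FROM scenario WHERE id = 1099")
remaining = con.execute("SELECT count(*) FROM child").fetchone()[0]
print(remaining)  # 0: the cascade removed all 1000 referencing rows
```

So a parent id with many (or hard-to-find) referencing rows can take arbitrarily longer to delete than one with few, even though both EXPLAIN plans look identical.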


Re: [PERFORM] slow deletes on pgsql 7.4

2006-04-26 Thread Junaili Lie
It was in my first email.
Here it is again:

MONSOON=# explain delete from scenario where id='1099';
                                  QUERY PLAN
------------------------------------------------------------------------------
 Index Scan using scenario_pkey on scenario  (cost=0.00..3.14 rows=1 width=6)
   Index Cond: (id = 1099::bigint)
(2 rows)

MONSOON=# explain delete from scenario where id='1023';
                                  QUERY PLAN
------------------------------------------------------------------------------
 Index Scan using scenario_pkey on scenario  (cost=0.00..3.14 rows=1 width=6)
   Index Cond: (id = 1023::bigint)
(2 rows)

Thanks,
J
On 4/25/06, Tom Lane <[EMAIL PROTECTED]> wrote:
"Junaili Lie" <[EMAIL PROTECTED]> writes:
> ie. delete from scenario where id='1023' is very fast, but delete from
> scenario where id='1099' is running forever.

What does EXPLAIN show for each of those cases?

regards, tom lane


[PERFORM] slow i/o

2006-08-28 Thread Junaili Lie
Hi everyone,
We have PostgreSQL 8.1 installed on Solaris 10. It is running fine.
However, for the past couple of days, we have seen i/o reports
indicating that the i/o is busy most of the time. Before this, we only
saw the i/o busy occasionally (very rarely). So far, there have been no
performance complaints from customers, and the slow query reports don't
indicate anything out of the ordinary.
There have been no code changes in the application layer and no database configuration changes.
I am wondering if there's a tool on Solaris to tell which process is doing most of the i/o activity?
Thank you in advance.

J



Re: [PERFORM] slow i/o

2006-08-29 Thread Junaili Lie
Hi Jignesh,
Thank you for your reply.
I have the settings just as you described:

wal_sync_method = fsync
wal_buffers = 128
checkpoint_segments = 128
bgwriter_all_percent = 0
bgwriter_maxpages = 0

I ran the dtrace script and found the following:
During the i/o busy time, there are postgres processes with very high
BYTES counts. During the non-busy time, these same processes don't do
much i/o. I checked pg_stat_activity but couldn't find them. Doing ps
revealed that one such process was started at the same time postgres
started, which leads me to believe it may be the background writer or
some other internal process.

It is not autovacuum, because it doesn't disappear when I try turning autovacuum off.
Except for the ones mentioned above, I didn't modify the other background writer settings:

MONSOON=# show bgwriter_delay ;
 bgwriter_delay
----------------
 200
(1 row)

MONSOON=# show bgwriter_lru_maxpages ;
 bgwriter_lru_maxpages
-----------------------
 5
(1 row)

MONSOON=# show bgwriter_lru_percent ;
 bgwriter_lru_percent
----------------------
 1
(1 row)
This i/o spike only happens at minute 1 and minute 6 (i.e. 10:51, 10:56). If
I do select * from pg_stat_activity during this time, I will see a lot
of write queries waiting to be processed. After a few seconds,
everything seems to be gone. Writes that happen outside of this i/o
spike are processed very fast, and thus do not show up in pg_stat_activity.

Thanks in advance for the reply,
Best,

J
On 8/29/06, Jignesh K. Shah <[EMAIL PROTECTED]> wrote:
Also to answer your real question:

DTrace on Solaris 10:

# dtrace -s /usr/demo/dtrace/whoio.d

It will tell you the pids doing the io activity and on which devices.
There are more scripts in that directory, like iosnoop.d, iotime.d and
others, which also will give other details like the file accessed, the
time it took for the io, etc.

Hope this helps.

Regards,
Jignesh



Re: [PERFORM] slow i/o

2006-08-30 Thread Junaili Lie
I have tried this to no avail.
I have also tried changing the bgwriter_delay parameter to 10. The spike in i/o still occurs, although not on a consistent basis, and it only happens for a few seconds.
 
 
On 8/30/06, Jignesh K. Shah <[EMAIL PROTECTED]> wrote:
The bgwriter parameters changed in 8.1. Try

bgwriter_lru_maxpages = 0
bgwriter_lru_percent = 0

to turn off the bgwriter and see if there is any change.

-Jignesh


Re: [PERFORM] slow i/o

2006-09-26 Thread Junaili Lie
 
                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0  319.0    0.0 2576.0 147.4 256.0  462.0  802.5 100 100 c1t0d0s6 (/usr)
     cpu
 us sy wt id
  0  1  0 98

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  0.0  0.2    0.0    0.0   2  13 c1t0d0s1 (/var)
    0.0  366.0    0.0 3088.0 124.4 255.8  339.9  698.8 100 100 c1t0d0s6 (/usr)
     cpu
 us sy wt id
  6  5  0 90

                    extended device statistics
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    2.0    0.0   16.0  0.0  1.1    0.0  533.2   0  54 c1t0d0s1 (/var)
    1.0  282.0    8.0 2849.0  1.5 129.2    5.2  456.5  10 100 c1t0d0s6 (/usr)

Thank you in advance for your help!

Jun

On 8/30/06, Junaili Lie <[EMAIL PROTECTED]> wrote:
> I have tried this to no avail.
> I have also tried changing the bgwriter_delay parameter to 10. The spike
> in i/o still occurs, although not on a consistent basis, and it only
> happens for a few seconds.




[PERFORM] Hardware questions

2005-03-22 Thread Junaili Lie
Hi guys,
We are in the process of buying a new dell server.
Here is what we need to be able to do:
- we need to be able to run queries on tables that have 10-20 million
records (around 40-60 bytes each row) in less than 5-7 seconds.
We also need the hardware to be able to handle up to 50 million
records across a few tables (5 tables in the DB).

Here is what we are thinking:
- Dual Xeon 2.8 GHz
- 4GB DDR2 400 MHz dual-ranked DIMMs (does dual-ranked vs. single-ranked
make any difference in terms of performance?). Do you guys think 4GB
is reasonably enough?
- 73 GB 15k RPM Ultra 320 SCSI hard drive
- Dual on-board NICs (is this enough, or will a Gigabit network adapter help?)

Any input or suggestions are greatly appreciated.
Thank you,


Jun
