Re: [GENERAL] Slow COUNT

2005-12-04 Thread Bruno Wolff III
On Sun, Dec 04, 2005 at 18:28:53 +0100,
  Tino Wildenhain <[EMAIL PROTECTED]> wrote:
> Am Sonntag, den 04.12.2005, 09:56 -0600 schrieb Bruno Wolff III:
> > On Sun, Dec 04, 2005 at 14:40:49 +0100,
> >   Tino Wildenhain <[EMAIL PROTECTED]> wrote:
> > > 
> > > Doing something to enable aggregates in general to use
> > > an existing index would be a nice idea imho.
> > > (With all the visibility hinting in place)
> > 
> > Assuming you are referring to max and min, this has already been done and is
> > in 8.1.
> 
> I also mean sum, avg, ... and last but not least count :-)

Your comment about indexes threw me there. Indexes are not the problem. If you
use a WHERE clause with enough selectivity and there is an appropriate index,
an index scan will be used. There is a related issue that when postgres does
an index scan, it also needs to visit the heap to check visibility. The issue
there is that maintaining visibility information in the index has costs that
are currently believed to outweigh the benefits of not having to check
visibility in the heap. (Though recently there have been some new suggestions
in this area.)
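
For example (the index name here is made up; Poul's actual plan further down
the thread uses idx_node_date_id), a selective qualified count can use an
index scan, though every matching index entry still costs a heap visit for
the visibility check:

    CREATE INDEX idx_mytable_node ON mytable (node);
    EXPLAIN ANALYZE SELECT count(*) FROM mytable WHERE node = '1234567890';
    -- expect an Index Scan (or Bitmap Index Scan) node feeding the
    -- Aggregate, as in Poul's second plan below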

What you are looking for seems to be caching values for the case where the
full table is selected. That has problems, as described in the other response
and in more detail in the archives. This isn't something you want turned on
by default, but it would be nice if there were something packaged to make
doing this easier for people who want it for selected tables.

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Slow COUNT

2005-12-04 Thread Greg Stark
Tino Wildenhain <[EMAIL PROTECTED]> writes:

> Am Sonntag, den 04.12.2005, 09:56 -0600 schrieb Bruno Wolff III:
> > On Sun, Dec 04, 2005 at 14:40:49 +0100,
> > 
> > Assuming you are referring to max and min, this has already been done and is
> > in 8.1.
> 
> I also mean sum, avg, ... and last but not least count :-)

The naive implementation would mean serializing all table updates. In other
words, only one person can update, insert, or delete at a time. Until that
user commits, everybody else would be locked out of the table. You may as well
be using something like MySQL then, if that's acceptable.

The more sophisticated implementation would require customization to get
right. It requires a second table keeping track of deltas and a periodic job
aggregating those deltas. Which aggregates to put in it, how often to
aggregate them, and when to consult them instead of consulting the main table
would all be things that would require human intervention to get right.
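
As a rough illustration only (the names are invented, and this assumes a
delta table such as mytable_count_deltas(delta integer) that per-row
triggers keep filled with +1/-1 rows), the periodic aggregation step might
look something like:

    BEGIN;
    -- briefly block the insert/delete triggers so no delta rows slip in
    -- between summing and deleting
    LOCK TABLE mytable_count_deltas IN SHARE ROW EXCLUSIVE MODE;
    CREATE TEMP TABLE condensed AS
        SELECT coalesce(sum(delta), 0) AS delta FROM mytable_count_deltas;
    DELETE FROM mytable_count_deltas;
    INSERT INTO mytable_count_deltas SELECT delta FROM condensed;
    DROP TABLE condensed;
    COMMIT;

How often to run it, and which other aggregates to fold in the same way, are
exactly the kind of knobs that would need human tuning.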

It would be cool if there were a shrinkwrapped package, perhaps in contrib, to
do this with knobs for the user to play with instead of having to roll your
own. But nobody's done a good enough version yet to consider it.

-- 
greg


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Slow COUNT

2005-12-04 Thread Tino Wildenhain
Am Sonntag, den 04.12.2005, 09:56 -0600 schrieb Bruno Wolff III:
> On Sun, Dec 04, 2005 at 14:40:49 +0100,
>   Tino Wildenhain <[EMAIL PROTECTED]> wrote:
> > 
> > Doing something to enable aggregates in general to use
> > an existing index would be a nice idea imho.
> > (With all the visibility hinting in place)
> 
> Assuming you are referring to max and min, this has already been done and is
> in 8.1.

I also mean sum, avg, ... and last but not least count :-)

Thx for info though.

++Tino


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Slow COUNT

2005-12-04 Thread Bruno Wolff III
On Sun, Dec 04, 2005 at 14:40:49 +0100,
  Tino Wildenhain <[EMAIL PROTECTED]> wrote:
> 
> Doing something to enable aggregates in general to use
> an existing index would be a nice idea imho.
> (With all the visibility hinting in place)

Assuming you are referring to max and min, this has already been done and is
in 8.1.

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Slow COUNT

2005-12-04 Thread Tino Wildenhain
Am Sonntag, den 04.12.2005, 14:02 +0100 schrieb Guido Neitzer:
> On 02.12.2005, at 20:02 Uhr, Jaime Casanova wrote:
> 
> > so the way to do it is to create a trigger that records in a table the
> > number of rows...
> 
> As there are SO MANY questions about the "count(*)" issue, I wonder
> whether it makes sense to add a mechanism which does exactly the
> method mentioned above in a default PostgreSQL installation (perhaps
> switched off by default because of the performance impact on other
> operations)?!

I don't think this would match Postgres style - to include
a kludge for a rarely useful special case. I may be wrong,
but personally I have never needed an unqualified count(*) on a
table to be very fast.

Doing something to enable aggregates in general to use
an existing index would be a nice idea imho.
(With all the visibility hinting in place)

Just my 0.02Ct.

++Tino



---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [GENERAL] Slow COUNT

2005-12-04 Thread Guido Neitzer

On 02.12.2005, at 20:02 Uhr, Jaime Casanova wrote:


so the way to do it is to create a trigger that records in a table the
number of rows...


As there are SO MANY questions about the "count(*)" issue, I wonder
whether it makes sense to add a mechanism which does exactly the
method mentioned above in a default PostgreSQL installation (perhaps
switched off by default because of the performance impact on other
operations)?!


cug

--
PharmaLine, Essen, GERMANY
Software and Database Development







Re: [GENERAL] Slow COUNT

2005-12-02 Thread Jan Wieck

On 12/2/2005 2:02 PM, Jaime Casanova wrote:

so the way to do it is to create a trigger that records in a table the
number of rows...


Neither, because now you have to update one single row in that new
table, which takes an exclusive lock on that row. That is worse than an
exclusive lock on the original table because it has the same
serialization of writers, plus the additional work of updating the count
table as well as vacuuming it.


What you need is a separate table where your trigger will insert delta
rows with +1 or -1 for insert and delete. A view will sum() over that 
and tell you the true number of rows. Periodically you condense the 
table by replacing all current rows with one that represents the sum().
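
A minimal sketch of that idea (all names here are invented for illustration,
and the delta table should be seeded once with the current count while nobody
is writing to the table):

    CREATE TABLE mytable_count_deltas (delta integer NOT NULL);

    CREATE FUNCTION mytable_count_delta() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            INSERT INTO mytable_count_deltas VALUES (1);
            RETURN NEW;
        ELSE  -- DELETE
            INSERT INTO mytable_count_deltas VALUES (-1);
            RETURN OLD;
        END IF;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER mytable_count_delta
        AFTER INSERT OR DELETE ON mytable
        FOR EACH ROW EXECUTE PROCEDURE mytable_count_delta();

    -- seed with the table's current count
    INSERT INTO mytable_count_deltas SELECT count(*) FROM mytable;

    -- the "true" row count is the sum of the deltas
    CREATE VIEW mytable_rowcount AS
        SELECT coalesce(sum(delta), 0) AS rowcount FROM mytable_count_deltas;

The periodic condensation step can then be a job along the lines of the one
sketched under Greg Stark's message above.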



Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Slow COUNT

2005-12-02 Thread Jaime Casanova
On 12/2/05, Poul Møller Hansen <[EMAIL PROTECTED]> wrote:
> Rodrigo Gonzalez wrote:
> > I answer here so each one can help you.
> >
> > Cannot understand what you mean... index is numbered? Are you talking
> > about "autoincrement" as it is called in mysql? Use sequences please.
> >
>
> Sorry, forgot that your email address was in the reply-to field.
>
> I was seeking a solution on how to make a faster count on a lot of rows,
> and I was wondering about the difference between PostgreSQL's and MySQL's
> (MyISAM) way of handling counts.
>
> I understand the advantages of MVCC compared to row/table locking.
> And as far as I have understood, PostgreSQL counts the rows by looping
> through all rows, and that's why it takes that long when there are many
> rows.
>
> But how is MySQL (MyISAM) doing it, and why doesn't that way work in
> the MVCC model?
>
>
> Thanks,
>  Poul
>

That's because MyISAM isn't concerned about transactions and
visibility stuff... it simply stores and increments...

in postgres you have to know if the row is visible to the transaction
that is counting, if the row was deleted by a concurrent
transaction... etc, etc... it's not as easy as insert, increment...

so the way to do it is to create a trigger that records in a table the
number of rows...
in postgres there isn't such a mechanism built in for all tables because
it would be a penalty for both inserts and deletes in all tables, and
the fact is that there are few tables where you want to know exact
counts, if any.
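
A rough sketch of the simplest form of that trigger (one counter row that
every insert and delete updates; the names are invented, and as Jan Wieck's
and Greg Stark's replies above explain, this serializes writers on that
single row, so the delta-table variant is usually preferable):

    -- set up the counter while the table is quiescent
    CREATE TABLE rowcounts (tablename text PRIMARY KEY, n bigint NOT NULL);
    INSERT INTO rowcounts SELECT 'mytable', count(*) FROM mytable;

    CREATE FUNCTION mytable_rowcount_trig() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            UPDATE rowcounts SET n = n + 1 WHERE tablename = 'mytable';
            RETURN NEW;
        ELSE  -- DELETE
            UPDATE rowcounts SET n = n - 1 WHERE tablename = 'mytable';
            RETURN OLD;
        END IF;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER mytable_rowcount
        AFTER INSERT OR DELETE ON mytable
        FOR EACH ROW EXECUTE PROCEDURE mytable_rowcount_trig();

    -- instead of count(*):
    SELECT n FROM rowcounts WHERE tablename = 'mytable';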

--
regards,
Jaime Casanova
(DBA: DataBase Aniquilator ;)

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Slow COUNT

2005-12-02 Thread Rodrigo Gonzalez

No problem at all.

MyISAM stores the row count in its header (.frm file).

You can do something like this using a table that has, for example,
tablename and rowcount columns.


On the other hand, MySQL with MyISAM does not have row locking, so this 
problem is not considered.


Maybe if you tell us what you want and why you want to know the exact row
count, someone can help you.





Poul Møller Hansen wrote:


Rodrigo Gonzalez wrote:


I answer here so each one can help you.

Cannot understand what you mean... index is numbered? Are you talking
about "autoincrement" as it is called in mysql? Use sequences please.




Sorry, forgot that your email address was in the reply-to field.

I was seeking a solution on how to make a faster count on a lot of rows,
and I was wondering about the difference between PostgreSQL's and MySQL's
(MyISAM) way of handling counts.


I understand the advantages of MVCC compared to row/table locking.
And as far as I have understood, PostgreSQL counts the rows by looping
through all rows, and that's why it takes that long when there are many
rows.

But how is MySQL (MyISAM) doing it, and why doesn't that way work in
the MVCC model?


Thanks,
 Poul


---(end of broadcast)---
TIP 6: explain analyze is your friend



---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Slow COUNT

2005-12-02 Thread Poul Møller Hansen

Rodrigo Gonzalez wrote:

I answer here so each one can help you.

Cannot understand what you mean... index is numbered? Are you talking
about "autoincrement" as it is called in mysql? Use sequences please.




Sorry, forgot that your email address was in the reply-to field.

I was seeking a solution on how to make a faster count on a lot of rows,
and I was wondering about the difference between PostgreSQL's and MySQL's
(MyISAM) way of handling counts.


I understand the advantages of MVCC compared to row/table locking.
And as far as I have understood, PostgreSQL counts the rows by looping
through all rows, and that's why it takes that long when there are many
rows.

But how is MySQL (MyISAM) doing it, and why doesn't that way work in
the MVCC model?


Thanks,
 Poul


---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Slow COUNT

2005-12-02 Thread Rodrigo Gonzalez

I answer here so each one can help you.

Cannot understand what you mean... index is numbered? Are you talking
about "autoincrement" as it is called in mysql? Use sequences please.


Best regards

Rodrigo

Poul Møller Hansen wrote:


Rodrigo Gonzalez wrote:


Poul,

2 things... first, why do you think it will have an impact on inserts?

And the second one... use InnoDB tables in MySQL, and you will have
the same as with PostgreSQL; it's because of MVCC.


Best regards

Rodrigo



If the index is numbered, I assume, or rather guess, that the indexes
must be renumbered at inserts.


I was not trying to favour MySQL over PostgreSQL, I prefer PostgreSQL
to MySQL at any time. I was just wondering why it is so much faster in
MySQL and how I can achieve the same result in PostgreSQL?



Regards, Poul




---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Slow COUNT

2005-12-02 Thread Andrew Schmidt
And InnoDB is actually quite a bit worse than PostgreSQL (ignoring the
slight difference in row numbers):


InnoDB:

olp_live> select count(*) from team_players;
+----------+
| count(*) |
+----------+
|   465004 |
+----------+
1 row in set (1.54 sec)


PostgreSQL:

olp_live=# select count(*) from team_players;
+--------+
| count  |
+--------+
| 464747 |
+--------+
(1 row)

Time: 934.935 ms


Rodrigo Gonzalez wrote:


Poul,

2 things... first, why do you think it will have an impact on inserts?

And the second one... use InnoDB tables in MySQL, and you will have
the same as with PostgreSQL; it's because of MVCC.


Best regards

Rodrigo

Poul Møller Hansen wrote:

I can see it has been discussed before why COUNT(*) tends to be slow
on PostgreSQL compared with e.g. MySQL.
As far as I understood, it has something to do with missing numbering
on the rows in the indexes, and that there are supposedly plenty of reasons
not to implement that in PostgreSQL, though I haven't found an explanation.

However I can imagine it will have an impact on inserts.

My question is, which statements can I use to count the rows faster?
32 secs compared to 10 ms!


Thanks,
 Poul


db=# explain analyze select count(*) from my.table;
                                                        QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=60008.28..60008.28 rows=1 width=0) (actual time=32028.469..32028.474 rows=1 loops=1)
   ->  Seq Scan on table  (cost=0.00..54962.62 rows=2018262 width=0) (actual time=14.492..19592.014 rows=2018252 loops=1)
 Total runtime: 32028.750 ms
(3 rows)

db=# explain analyze select count(*) from my.table where node = '1234567890';
                                                                  QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=50023.14..50023.14 rows=1 width=0) (actual time=1790.967..1790.971 rows=1 loops=1)
   ->  Index Scan using idx_node_date_id on table  (cost=0.00..49968.76 rows=21753 width=0) (actual time=80.218..1570.747 rows=34648 loops=1)
         Index Cond: ((node)::text = '1234567890'::text)
 Total runtime: 1792.084 ms
(4 rows)

mysql> select count(*) from table;
+----------+
| count(*) |
+----------+
|  2018160 |
+----------+
1 row in set (0.01 sec)

mysql> select count(*) from table where node = '1234567890';
+----------+
| count(*) |
+----------+
|    34648 |
+----------+
1 row in set (0.23 sec)



---(end of broadcast)---
TIP 6: explain analyze is your friend



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings




--
_________________________________________________
G l o b a l   D i a g n o s t i c s   I n c.
Andrew Schmidt          t. 416-304-0049 x206
[EMAIL PROTECTED]       f. 866-697-8726
_________________________________________________



---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org


Re: [GENERAL] Slow COUNT

2005-12-01 Thread Greg Stark
Poul Møller Hansen <[EMAIL PROTECTED]> writes:

> My question is, which statements can I use to count the rows faster?
> 32 secs compared to 10 ms!

Try starting a transaction in one window updating one of those records and see
how long it takes your count(*) to complete in the other window while that
update is still pending.


-- 
greg


---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Slow COUNT

2005-12-01 Thread Rodrigo Gonzalez

Poul,

2 things... first, why do you think it will have an impact on inserts?

And the second one... use InnoDB tables in MySQL, and you will have the
same as with PostgreSQL; it's because of MVCC.


Best regards

Rodrigo

Poul Møller Hansen wrote:

I can see it has been discussed before why COUNT(*) tends to be slow
on PostgreSQL compared with e.g. MySQL.
As far as I understood, it has something to do with missing numbering
on the rows in the indexes, and that there are supposedly plenty of reasons
not to implement that in PostgreSQL, though I haven't found an explanation.

However I can imagine it will have an impact on inserts.

My question is, which statements can I use to count the rows faster?
32 secs compared to 10 ms!


Thanks,
 Poul


db=# explain analyze select count(*) from my.table;
                                                        QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=60008.28..60008.28 rows=1 width=0) (actual time=32028.469..32028.474 rows=1 loops=1)
   ->  Seq Scan on table  (cost=0.00..54962.62 rows=2018262 width=0) (actual time=14.492..19592.014 rows=2018252 loops=1)
 Total runtime: 32028.750 ms
(3 rows)

db=# explain analyze select count(*) from my.table where node = '1234567890';
                                                                  QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=50023.14..50023.14 rows=1 width=0) (actual time=1790.967..1790.971 rows=1 loops=1)
   ->  Index Scan using idx_node_date_id on table  (cost=0.00..49968.76 rows=21753 width=0) (actual time=80.218..1570.747 rows=34648 loops=1)
         Index Cond: ((node)::text = '1234567890'::text)
 Total runtime: 1792.084 ms
(4 rows)

mysql> select count(*) from table;
+----------+
| count(*) |
+----------+
|  2018160 |
+----------+
1 row in set (0.01 sec)

mysql> select count(*) from table where node = '1234567890';
+----------+
| count(*) |
+----------+
|    34648 |
+----------+
1 row in set (0.23 sec)



---(end of broadcast)---
TIP 6: explain analyze is your friend



---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Slow COUNT

2005-12-01 Thread Poul Møller Hansen
I can see it has been discussed before why COUNT(*) tends to be slow on
PostgreSQL compared with e.g. MySQL.
As far as I understood, it has something to do with missing numbering on
the rows in the indexes, and that there are supposedly plenty of reasons not
to implement that in PostgreSQL, though I haven't found an explanation.

However I can imagine it will have an impact on inserts.

My question is, which statements can I use to count the rows faster?
32 secs compared to 10 ms!


Thanks,
 Poul


db=# explain analyze select count(*) from my.table;
                                                        QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=60008.28..60008.28 rows=1 width=0) (actual time=32028.469..32028.474 rows=1 loops=1)
   ->  Seq Scan on table  (cost=0.00..54962.62 rows=2018262 width=0) (actual time=14.492..19592.014 rows=2018252 loops=1)
 Total runtime: 32028.750 ms
(3 rows)

db=# explain analyze select count(*) from my.table where node = '1234567890';
                                                                  QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=50023.14..50023.14 rows=1 width=0) (actual time=1790.967..1790.971 rows=1 loops=1)
   ->  Index Scan using idx_node_date_id on table  (cost=0.00..49968.76 rows=21753 width=0) (actual time=80.218..1570.747 rows=34648 loops=1)
         Index Cond: ((node)::text = '1234567890'::text)
 Total runtime: 1792.084 ms
(4 rows)

mysql> select count(*) from table;
+----------+
| count(*) |
+----------+
|  2018160 |
+----------+
1 row in set (0.01 sec)

mysql> select count(*) from table where node = '1234567890';
+----------+
| count(*) |
+----------+
|    34648 |
+----------+
1 row in set (0.23 sec)



---(end of broadcast)---
TIP 6: explain analyze is your friend