On Friday 01 October 2004 9:04 am, Shane | SkinnyCorp wrote:
> Hey, my name is Shane Witschen and I'm the Systems Administrator
> (mainly a developer) for a small web development company.  We
> recently switched over to PostgreSQL after going over some of the
> powerful features that it holds over MySQL.
>
> However, after we launched one site, we slapped ourselves in the
> head.  We have run a message board for a few years now, and have
> always used MySQL for the backend, but recently switched to
> PostgreSQL.  It has always been lightning fast with MySQL, but has
> slowed to nearly a halt in terms of online access time.  I can't
> seem to do anything about it!!  PLEASE HELP US!!

Others have mentioned issues with indexes, schema and such. Let me 
explain vacuum.

If you haven't vacuumed regularly then you may have very large disk 
files relative to your database size - especially if you have much 
update activity.

PG, unlike mysql, has full multi-version concurrency control. This is 
just one piece of the power that sets it apart from mysql and others. 
With MVCC in PG, an update writes a new version of the row on disk 
rather than overwriting the old one. The new tuples are not visible 
to other transactions until the updating transaction commits, and the 
old versions remain on disk until no running transaction can still 
see them.
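
To make that concrete, here's a tiny sketch (the table and values are 
invented for illustration):

```sql
-- Each UPDATE writes a new row version; the old one stays on disk
-- as a "dead tuple" until a vacuum marks that space reusable.
CREATE TABLE counters (id integer PRIMARY KEY, hits integer);
INSERT INTO counters VALUES (1, 0);
UPDATE counters SET hits = hits + 1 WHERE id = 1;  -- leaves one dead tuple
UPDATE counters SET hits = hits + 1 WHERE id = 1;  -- leaves another
-- A busy message board doing thousands of such updates between
-- vacuums ends up with table files that are mostly dead tuples.
```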

Vacuuming marks the unused space as reusable. If you did a lot of 
development without regular vacuums then your disk size may be huge 
compared to what is necessary. You can physically shrink the file by 
running a "vacuum full" but this will lock the table for the duration 
of the vacuum (probably not too much of a problem if you are already 
seeing 10+ second page loads).
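
Something along these lines (substitute your own table names - 
"messages" is just a placeholder):

```sql
-- Rewrites the table down to its minimum size; takes an exclusive
-- lock, so other queries on the table block until it finishes.
VACUUM FULL VERBOSE messages;
-- Rebuilding bloated indexes afterwards can help as well:
REINDEX TABLE messages;
```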

Note: getting your disk files to a size that allows them to be better 
cached by the OS can make a huge difference in query speed - 
especially if you are doing full table scans either intentionally or 
due to bad design.

The "analyze" command updates the statistics that PG needs in order 
for the query planner to make appropriate use of indexes and such.
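
For example (the "messages" table and its topic_id column are made up 
here):

```sql
ANALYZE messages;   -- refresh the planner's statistics
EXPLAIN SELECT * FROM messages WHERE topic_id = 42;
-- "Index Scan" in the plan output means an index is being used; a
-- "Seq Scan" on a big table suggests stale stats or a missing index.
```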

After running your "vacuum full", be sure to run "vacuum analyze" 
regularly. Check out the autovacuum daemon to help automate this 
process.
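
If you'd rather not run the daemon, a cron job works fine too - 
something like this nightly entry (the schedule is just a suggestion):

```
# crontab for the postgres user: vacuum and analyze all databases at 3am
0 3 * * * vacuumdb --all --analyze --quiet
```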

I have a similar machine (but more RAM) and can easily run a count(*) 
on a 4+ million row table in well under 4 seconds, which makes me 
suspicious of your vacuuming.
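
You can get a rough idea of the bloat from the system catalogs 
(the numbers are only as fresh as your last vacuum/analyze, and 
"messages" is again a placeholder name):

```sql
-- relpages = on-disk pages (8KB each by default);
-- reltuples = estimated live rows.
-- A huge relpages count against few reltuples means lots of dead space.
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname = 'messages';
```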

Cheers,
Steve


---------------------------(end of broadcast)---------------------------
TIP 7: don't forget to increase your free space map settings
