Chris Fossenier wrote:

I currently have a client with a database that must hold 125 million records,
with about 250 fields all told.
The database has been normalized and indexed appropriately.
If any of you have worked with MySQL, you will have discovered that indexing
is very limited: you can only have one index file per table. The indexing
process actually creates a full copy of the original table, and once you get
above two indexes on 125 million records it is extremely slow.
Should I even bother trying PostgreSQL to resolve this issue?
We can generate the same indexes in MS SQL and Oracle in a fraction of the
time it takes MySQL.

This is a known MySQL-specific problem. I know of no other database that handles index creation in such an inefficient way.
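Assuming the tables are MyISAM (a guess, since the storage engine isn't stated), part of the pain is that every separate ALTER TABLE ... ADD INDEX statement rebuilds the whole table. A rough sketch, with hypothetical table and column names, of batching the index builds so the 125-million-row table is only copied once:

    -- Hypothetical names; assumes MyISAM, where each ALTER TABLE
    -- statement rebuilds the entire table. Adding all indexes in a
    -- single statement means only one rebuild instead of three.
    ALTER TABLE big_table
        ADD INDEX idx_col_a (col_a),
        ADD INDEX idx_col_b (col_b),
        ADD INDEX idx_col_c (col_c);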


Creating an index in PostgreSQL requires scanning the entire main table and sorting the resulting key set ... I don't see how to do it better. Every index is stored in its own (set of) file(s). During index creation, CREATE INDEX requires an exclusive lock on the table, so reindexing your entire DB is not your preferred operation during the peak access times of your webserver. But I think professional DBAs don't torture Oracle that way either.
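As a rough sketch of what that per-index work looks like (the table and column names here are made up), each CREATE INDEX does its own scan and sort and writes its own file(s):

    -- Hypothetical names; each statement scans and sorts the table
    -- independently and stores the result in its own file(s).
    CREATE INDEX idx_orders_customer ON orders (customer_id);
    CREATE INDEX idx_orders_date     ON orders (order_date);

    -- Writes to "orders" are blocked while each index is being built,
    -- so schedule these outside peak web traffic.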


Jan


Thanks
Chris.




--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== [EMAIL PROTECTED] #


