Andrew Sullivan wrote:
On Mon, Aug 14, 2006 at 05:26:10PM -0400, Sumeet Ambre wrote:
Hi All,

I have a database which consists of 20 million records and I've split up the db into 6-7 dbs.

You can do this (as someone suggested with dblink), but I'm wondering
why the split?  20 million records isn't very big.
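For reference, querying one of the split databases through dblink might look like this (a sketch, assuming the dblink contrib module is installed; the database name `articles_db` and the column list are illustrative):

```sql
-- dblink(connstr, sql) returns SETOF record, so the AS clause
-- must declare the result columns and their types.
SELECT t.doc_id, t.title
FROM dblink('dbname=articles_db',
            'SELECT doc_id, title FROM base WHERE pub_year = 2005')
     AS t(doc_id integer, title text);
```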

A
The database is designed this way because our organization wants to split different datasets into separate entities, and they may eventually run a separate Postgres instance for each dataset. The problem is that one of the databases holds 14 million records, and when I query the base database (20 million records) the query is very slow: it takes 2-3 minutes to run. Below are the sample schema for my base table and the query I run against it. Is there any way I could speed this up?

sample schema for my base table
---------------------------------

doc_id | title | authors | pub_year | abs | db_name | ...

In the above schema, the field db_name names the database that contains the whole record. I'm trying to run a query that searches on title, author names, pub_year, and abstract simultaneously...

e.g. SELECT * FROM base WHERE ((title ILIKE '%something%') AND (authors ~* '.*something.*') AND (db_name='something'));
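Leading-wildcard ILIKE patterns and ~* regexes can't use a plain B-tree index, so a query like the one above falls back to a sequential scan over the whole table. One commonly suggested approach is a trigram index from the pg_trgm contrib module (a sketch; GIN operator-class support for pg_trgm requires a sufficiently recent PostgreSQL release, and the index names here are illustrative):

```sql
-- Trigram indexes let ILIKE '%...%' and ~* patterns use an
-- index scan instead of scanning every row.
CREATE INDEX base_title_trgm   ON base USING gin (title gin_trgm_ops);
CREATE INDEX base_authors_trgm ON base USING gin (authors gin_trgm_ops);

-- A plain B-tree index covers the equality condition on db_name:
CREATE INDEX base_db_name_idx  ON base (db_name);
```

Running the query under EXPLAIN ANALYZE before and after would confirm whether the planner actually uses the new indexes.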

Any suggestions for speeding up this query?

Thanks,
Sumeet.

---------------------------(end of broadcast)---------------------------
TIP 9: In versions below 8.0, the planner will ignore your desire to
      choose an index scan if your joining column's datatypes do not
      match
