A talk on handling giant databases in PostgreSQL:

http://wiki.postgresql.org/images/3/38/PGDay2009-EN-Datawarehousing_with_PostgreSQL.pdf

Wikipedia on database partitioning (suggests using a UNION):
  http://en.wikipedia.org/wiki/Partition_%28database%29
PostgreSQL documentation on how to do it:
  http://www.postgresql.org/docs/current/interactive/ddl-partitioning.html
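One way to apply the UNION suggestion to your DIH setup is to generate a single SQL statement spanning all the shard tables, and use that as the query of one DIH entity. A minimal sketch (the `Document1` .. `Document36` table names are taken from your question; the shard count is assumed configurable):

```python
# Sketch: build one SQL statement that UNION ALLs the N identical shard
# tables (Document1 .. Document36, as named in the question) so a single
# DIH entity query can pull rows from all of them.
NUM_SHARDS = 36

def shard_union_query(num_shards=NUM_SHARDS):
    # One SELECT per shard table, joined with UNION ALL (keeps duplicates,
    # avoids the sort/dedup cost of plain UNION).
    selects = ["SELECT * FROM Document%d" % i for i in range(1, num_shards + 1)]
    return " UNION ALL ".join(selects)

print(shard_union_query(3))
# -> SELECT * FROM Document1 UNION ALL SELECT * FROM Document2 UNION ALL SELECT * FROM Document3
```

The generated string would go into the `query` attribute of the DIH entity. If the shards later move to separate servers, this single-query approach stops working and you would need one dataSource/entity pair per server instead.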

 Dennis Gearon


Signature Warning
----------------
It is always a good idea to learn from your own mistakes. It is usually a better
idea to learn from others' mistakes, so you do not have to make them yourself.
from 'http://blogs.techrepublic.com.com/security/?p=4501&tag=nl.e036'


EARTH has a Right To Life,
otherwise we all die.



----- Original Message ----
From: Andy <angelf...@yahoo.com>
To: solr-user@lucene.apache.org
Sent: Sat, December 18, 2010 6:20:54 PM
Subject: DIH for sharded database?

I have a table that is broken up into many virtual shards. So basically I have
N identical tables:

Document1
Document2
.
.
Document36

Currently these tables all live in the same database, but in the future they
may be moved to different servers to scale out if the need arises.

Is there any way to configure a DIH for these tables so that it will 
automatically loop through the 36 identical tables and pull data out for 
indexing?

Something like (pseudo code):

for (i = 1; i <= 36; i++) {
   ## retrieve data from the table Document{$i} & index the data
}

What's the best way to handle a situation like this?

Thanks
