Hi list,
 
I'm currently in the design phase of a rather large DB project
and would love to hear your qualified thoughts on it.
 
You can think of the data to be stored as objects with some properties, a
unique ID, and then 21 fields which will contain TEXT data.

The DB acts as a kind of archive that allows the user to search the
objects' text fields.
Each field will contain around 100 to 1,000 characters, and only 7-12
fields will be used per object.
I'm expecting about 1,000,000 objects to be stored.
 
I thought of organizing the whole archive in three tables:
one for the general properties of the object,
one containing the object's ID and text columns 1 to 9,
and one containing the ID and the remaining columns.
 
I split the data up like this because in most cases the user will only
search the first 9 columns, so the DB would not need to scan all of
the data.
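To make the layout concrete, here is a rough sketch of the schema I have
in mind (standard SQL; table and column names are just placeholders):

CREATE TABLE object (
    object_id  INTEGER PRIMARY KEY,
    -- general properties of the object go here, e.g.
    title      VARCHAR(255)
);

-- text columns 1 to 9 (searched in most cases)
CREATE TABLE object_text_a (
    object_id  INTEGER PRIMARY KEY REFERENCES object(object_id),
    col01 TEXT, col02 TEXT, col03 TEXT,
    col04 TEXT, col05 TEXT, col06 TEXT,
    col07 TEXT, col08 TEXT, col09 TEXT
);

-- text columns 10 to 21 (searched less often)
CREATE TABLE object_text_b (
    object_id  INTEGER PRIMARY KEY REFERENCES object(object_id),
    col10 TEXT, col11 TEXT, col12 TEXT, col13 TEXT,
    col14 TEXT, col15 TEXT, col16 TEXT, col17 TEXT,
    col18 TEXT, col19 TEXT, col20 TEXT, col21 TEXT
);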
 
Does this layout make sense?
The queries run against these tables will be simple lookups for words in the
text, with options similar to Google (exact phrase, single word, all words, ...).
The user can also select which columns to search in.
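For example, an "all words" search for two words, restricted to two
user-selected columns, could look roughly like this (plain LIKE is shown
only for illustration; without some form of full-text indexing it would
force a scan of every row):

-- "all words" search for 'foo' and 'bar' in columns 3 and 5 only
SELECT o.object_id, o.title
FROM   object o
JOIN   object_text_a t ON t.object_id = o.object_id
WHERE  (t.col03 LIKE '%foo%' OR t.col05 LIKE '%foo%')
  AND  (t.col03 LIKE '%bar%' OR t.col05 LIKE '%bar%');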
 
If you have any thoughts about this design, or ideas on indexes or anything
else that might improve performance, I would love to hear them!
 
TIA
 
André
 
