Ryan Bradetich <[EMAIL PROTECTED]> writes:
> Although the table schema is immaterial, I will provide it so we have a
> common framework for this discussion:
>
> host_id   integer  (not null)
> timestamp datetime (not null)
> category  text     (not null) [<= 5 chars]
> anomaly   text     (not null) [<= 1024 chars]
>
> This table is used to store archived data, so each row in the table must
> be unique. Currently I am using a primary key across each column to
> enforce this uniqueness.

It's not real clear to me why you bother enforcing a constraint that the
complete row be unique. Wouldn't a more useful constraint be that the
first three columns be unique? Even if that's not correct, what's wrong
with tolerating a few duplicates? You can't tell me it's to save on
storage ;-)

> I am not sure why all the data is duplicated in the index ... but I bet
> it has to do with performance since it would save a lookup in the main
> table.

An index that can't prevent looking into the main table wouldn't be worth
anything AFAICS ...

			regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html
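The three-column constraint suggested above could be declared roughly as
follows. This is only a sketch: the table name `anomaly_log` is assumed
(the original post does not name the table), and the column types follow
the quoted schema, with `datetime` rendered as the standard `timestamp`
type.

```sql
-- Sketch, assuming the table name anomaly_log; column definitions
-- are taken from the schema quoted above. Making only the first
-- three columns the primary key keeps the wide anomaly column
-- (up to 1024 chars) out of the index entirely.
CREATE TABLE anomaly_log (
    host_id   integer       NOT NULL,
    "timestamp" timestamp   NOT NULL,  -- quoted: timestamp is a type name
    category  varchar(5)    NOT NULL,
    anomaly   varchar(1024) NOT NULL,
    PRIMARY KEY (host_id, "timestamp", category)
);
```

Compared with a primary key spanning all four columns, this should make
the index substantially smaller, at the cost of rejecting two anomalies
recorded for the same host, timestamp, and category.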