Jonah H. Harris wrote:
> On 6/6/07, Craig James <[EMAIL PROTECTED]> wrote:
>> They're blowing smoke if they think Oracle can do this.
>
> Oracle could handle this fine.
>
>> Oracle fell over dead, even with the best indexing possible,
>> tuned by the experts, and using partitions keyed to the
>> customerID.
>
> I don't think so; whoever tuned this likely didn't know what they were doing.

Wrong on both counts.

You didn't read my message.  I said that *BOTH* Oracle and Postgres performed 
well with table-per-customer.  I wasn't Oracle bashing.  In fact, I was doing 
the opposite: Someone's coworker claimed ORACLE was the miracle cure for all 
problems, and I was simply pointing out that there are no miracle cures.  (I 
prefer Postgres for many reasons, but Oracle is a fine RDBMS that I have used 
extensively.)

The technical question is simple: table-per-customer or one big table for everyone.  The 
answer is, "it depends."  It depends on your application, your 
read-versus-write ratio, the table size, the design of your application software, and a 
dozen other factors.  There is no simple answer, but there are important technical 
insights, and I'm happy to report that various people contributed them to this discussion.  
Perhaps you have some technical insight too, because it really is an important question.
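To make the tradeoff concrete, here is a toy sketch (in Python, with made-up names and no real indexes, so it deliberately ignores most of the factors above): one big table means every query must pick a customer's rows out of everyone's data, while table-per-customer means a query touches only that customer's rows.

```python
# Toy model of "one big table" vs. "table-per-customer".
# All names and sizes here are hypothetical, for illustration only.
from collections import defaultdict

big_table = []                    # one big table: rows of (customer_id, order_id)
per_customer = defaultdict(list)  # table-per-customer: customer_id -> its own rows

for cust in range(3):
    for order in range(5):
        big_table.append((cust, order))
        per_customer[cust].append(order)

# Fetching customer 1's orders:
scan = [o for (c, o) in big_table if c == 1]  # must consider every row
direct = per_customer[1]                      # touches only customer 1's data

print(scan == direct)  # True -- same answer, very different amount of work
```

A real database narrows the big-table scan with an index, of course, which is exactly why the choice "depends" rather than one design always winning.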

The reason I assert (and stand by) that "they're blowing smoke" when they 
claim Oracle has the magic cure is that Oracle and Postgres are both relational 
databases: they write their data to disk, and both use indexes with O(log(N)) 
retrieval/update times.  Neither Oracle nor Postgres has a magical workaround for 
these facts.
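The O(log(N)) point is easy to see with a back-of-the-envelope calculation (a sketch; the fanout here is an assumed round number, not a real Oracle or Postgres constant): a B-tree's depth grows with the logarithm of the row count, base fanout, so even a thousandfold increase in rows adds only a level or so of index depth, in any B-tree-based RDBMS.

```python
import math

def btree_depth(n_rows, fanout=256):
    """Approximate depth of a B-tree index holding n_rows entries.

    `fanout` (keys per index page) is an illustrative assumption;
    real values depend on page size and key width.
    """
    if n_rows <= 1:
        return 1
    return math.ceil(math.log(n_rows, fanout))

# A thousandfold jump in table size barely changes index depth:
for n in (10**6, 10**9):
    print(n, btree_depth(n))  # 10^6 rows -> depth 3; 10^9 rows -> depth 4
```

That logarithmic flattening is the same for every B-tree database, which is why neither product can claim a fundamentally better big-O here.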

Craig
