Marek Florianczyk <[EMAIL PROTECTED]> writes:
> Each client was doing:
> 10 x connect, "select * from table[rand(1-4)] where
> number=[rand(1-1000)]", disconnect -- (fetch one row)

Seems like this is testing the cost of connect and disconnect to the
exclusion of nearly all else.  PG is not designed to process just one
query per connection --- backend startup is too expensive for that.
Consider using a connection-pooling module if your application wants
short-lived connections.

> I noticed that queries like: "\d table1" "\di" "\dp" are extremely slow,

I thought maybe you'd uncovered a performance issue with lots of
schemas, but I can't reproduce it here.  I made 10000 schemas each
containing a table "mytab", which is about the worst case for an
unqualified "\d mytab", but it doesn't seem excessively slow --- maybe
about a quarter second to return the one mytab that's actually in my
search path.  In realistic conditions where the users aren't all using
the exact same table names, I don't think there's an issue.

			regards, tom lane
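For illustration only, here is a minimal sketch of what the connection-pooling
suggestion could look like from a Python client using psycopg2's built-in pool.
The DSN, user, pool sizes, and table layout (table1..table4 with a "number"
column) are assumptions taken from the quoted test, not from the original setup.

    import random

    from psycopg2 import pool

    # Assumed connection parameters and pool sizes; adjust to the real setup.
    db_pool = pool.SimpleConnectionPool(1, 10, dbname="test", user="marek")

    def run_one_query():
        # Borrow an already-started backend from the pool instead of paying
        # connect/backend-startup cost for every single query.
        conn = db_pool.getconn()
        try:
            with conn.cursor() as cur:
                table = "table%d" % random.randint(1, 4)
                cur.execute(
                    "SELECT * FROM " + table + " WHERE number = %s",
                    (random.randint(1, 1000),),
                )
                return cur.fetchone()
        finally:
            # Return the connection to the pool so it can be reused.
            db_pool.putconn(conn)

    for _ in range(10):
        run_one_query()

    db_pool.closeall()

Likewise, a rough sketch of the many-schemas test described above (10000
schemas, each containing a table named "mytab") could be scripted as below,
after which an unqualified "\d mytab" can be timed from psql.  The schema
names s0..s9999, the "number" column, and the DSN are placeholders.

    import psycopg2

    # Assumed DSN; autocommit avoids one huge transaction for the DDL.
    conn = psycopg2.connect(dbname="test")
    conn.autocommit = True
    cur = conn.cursor()
    for i in range(10000):
        # One schema per simulated user, each with its own "mytab".
        cur.execute("CREATE SCHEMA s%d" % i)
        cur.execute("CREATE TABLE s%d.mytab (number integer)" % i)
    cur.close()
    conn.close()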