lou wrote:

In an email I received from Gianni Mariani <[EMAIL PROTECTED]> on Tue, 01 Jul 2003 22:25:11 -0700, he wrote:

...

Not true: PostgreSQL 7.3.3 works fine with more than 64 connections. Alter the value in PGDATA/postgresql.conf and set the buffer size to 2x MAX_CONN. I don't see the point of hardcoding something like this. :)
Good point. I hadn't noticed this.  Kewel.

I think the right model is a single-process server. At my last job we built a streaming server where a single process handled 1000 clients at 256 kbit streams while hardly breaking a sweat on a dual 1.2 GHz AMD. Everything was asynchronous. I'd recommend this model with database connection caching: you could probably survive with fewer than 20 connections to the database while keeping thousands of clients "connected". Using the new epoll stuff it's possible to support 10,000*n connections. This is the *Right*(TM) way to do this. Heck, while I'm at it, I may as well rant on. Here are some more possible improvements.
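A minimal sketch of the single-process, event-driven model described above, using Python's stdlib selectors module (which uses epoll on Linux) rather than the C epoll interface the post alludes to. The function names and the echo behaviour are illustrative, not dbmail code; a real mail server would parse protocol commands in handle_client.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    # A new client connected: register it with the same event loop.
    conn, _addr = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle_client)

def handle_client(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(data)  # echo back; a mail server would parse commands here
    else:
        sel.unregister(conn)
        conn.close()

def serve_once():
    # One iteration of the event loop: dispatch every ready socket.
    # One process multiplexes all clients; no thread or process per connection.
    for key, _mask in sel.select(timeout=1.0):
        key.data(key.fileobj)
```

To drive it, register a listening socket with `accept` as its callback and call `serve_once()` in a loop; the same loop could also multiplex a small pool of cached database connections, which is how a handful of backend connections can serve thousands of clients.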

a) dbmail could use transactions (BEGIN/COMMIT blocks); it would be faster on PostgreSQL because constraints need only be checked at commit time.

Hey, how exactly is the dbmail schema slow?
Too many indexes. I think you could achieve the same effect with fewer indexes, and hence fewer index updates.

And how exactly will using transactions make it faster? Please elaborate.

Try a test. Doing a bunch of updates in a single transaction is faster than doing multiple transactions.
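A sketch of the batching pattern being argued for, using Python's stdlib sqlite3 as a stand-in for PostgreSQL (the table name and columns are made up, not the dbmail schema). On PostgreSQL the win is that constraint checks and the commit fsync are paid once per batch instead of once per statement.

```python
import sqlite3

def insert_batch(conn, rows):
    # One BEGIN/COMMIT around the whole batch: the per-transaction
    # overhead is paid once, not once per row.
    with conn:  # sqlite3's context manager wraps the block in one transaction
        conn.executemany("INSERT INTO messages(id, body) VALUES (?, ?)", rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages(id INTEGER PRIMARY KEY, body TEXT)")
insert_batch(conn, [(i, "blah") for i in range(1000)])
```

The autocommit alternative would issue 1000 separate transactions for the same 1000 rows; timing the two variants against a real PostgreSQL instance is the test being suggested above.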

I suppose rewriting the queries might speed things up a little. It's quite possible to wrap some queries in transactions, but I can't see a really good reason why this should be implemented at this point (safe model).

Transactions are safer: if you do multiple updates, you only ever commit complete sets of them.

As to what should be done at this point all depends on how much resources people are willing to devote.



lock <ids>
start trans;
insert_message(id, blahblah) && insert_messageblks(id, blahblah);
commit;

yep.
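The sketch above, spelled out with stdlib sqlite3 standing in for PostgreSQL. The table and column names are illustrative, not the real dbmail schema; the point is that both inserts land together or not at all.

```python
import sqlite3

def store_message(conn, msg_id, body):
    # Both inserts happen inside one transaction: if the second fails,
    # the first is rolled back and no half-stored message is left behind.
    with conn:
        conn.execute("INSERT INTO messages(id) VALUES (?)", (msg_id,))
        conn.execute(
            "INSERT INTO messageblks(message_id, block) VALUES (?, ?)",
            (msg_id, body),
        )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages(id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE messageblks(message_id INTEGER, block TEXT)")
store_message(conn, 1, "blahblah")
```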


b) dbmail could also use stored procedures in Postgresql. This would eliminate a bunch of processing on the server.

Why? It will break portability with other servers like MySQL. It also means all the load gets pushed to the database server rather than the client (the dbmail server). (Does MySQL have stored procedures in 4.0.x?)

Because pgsql procedures automagically prepare statements and hence there is a lot less "planning" going on.
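The "prepare once, execute many" effect can be sketched in client code too, here with stdlib sqlite3 (the table is made up). Keeping the SQL text identical and passing only parameters lets the statement cache reuse the compiled plan, which is what pgsql functions give you automatically on the server side.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mailbox(id INTEGER, owner TEXT)")

# One SQL string, parameterized: the statement is compiled ("planned")
# once and the cached plan is reused on every execution; only the bound
# parameters change from call to call.
stmt = "INSERT INTO mailbox(id, owner) VALUES (?, ?)"
for i in range(100):
    conn.execute(stmt, (i, "gianni"))
conn.commit()
```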

Also, you really don't want the C code to know too much about the schema. (An encapsulation peeve of mine.)


cheers

Likewise.

