On Wed, Dec 29, 2010 at 11:11:36AM +0100, Cosimo Streppone wrote:
> Hi there,
> 
> It's been a while since I first read about DBD::Gofer.
> 
> I will soon be reaching the point where we're saturating
> the database with useless, maybe idle, connections,

Umm "useless, maybe idle, connections".  That suggests that you're
connecting from general purpose web servers rather than using (e.g.) a
reverse proxy to route requests that require db access to a few specific
'heavy' web servers, leaving many 'light' servers to handle non-db
requests. [Just checking.]

> - an apache/mod_perl/Plack/PSGI app, around ~500 req/s
>   running on a few backends.
> 
> - on each backend, every child connects to the database.
>   For this particular app, 4 db connections to 2 different dbs are
> required:
>     db1 master (write-only) + db1 slave (read-only)
>   + db2 master (write-only) + db2 slave (read-only)
> - 100 children * 4 = 400 connections per backend at startup

What db? (Idle curiosity.)

> - The majority of use cases are not transactional and
>   90% read/10% write, but I also have a few transactions,
>   and that is going to increase soon.
> 
> I'd like to get some advice related to your experience
> in using DBD::Gofer for connection pooling.

DBD::Gofer doesn't handle transactions.

What's the maximum additional db request latency (in milliseconds)
you're willing to accept?

> I'm thinking of running a per-backend Gofer-based server
> that just pools connections to the DBs.
>
> All the apache/mod_perl/PSGI children would then
> connect to the local Gofer daemon over a unix socket
> or maybe localhost:<someport>.

The key issues are the transactions and the extra db request latency.
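
For reference, the client side of what you describe is just a DSN change: you wrap the real DSN in a dbi:Gofer: one and pick a transport. A rough sketch (hostnames, ports, and the inner DSNs here are placeholders, not a recommendation; check the DBD::Gofer docs for the transport you settle on — note the inner dsn= must come last):

```perl
use DBI;

# The 'null' transport is an in-process loopback. It adds no pooling,
# but it's a cheap way to check the app copes with Gofer's stateless
# restrictions (no transactions, limited $dbh methods) before you
# commit to running a daemon:
my $dbh = DBI->connect(
    "dbi:Gofer:transport=null;dsn=dbi:mysql:db1;host=db1-slave",
    $user, $pass, { RaiseError => 1 },
);

# Against a per-backend Gofer daemon over HTTP on localhost
# (url is a placeholder):
my $dbh2 = DBI->connect(
    "dbi:Gofer:transport=http;url=http://localhost:8001/gofer"
    . ";dsn=dbi:mysql:db1;host=db1-slave",
    $user, $pass, { RaiseError => 1 },
);
```

The null-transport test is worth doing first either way: if the 90%-read paths run unmodified under it, only the transactional 10% needs a direct connection.
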

Tim.