On Thu, 23 Aug 2007 21:29:46 +0400, Bill Moran <[EMAIL PROTECTED]> wrote:

In response to "Joshua D. Drake" <[EMAIL PROTECTED]>:

Max Zorloff wrote:
> Hello.
>
> I have the setup from the subject line and a few questions.
>
> The first one is this. PHP establishes a connection to the Postgres
> database through pg_pconnect().

Don't use pconnect. Use pgbouncer or pgpool.
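
For reference, the PHP side barely changes when a pooler sits in
between -- you just point pg_connect() at the pooler's port.  A minimal
sketch, assuming pgbouncer is running locally on its default port 6432
(the database, user and pool settings below are made-up examples, not
taken from this thread):

  ; pgbouncer.ini (illustrative values)
  [databases]
  mydb = host=127.0.0.1 port=5432 dbname=mydb

  [pgbouncer]
  listen_addr = 127.0.0.1
  listen_port = 6432
  auth_type = md5
  auth_file = /etc/pgbouncer/userlist.txt
  pool_mode = transaction
  default_pool_size = 20

  <?php
  // Plain pg_connect() is enough here: pgbouncer keeps the server-side
  // connections open between requests, so pg_pconnect() is not needed.
  // host/port point at pgbouncer, not at PostgreSQL itself.
  $conn = pg_connect('host=127.0.0.1 port=6432 dbname=mydb user=webuser');
  if ($conn === false) {
      die('could not connect through pgbouncer');
  }
  ?>

With transaction pooling the pooler lends a server connection to each
client only for the duration of a transaction, so a small pool can
serve many Apache children.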

> Then it runs some query, then the script returns, leaving the
> persistent connection hanging around. The trouble is that in this case
> any query takes significantly more time to execute than when a single
> PHP script runs the same query with different parameters N times in a
> loop. How can I achieve the same performance in the first case?
> Persistent connections help, but not enough - the queries are still
> 10 times slower than they are from the second run onwards within one
> script.

Well, you haven't given us any indication of the data set or what you
are trying to do. However, I can tell you: don't use pconnect, it's
broke ;)

Broke?  How do you figure?

I'm not trying to argue against the advantages of a connection pooler
such as pgpool, but in my tests pg_pconnect() does exactly what it's
supposed to do: reuse existing connections.  We saw a 2x speed
improvement over pg_connect().  Again, I understand that pgpool will do
even better ...
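
Something along these lines is enough to see the effect -- the
connection string is made up, and the numbers only mean something when
the script is hit repeatedly by the same Apache/PHP process, since that
is where pg_pconnect() caches the connection:

  <?php
  // Time only the connection call.  Hit the script a few times in a
  // row against the same web server process and compare the numbers:
  // the first request pays the full connection cost, later ones get
  // the cached connection back almost instantly.
  $conninfo = 'host=localhost dbname=testdb user=tester';  // made-up

  $t0 = microtime(true);
  $conn = pg_pconnect($conninfo);
  printf("pg_pconnect() took %.2f ms\n", (microtime(true) - $t0) * 1000);
  ?>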

Also, I'm curious as to whether he's timing the actual _query_ or the
entire script execution.  If you're running a script multiple times
to get multiple queries, most of your time is going to be tied up in
PHP's parsing and startup -- unless I misunderstood the question.


I'm timing it with PHP's gettimeofday(), and I'm timing only the actual
pg_query() run time, excluding the DB connection and everything else.
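
Roughly like this, with a stand-in query (the real one isn't shown
here) and the connection opened before the timer starts:

  <?php
  // Only pg_query() is inside the timed section; pg_pconnect() and
  // everything else happen before the timer starts.
  $conn = pg_pconnect('host=localhost dbname=mydb user=myuser');  // not timed

  $t0 = gettimeofday(true);               // float seconds
  $res = pg_query($conn, 'SELECT 1');     // stand-in for the real query
  $t1 = gettimeofday(true);

  printf("pg_query() took %.3f ms\n", ($t1 - $t0) * 1000);
  ?>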
