[GENERAL] Followup: Here's why I want to use connection pooling middleware!

2009-01-21 Thread Kirk Strauser
On Thursday 15 January 2009 09:54:50 Kirk Strauser wrote:
 I have a PostgreSQL 8.3.5 server with max_connections = 400.  At this
 moment, I have 223 open connections, including 64 from a bunch of
 webserver processes and about 100 from desktop machines running a
 particular application.  The rest are from various scheduled processes
 and other assorted things.  Now, I know there are projects like
 pgpool-II that can serve to pool connections to the server.  Why would
 I want to do that, though?

After installing and configuring PgBouncer and then pointing all of our clients 
at it, our average number of database connections has dropped from 250+ to 17.  
Query times are also much better, and more RAM is going to caching than to 
holding processes.  Count me as a new fan.
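
For anyone who wants to try the same thing, the heart of the setup is a
pgbouncer.ini along these lines.  This is a minimal sketch rather than my
exact file; the database name, paths, and pool sizes are illustrative:

[databases]
; clients connect to PgBouncer on port 6432 instead of PostgreSQL's 5432
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /usr/local/etc/pgbouncer/userlist.txt
; pool_mode is the big knob: session pooling is the safest, while
; transaction pooling shares far more aggressively but assumes clients
; don't depend on session-level SETs
pool_mode = transaction
default_pool_size = 20
max_client_conn = 400

With a small default_pool_size, a few dozen server connections can absorb
hundreds of mostly-idle clients, which matches the drop we saw.
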
-- 
Kirk Strauser



[GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Kirk Strauser
I have a PostgreSQL 8.3.5 server with max_connections = 400.  At this
moment, I have 223 open connections, including 64 from a bunch of
webserver processes and about 100 from desktop machines running a
particular application.  The rest are from various scheduled processes
and other assorted things.  Now, I know there are projects like
pgpool-II that can serve to pool connections to the server.  Why would
I want to do that, though?


I understand why pooling within a process itself is a good thing.
However, say I have two users running the same program on different
desktop machines.  At present, those applications connect with the
same username/password that's tied to the program and not the actual
user.  It seems like if Abby and Barb end up sharing the same
connection from the pool, and Abby runs some giant report query, then
Barb would be held up waiting for it to finish.  Is that true?  Even
if not, what would be the advantage in the two of them sharing a
connection?


I'm just trying to wrap my head around this.  Thanks!
--
Kirk Strauser







Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Kirk Strauser

On Jan 15, 2009, at 10:08 AM, Tom Lane wrote:

 As an example, any system catalog update has to be broadcast to all
 live backends, and they all have to dutifully search their catalog
 caches to flush stale entries.  That costs the same whether the
 backend is being put to use or has been sitting idle for minutes.


I didn't realize that.  I wasn't sure what types of overheads were  
involved and didn't think about those sorts of things.



 There's no percentage in trying to pool connections from applications
 that are constantly doing something; but webserver sessions tend to
 have lots of user think time as well as similar DB environments, so
 often they can be pooled profitably.



That makes sense.  Along those lines, how do you actually enable  
connection pooling in pgpool-II?  I've RTFM a few times but it doesn't  
seem to have a flag for enable_pooling.  Is num_init_children  
effectively the same as a hypothetical max_children?  If I set it to  
1 and leave max_pool at 4, then clients queue up while one at a  
time gets to connect.
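
In case it helps anyone spot what I'm misunderstanding, the parameters I've
been fiddling with in pgpool.conf look roughly like this (quoted from
memory, so treat the exact values as illustrative):

connection_cache = true   # the actual pooling on/off switch, as far as I can tell
num_init_children = 32    # preforked pgpool children = max concurrent clients
max_pool = 4              # cached backend connections per child,
                          # keyed by user/database pair
child_life_time = 300     # seconds before an idle child is recycled

If max_pool entries really are keyed per user/database pair within each
child, that would go a long way toward explaining the behavior I'm seeing.
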


Sorry, I know this isn't the pgpool-II mailing list.  :-)
--
Kirk Strauser







Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Kirk Strauser

On Jan 15, 2009, at 10:20 AM, Bill Moran wrote:

 I don't believe that's true.  My understanding of pgpool is that it
 will reuse an existing connection if it's free, or open a new one if
 required.



Gah!  It just made it worse!

$ ps auxwww | grep pgpool | grep dbuser | wc -l
  30
$ ps auxwww | grep postgres: | grep dbuser | wc -l
  38

So not only is it not sharing connections among clients, but it's  
keeping old ones open too.  This isn't really what I had in mind.


Also, many of the applications are launched in the morning and open a
connection, get used all day, and are closed at the end of the
afternoon.  I'm starting to wonder if perhaps pgpool-II isn't what I
was looking for.

--
Kirk Strauser







Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Kirk Strauser

On Jan 15, 2009, at 12:30 PM, Steve Crawford wrote:

 But if your application is designed to work well with pooling, it
 can provide dramatic performance benefits.


I think that's the problem.  As I mentioned at one point, a lot of our
applications have connections open for hours at a time and fire off
queries when the user does something.  I'm coming to think that
pooling wouldn't give much benefit to long-lived processes like that.


On a related note, is max_connections=400 reasonably sized for a
server with 8GB of RAM?  Again, most of these connections are dormant
at any given time.  The database is currently hosted on a dual-Xeon
server with 3GB of RAM that also runs other applications, so the new
8-core/8GB hardware is bound to do better in any case.
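
My back-of-the-envelope reasoning, assuming a few MB of private memory per
idle backend, goes something like:

400 backends x ~5MB idle overhead        ~ 2GB
shared_buffers (2GB neighborhood)        ~ 2GB
left over for work_mem and OS cache      ~ 4GB

The part that worries me is the worst case: if even a modest fraction of
those 400 backends ran a big sort at once, the per-sort work_mem would eat
that remainder in a hurry.
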

--
Kirk Strauser







Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Kirk Strauser

On Jan 15, 2009, at 2:39 PM, Bill Moran wrote:


 However, pgpool can't pool connections if each connection has its
 own username.  Not sure what exactly is causing it not to work for
 you, but that was the first thing that came to mind.


The usernames are per-app.  Zope connects with username zope, for
example.  However, any given application might have 30 instances
running at any time.



Are you having a problem?  If so, what is the problem?



Honestly?  Just that so many people are singing the praises of
connection pooling, and I thought I'd better at least see what the
excitement is about.

--
Kirk Strauser







Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Kirk Strauser

On Jan 15, 2009, at 2:54 PM, Steve Crawford wrote:

 If you know that the application does not change GUC variables then
 you will probably benefit greatly by using pgbouncer.


Thanks, Steve!  That's just the kind of pointer I can use.  I've been
using PostgreSQL for years, but I've never really gone far into
low-level optimizations like this.
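
For anyone else following along, my understanding of the GUC caveat is that
plain session-level SETs are the danger when connections are shared, while
SET LOCAL stays transaction-scoped.  A sketch (untested, and heavy_report
is a made-up placeholder):

-- Risky on a shared connection: a plain SET outlives the transaction, so
-- whoever is handed this connection next inherits the setting.
SET search_path = accounting;

-- Safer: SET LOCAL reverts at COMMIT or ROLLBACK, so nothing leaks to the
-- connection's next user.
BEGIN;
SET LOCAL statement_timeout = '5s';
SELECT * FROM heavy_report;  -- heavy_report is a made-up placeholder
COMMIT;
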

--
Kirk Strauser







[GENERAL] FreeBSD and large shared_buffers a no-go?

2009-01-07 Thread Kirk Strauser
I'm running PostgreSQL 8.3.5 on a FreeBSD 7.1/amd64 system with 8GB of
RAM and two quad-core Xeon CPUs.  The data filesystem is on a
battery-backed RAID-10 system.  This is a dedicated server so I want
to commit all resources to PostgreSQL alone.  The database will get
hit with a lot of small, quick queries with a few complex 10-second
ones thrown in, and gets hourly bulk uploads to the tune of a few
million rows.


This is actually an upgrade from older hardware that handled the load  
just fine, so I know this system will perform well.  My biggest  
concern is getting the best performance for my boss's money.


I'm using the default postgresql.conf with the following additions:

max_connections = 400
listen_addresses = '*'
shared_buffers = 2GB
temp_buffers = 32MB
work_mem = 64MB
maintenance_work_mem = 256MB
max_stack_depth = 500MB
max_fsm_pages = 204800
full_page_writes = off
wal_buffers = 1MB
commit_delay = 10
checkpoint_segments = 32
random_page_cost = 2.0
effective_cache_size = 4GB
default_statistics_target = 100
log_connections = on
log_disconnections = on
log_min_duration_statement = 5000
log_statement = 'ddl'

Now, what's confusing me is that I've set shmmax to 3GB and shmall to  
3GB/4096 (the page size):


$ sysctl kern.ipc.shmmax
kern.ipc.shmmax: 3221225472
$ sysctl kern.ipc.shmall
kern.ipc.shmall: 786432
$ sysctl hw.pagesize
hw.pagesize: 4096
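
(For anyone reproducing this: both values can be changed on a live system
with sysctl, and the equivalent /etc/sysctl.conf entries to make them stick
across reboots would be the following.)

kern.ipc.shmmax=3221225472   # 3 * 1024^3 bytes
kern.ipc.shmall=786432       # 3221225472 / 4096-byte pages
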

However, when shared_buffers is 2GB (one fourth of 8GB of RAM),
PostgreSQL's startup fails on the call to allocate shared memory:


Jan  7 11:39:24 db1 postgres[60872]: [1-1] FATAL:  could not create shared memory segment: Cannot allocate memory
Jan  7 11:39:24 db1 postgres[60872]: [1-2] DETAIL:  Failed system call was shmget(key=5432001, size=2209497088, 03600).
Jan  7 11:39:24 db1 postgres[60872]: [1-3] HINT:  This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory or swap space.
Jan  7 11:39:24 db1 postgres[60872]: [1-4]  To reduce the request size (currently 2209497088 bytes), reduce PostgreSQL's shared_buffers parameter (currently 262144) and/or
Jan  7 11:39:24 db1 postgres[60872]: [1-5]  its max_connections parameter (currently 403).
Jan  7 11:39:24 db1 postgres[60872]: [1-6]  The PostgreSQL documentation contains more information about shared memory configuration.


First, shmget is asking for a lot less than shmmax - why is it failing?

Second, does the one fourth of RAM rule of thumb still apply on  
systems with decent amounts of memory?


Third, is there anything else I can be doing to take advantage of this  
RAM and 8 CPU cores?


Thanks!
--
Kirk Strauser





Re: [GENERAL] FreeBSD and large shared_buffers a no-go?

2009-01-07 Thread Kirk Strauser

On Jan 7, 2009, at 12:30 PM, Tom Lane wrote:


 Kirk Strauser k...@strauser.com writes:
  First, shmget is asking for a lot less than shmmax - why is it
  failing?

 Check to see if things work as expected when you have shmmax and
 shmall set to a shade less than 2GB and fail when they are a shade
 more.  If so, it would seem there's a signed-integer-overflow bug
 somewhere in the kernel's handling of shmem requests ... which would
 be a reportable kernel bug.



Actually, it seems that if shmget asks for more than 2GB, it will fail
regardless of shmmax.  When I backed shared_buffers down to 1980MB,
PostgreSQL was able to start as long as shmmax was >= 2GB (which I
verified with 2GB+1MB, 3GB, and 8GB).  That fits the overflow theory:
1980MB of buffers plus the roughly 60MB of other shared structures
(going by the sizes in the log above) keeps the total shmget request
just under 2^31 bytes.  I'm off to file a report now.

--
Kirk Strauser





Re: [GENERAL] FreeBSD and large shared_buffers a no-go?

2009-01-07 Thread Kirk Strauser

On Jan 7, 2009, at 12:30 PM, Tom Lane wrote:


 Kirk Strauser k...@strauser.com writes:
  First, shmget is asking for a lot less than shmmax - why is it
  failing?

 Check to see if things work as expected when you have shmmax and
 shmall set to a shade less than 2GB and fail when they are a shade
 more.  If so, it would seem there's a signed-integer-overflow bug
 somewhere in the kernel's handling of shmem requests ... which would
 be a reportable kernel bug.


BTW, that's at http://www.freebsd.org/cgi/query-pr.cgi?pr=130274 for  
anyone who wants to follow along.

--
Kirk Strauser






Re: [GENERAL] PostgreSQL vs FreeBSD 7.0 as regular user

2008-07-28 Thread Kirk Strauser
On Friday 25 July 2008, Zoltan Boszormenyi wrote:

 is there anyone using PostgreSQL on FreeBSD 7.0 starting from scratch?

Every day.

 I compiled 8.3.3 and wanted to run initdb in my home directory but
 it fails with the error below.

How did you install PostgreSQL?
-- 
Kirk Strauser
Daycos



[GENERAL] PgDBF, Another XBase to PostgreSQL converter

2008-07-24 Thread Kirk Strauser
It'd been a couple of years since I wrote a program to convert some  
random database to PostgreSQL, and it seemed like time to crank out  
another one.  The results, PgDBF, are available under the GPLv3 and  
downloadable from http://honeypot.net/project/pgdbf .


Why yet another program to convert XBase databases in general (and  
FoxPro in particular) to PostgreSQL?  Because the other ones I found  
were incomplete, complex, slow, or not available on Unix-like  
systems.  We needed something that could run hourly to keep our legacy  
database in sync with our new production system, and the program we'd  
been using (XBaseToPg: http://honeypot.net/project/xbasetopg) was a  
hack on another program that was never meant to be abused that way.
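
The hourly sync itself is nothing fancy: PgDBF writes plain SQL to standard
output, so a crontab entry along these lines does the job (the path and
database name here are made up for the example):

# min hour dom mon dow  command
0     *    *   *   *    pgdbf /data/legacy/customers.dbf | psql -q legacy_mirror
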


Seriously, PgDBF is fast and simple.  I profiled it, optimized,  
profiled, optimized, and profiled again until I couldn't find anything  
else to tweak.  And yet its speed comes primarily through its  
simplicity, so it should be very easy to maintain.


Give it a try.  It runs on everything Unixy that I had available for  
testing (including Linux x86, FreeBSD x86-64, and OS X PPC).  I'm  
pretty pleased with how this turned out.

--
Kirk Strauser


