Re: [GENERAL] Query sometimes takes down server

2009-01-15 Thread Jeff Davis
On Thu, 2009-01-15 at 19:37 -0600, Jason Long wrote:
> >   
> > > I have not looked into the details of the EXPLAIN output, but I do see
> > > visually that very different plans are being chosen.
> > > 
> > 
> > It would help to share these plans with us...
> >   
> See EXPLAIN ANALYZE below for three different plans @ .5, 6, and 84
> seconds

Well, it's clear why you're getting a variety of plans. With that many
tables in the FROM list, there are a huge number of possible plans, so
PostgreSQL falls back to a genetic algorithm (which relies on random
numbers) to search for a good one.

My only advice would be to step back for a minute and see if there might
be a better way to solve the original problem. Typically, numbers in
table and field names are a sign something could be better designed.

I may have some better advice if you say why you are joining so many
tables, and what the numbers are for.

Regards,
Jeff Davis


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Initial ugly reverse-translator

2009-01-15 Thread Oleg Bartunov

Hi,

ltree and pg_trgm with UTF8 support are available from CVS HEAD, see:
http://archives.postgresql.org/pgsql-committers/2008-06/msg00356.php
http://archives.postgresql.org/pgsql-committers/2008-11/msg00139.php

Oleg
On Fri, 16 Jan 2009, pepone.onrez wrote:


On Sat, Apr 19, 2008 at 6:10 PM, Oleg Bartunov  wrote:

On Sat, 19 Apr 2008, Tom Lane wrote:


Craig Ringer  writes:


Tom Lane wrote:


I don't really see the problem.  I assume from your reference to pg_trgm
that you're using trigram similarity as the prefilter for potential
matches



It turns out that's no good anyway, as it appears to ignore characters
outside the ASCII range. Rather less than useful for searching a
database of translated strings ;-)


A quick look at the pg_trgm code suggests that it is only prepared to
deal with single-byte encodings; if you're working in UTF8, which I
suppose you'd have to be, it's dead in the water :-(.  Perhaps fixing
that should be on the TODO list.


as well as ltree. they are in our todo list:
http://www.sai.msu.su/~megera/wiki/TODO



Hi Oleg

Your TODO list says that UTF8 support was added to ltree; is this code
currently available for download?

Regards,
José


But in any case maybe the full-text-search stuff would be more useful
as a prefilter?  Although honestly, for the speed we need here, I'm
not sure a prefilter is needed at all.  Full text might be useful
if a LIKE-based match fails, though.


(And besides, speed doesn't seem like the be-all and end-all here.)



True. It's not so much the speed as the fragility when faced with small
changes to formatting. In addition to whitespace, some clients mangle
punctuation with features like automatic "curly"-quoting.


Yeah.  I was wondering whether encoding differences wouldn't be a huge
problem in practice, as well.

   regards, tom lane




   Regards,
   Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general






Regards,
Oleg
_
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Question regarding Postgres + OpenSSL + FIPs

2009-01-15 Thread Dhaval Shah
I am setting up Postgres for OpenSSL + FIPS.

I am compiling Postgres with the OpenSSL FIPS library using the
"--with-openssl" option. The question I have is, does just doing that
suffice? Or do I have to modify the postgres source code?

I read through the OpenSSL FIPS documentation, and it mentions taking
this step as well:

1. FIPS mode initialization via
   a. a direct call to FIPS_mode_set(), or
   b. an indirect call to OPENSSL_config()

With either 1a or 1b, it indicates that I have to modify the postgres
source code [that looks like a fork and local maintenance of the
postgres source code].

Of course I would like to hear that the --with-openssl option takes care
of the above and I just have to compile with that option. If not, which
postgres files should I modify? Is it possible to create a header file
and compile and link it as part of postgres so that when postgres starts
up, it can make either of the above-mentioned calls?

Please advise.

Thanks in advance,
Dhaval Shah

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query sometimes takes down server

2009-01-15 Thread marcin mank
> I have a dynamically built query that will periodically(2 times a day and 
> becoming more frequent) make my server totally unresponsive.

Does this query involve more than geqo_threshold (default 12) tables?
If so, this is most probably geqo (the genetic query optimizer) kicking
in. Try fiddling with some geqo parameters (raising geqo_effort is the
first choice).
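
For illustration, a minimal sketch of the knobs being referred to here (the
GUC names are real; the values and the query are only examples):

  SHOW geqo_threshold;     -- GEQO takes over at this many FROM items (default 12)
  SET geqo_effort = 10;    -- 1..10, default 5; higher spends more planning time
  -- or sidestep GEQO for this session by raising the threshold, at the cost
  -- of longer (but deterministic) exhaustive planning:
  SET geqo_threshold = 20;
  EXPLAIN ANALYZE SELECT count(*) FROM some_table;  -- hypothetical query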

Greetings
Marcin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


HTML email (was: Re: [GENERAL] Vacuum and Reindex hangs)

2009-01-15 Thread Ivan Sergio Borgonovo
On Thu, 15 Jan 2009 12:56:51 -0800
"Joshua D. Drake"  wrote:

> I know many perfectly intelligent people that are better served
> through diagrams, pdf and color than a mailing list. Most of them
> make sure geeks like us, *EAT*.

> Does that mean they are not intelligent or perhaps that their
> talent set is just different?

Inspired by your mail I just wrote a post that explains why jerky
geeks may be fine aesthetes but still have good reasons to
avoid HTML in emails.
I think I followed a slightly different approach from the more
traditional ones.

http://www.webthatworks.it/d1/node/page/are_geeks_rich_media_impaired_or_why_html_emails_are_evil

-- 
Ivan Sergio Borgonovo
http://www.webthatworks.it


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query sometimes takes down server

2009-01-15 Thread Jeff Davis
On Thu, 2009-01-15 at 18:44 -0600, Jason Long wrote:
> The query that hangs the system is requesting a count(*)
> based on some parameters the users selects.

Can you show an example of the full offending query? How big is the
table?

> And maybe 1 in 20 will not complete.

If you really have nothing else running on the server, the queries
should complete.

> I have not looked into the details of the EXPLAIN output, but I do see
> visually that very different plans are being chosen.

It would help to share these plans with us...

Run it several times using EXPLAIN ANALYZE, and send the output of one
that completes quickly, one that takes a while, and preferably wait for
a long-running one to finish. For those that you can't wait for, show
plain EXPLAIN output.
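
For example, something along these lines (the table and filter here are made
up; substitute the real offending query):

  EXPLAIN ANALYZE
  SELECT count(*)
    FROM inventory_item i
    JOIN location l ON l.id = i.location_id
   WHERE i.status = 'IN_STOCK';

For a run you give up on, plain EXPLAIN still prints the chosen plan without
executing the query:

  EXPLAIN
  SELECT count(*)
    FROM inventory_item i
    JOIN location l ON l.id = i.location_id
   WHERE i.status = 'IN_STOCK';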

> How can I see the plan that was chosen when the time is very high?
> Is there a way to print the query plan chosen when the query times out
> as well?
> 

There's a new contrib module (auto_explain) coming in 8.4 that makes this
much easier. Right now, you have to run it yourself with EXPLAIN ANALYZE
as above.

Regards,
Jeff Davis


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Query sometimes takes down server

2009-01-15 Thread Jason Long
I am having a serious problem with my application and I hope someone can 
help me out.
This could not have happened at a worse time, as a consulting firm is at
my client's to recommend a new financial system and the inventory
system (which I developed) keeps locking up.


I have a dynamically built query that will periodically (2 times a day
and becoming more frequent) make my server totally unresponsive.


The query that hangs the system is requesting a count(*)
based on some parameters the user selects.

Since I set my timeout to 2 minutes (the only way I have been able to
deal with this so far), I see the offending query in the log.
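
A query timeout like that is normally configured with the statement_timeout
GUC; a rough sketch (the values are only illustrative):

  SET statement_timeout = 120000;   -- milliseconds; cancel anything over 2 minutes
  -- and in postgresql.conf, logging slow statements makes the offenders visible:
  --   log_min_duration_statement = 10000   -- log anything that runs over 10 s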


I took the query from the logs and pasted it into pgAdmin and ran it a 
few times.


Sometimes it takes 700-900 ms, but other times it takes 60-100 seconds.
Other times it never stops (I waited 10 minutes).

If I run it 10 times I get:

8 runs in less than 1 sec
2 runs in 5-10 sec

And maybe 1 in 20 will not complete.

Dev server specs:
1 CPU Xeon 5472 quad core, 3.0 GHz, 1600 MHz FSB
2x 10k Raptor RAID 1
DB: 50 MB with a lot of table joins

These queries are being run with nothing else running on the server.

My guess is that the planner is picking an inefficient plan sometimes.

I have not looked into the details of the EXPLAIN output, but I do see
visually that very different plans are being chosen.


How can I see the plan that was chosen when the time is very high?
Is there a way to print the query plan chosen when the query times out
as well?






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autovacuum daemon terminated by signal 11

2009-01-15 Thread Justin Pasher

Tom Lane wrote:

I read it like this:

#0  0x0827441d in MemoryContextAlloc () <-- real
#1  0x08274467 in MemoryContextStrdup ()<-- real
#2  0x0826501c in database_getflatfilename ()   <-- real
#3  0x0826504e in database_getflatfilename ()   <-- must be write_database_file
#4  0x08265ec1 in AtEOXact_UpdateFlatFiles ()   <-- real
#5  0x080a9111 in RecordTransactionCommit ()<-- must be CommitTransaction
#6  0x080a93a7 in CommitTransactionCommand ()   <-- real
#7  0x081a6c3b in autovac_stopped ()<-- must be process_whole_db
#8  0x081a75cd in autovac_start ()  <-- real
#9  0x081ae33c in ClosePostmasterPorts ()   <-- must be ServerLoop
#10 0x081af058 in PostmasterMain ()
#11 0x0816b3e2 in main ()

although this requires one or two leaps of faith about single-call
static functions getting inlined so that they don't produce a callstack
entry (in particular that must have happened to AutoVacMain).  In any
case, it's very hard to see how MemoryContextAlloc would dump core
unless the method pointer of the context it was pointed to was
clobbered.  So I'm pretty sure that's what happened, and now we must
work backwards to how it happened.

Justin, it's entirely possible that the only way we'll figure it out
is for a developer to go poking at the entrails.  Are you in a position
to give Alvaro or me ssh access to your test machine?

regards, tom lane
  


I'm currently working on recompiling Postgres with the new configure
parameters. I'm trying to go the easier route by downloading the Debian
source package, adding the new options, compiling, then installing the
package. Hopefully this will give the closest possible binary to the
current one. Incidentally, the --enable-debug option is already set for
the Debian package (I did have to add --enable-cassert though). I'll let
you know whether things work properly once I get it up and running.


As far as access to the machine goes, I'll contact you off-list if I can
work something out for that. The data is not overly sensitive, but it's
still client data nonetheless. I'll try to make a copy of the cluster,
reduce the database count, and see if I can still duplicate the problem.


Thanks.

Justin Pasher

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autovacuum daemon terminated by signal 11

2009-01-15 Thread Tom Lane
Alvaro Herrera  writes:
> Tom Lane wrote:
>> Hmm.  This isn't very trustworthy for lack of debug symbols (what we're
>> probably looking at are the nearest global function names before the
>> actual locations).

> The lack of debug symbols makes this all mere guesses though.  The
> backtrace did not make a lot of sense to me.

I read it like this:

#0  0x0827441d in MemoryContextAlloc () <-- real
#1  0x08274467 in MemoryContextStrdup ()<-- real
#2  0x0826501c in database_getflatfilename ()   <-- real
#3  0x0826504e in database_getflatfilename ()   <-- must be write_database_file
#4  0x08265ec1 in AtEOXact_UpdateFlatFiles ()   <-- real
#5  0x080a9111 in RecordTransactionCommit ()<-- must be CommitTransaction
#6  0x080a93a7 in CommitTransactionCommand ()   <-- real
#7  0x081a6c3b in autovac_stopped ()<-- must be process_whole_db
#8  0x081a75cd in autovac_start ()  <-- real
#9  0x081ae33c in ClosePostmasterPorts ()   <-- must be ServerLoop
#10 0x081af058 in PostmasterMain ()
#11 0x0816b3e2 in main ()

although this requires one or two leaps of faith about single-call
static functions getting inlined so that they don't produce a callstack
entry (in particular that must have happened to AutoVacMain).  In any
case, it's very hard to see how MemoryContextAlloc would dump core
unless the method pointer of the context it was pointed to was
clobbered.  So I'm pretty sure that's what happened, and now we must
work backwards to how it happened.

Justin, it's entirely possible that the only way we'll figure it out
is for a developer to go poking at the entrails.  Are you in a position
to give Alvaro or me ssh access to your test machine?

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autovacuum daemon terminated by signal 11

2009-01-15 Thread Alvaro Herrera
Tom Lane wrote:

> Hmm.  This isn't very trustworthy for lack of debug symbols (what we're
> probably looking at are the nearest global function names before the
> actual locations).  However, it strongly suggests that something is
> broken in the active memory context, and the most likely explanations
> for that are either a memory clobber (eg overrunning the requested size
> of a chunk) or CurrentMemoryContext pointing at a context that was
> already freed.  The latter theory ties into the fact that it seems to be
> happening during transaction end.  But any such bug of either type
> should have been found years ago given that development is invariably
> done with CLOBBER_FREED_MEMORY enabled.
> 
> Alvaro, any thoughts?  Remember this is 8.1.15.

Not really.  It seems like this must be happening in the vicinity of
process_whole_db(), which is a less-used code path than do_autovacuum(),
so it's more likely to have bugs.  I don't see anything obviously wrong
though.

I note that process_whole_db is not changing to AutovacMemCxt the way
do_autovacuum() does, but I don't see any way that this could cause a
problem.

Hmm, vacuum() creates a new memory context under PortalContext, but I
don't see that one set anywhere on the autovacuum path ... is that
bogus?

The lack of debug symbols makes this all mere guesses though.  The
backtrace did not make a lot of sense to me.

-- 
Alvaro Herrera                               http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How good is the default values for autovacuum?

2009-01-15 Thread Scott Marlowe
On Thu, Jan 15, 2009 at 3:49 PM, Bjørn T Johansen  wrote:
> Just wondering if I need to change the default values for autovacuum in
> version 8.3.5?

They're fairly good.  A good way to see if it's working for you is to
let autovacuum run for a few days with your server handling a normal
load, and every night at midnight or so send yourself the last 15 or
so lines of output from vacuum verbose on your database(s).

If it keeps telling you to increase your free space map then either
autovacuum isn't aggressive enough, or your usage pattern simply
requires a larger FSM to keep things from bloating.
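
A rough sketch of that routine (the FSM-related parameters apply to pre-8.4
servers, which still have a fixed-size free space map):

  VACUUM VERBOSE;
  -- The last few lines summarize free space map usage and warn if the number
  -- of page slots needed exceeds max_fsm_pages; if so, raise max_fsm_pages
  -- (and possibly max_fsm_relations) in postgresql.conf and restart.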

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Initial ugly reverse-translator

2009-01-15 Thread pepone . onrez
On Sat, Apr 19, 2008 at 6:10 PM, Oleg Bartunov  wrote:
> On Sat, 19 Apr 2008, Tom Lane wrote:
>
>> Craig Ringer  writes:
>>>
>>> Tom Lane wrote:

 I don't really see the problem.  I assume from your reference to pg_trgm
 that you're using trigram similarity as the prefilter for potential
 matches
>>
>>> It turns out that's no good anyway, as it appears to ignore characters
>>> outside the ASCII range. Rather less than useful for searching a
>>> database of translated strings ;-)
>>
>> A quick look at the pg_trgm code suggests that it is only prepared to
>> deal with single-byte encodings; if you're working in UTF8, which I
>> suppose you'd have to be, it's dead in the water :-(.  Perhaps fixing
>> that should be on the TODO list.
>
> as well as ltree. they are in our todo list:
> http://www.sai.msu.su/~megera/wiki/TODO
>

Hi Oleg

Your TODO list says that UTF8 support was added to ltree; is this code
currently available for download?

Regards,
José
>>
>> But in any case maybe the full-text-search stuff would be more useful
>> as a prefilter?  Although honestly, for the speed we need here, I'm
>> not sure a prefilter is needed at all.  Full text might be useful
>> if a LIKE-based match fails, though.
>>
 (And besides, speed doesn't seem like the be-all and end-all here.)
>>
>>> True. It's not so much the speed as the fragility when faced with small
>>> changes to formatting. In addition to whitespace, some clients mangle
>>> punctuation with features like automatic "curly"-quoting.
>>
>> Yeah.  I was wondering whether encoding differences wouldn't be a huge
>> problem in practice, as well.
>>
>>regards, tom lane
>>
>>
>
>Regards,
>Oleg
> _
> Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
> Sternberg Astronomical Institute, Moscow University, Russia
> Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/
> phone: +007(495)939-16-83, +007(495)939-23-83
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How good is the default values for autovacuum?

2009-01-15 Thread Bjørn T Johansen
Just wondering if I need to change the default values for autovacuum in version
8.3.5?


Regards,

BTJ

-- 
---
Bjørn T Johansen

b...@havleik.no
---
Someone wrote:
"I understand that if you play a Windows CD backwards you hear strange Satanic 
messages"
To which someone replied:
"It's even worse than that; play it forwards and it installs Windows"
---

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autovacuum daemon terminated by signal 11

2009-01-15 Thread Tom Lane
Justin Pasher  writes:
> Program terminated with signal 11, Segmentation fault.
> #0  0x0827441d in MemoryContextAlloc ()
> (gdb) bt
> #0  0x0827441d in MemoryContextAlloc ()
> #1  0x08274467 in MemoryContextStrdup ()
> #2  0x0826501c in database_getflatfilename ()
> #3  0x0826504e in database_getflatfilename ()
> #4  0x08265ec1 in AtEOXact_UpdateFlatFiles ()
> #5  0x080a9111 in RecordTransactionCommit ()
> #6  0x080a93a7 in CommitTransactionCommand ()
> #7  0x081a6c3b in autovac_stopped ()
> #8  0x081a75cd in autovac_start ()
> #9  0x081ae33c in ClosePostmasterPorts ()
> #10 0x081af058 in PostmasterMain ()
> #11 0x0816b3e2 in main ()

Hmm.  This isn't very trustworthy for lack of debug symbols (what we're
probably looking at are the nearest global function names before the
actual locations).  However, it strongly suggests that something is
broken in the active memory context, and the most likely explanations
for that are either a memory clobber (eg overrunning the requested size
of a chunk) or CurrentMemoryContext pointing at a context that was
already freed.  The latter theory ties into the fact that it seems to be
happening during transaction end.  But any such bug of either type
should have been found years ago given that development is invariably
done with CLOBBER_FREED_MEMORY enabled.

Alvaro, any thoughts?  Remember this is 8.1.15.

Justin, if you do feel like recompiling, please do so with original
configure options (run pg_config to verify) and add --enable-cassert
as well as --enable-debug.  That might give us more to work with.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Scott Marlowe
On Thu, Jan 15, 2009 at 2:24 PM, Jason Long
 wrote:
> Scott Marlowe wrote:

> You got me.  I have a set of mirrored raptors.  I am not sure the disk i/o
> subsystem is a bottleneck.
> The whole DB is 50 mb with minimal users.

Then you're only ever writing to the db, and 50 MB is teeny tiny.
Even my laptop can write out 50 MB in about 5 seconds.

> Would a 16-disk SAS RAID-10 really help me that much?

Depends on your usage pattern.  We use a 12 disk one with 15k5
seagates to handle a couple of 30G databases running 2000 to 5000
requests per minute, 97% or so being reads.

> The dataset is small, but contains a complex data structure with many joins
> between tables.
> I would appreciate any advice on the effect of a high end disk setup for my
> case.

Given how small your dataset is, a simple caching RAID controller
should offer enough throughput that you don't need more drives.

> I used to use full vacuum and reindex ever night just before I did a dump
> backup.  Then I started to try the autovacuum.
> The reason for the vacuum now it that I have a dynamic query that sometimes
> brings the server to a grinding halt.

You might be better served by a CLUSTER command than a VACUUM FULL.
It rewrites the table much like a vacuum full, but it's faster,
doesn't bloat the index, and results in a table whose order follows
that of the index you clustered on.  We have a large table whose average
query time went from 5-300 seconds to 0.5-3 seconds because of a
cluster command.  It took 80 minutes to cluster the first time, but it
was well worth it.
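
A minimal sketch of that approach (the table and index names here are
hypothetical; note that CLUSTER takes an exclusive lock while it rewrites
the table):

  -- 8.3+ syntax:
  CLUSTER inventory_item USING inventory_item_status_idx;
  -- pre-8.3 syntax:
  CLUSTER inventory_item_status_idx ON inventory_item;
  ANALYZE inventory_item;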

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autovacuum daemon terminated by signal 11

2009-01-15 Thread Justin Pasher

Tom Lane wrote:

Having debug symbols would be more useful, but unless the binary is
totally stripped, a backtrace might provide enough info without that.
Try it and see if you get any function names in the trace, or only
numbers.

(BTW, does Debian have anything comparable to Red Hat's debuginfo
packages?  You might be able to get the debug symbols without having
to recompile for yourself.  Recompile is a bit of a pain since you have
to take care to match the original compilation options exactly.)

regards, tom lane



Alrighty. Here's what I got (assuming I did this right). My untrained 
eyes see some stuff regarding memory allocation. I wonder if overly 
aggressive memory related tweaks in the config file are causing the 
problem? I don't recall making any changes to the config file within a 
short time period of the problem starting, but let me know if I need to 
post any config settings.



hostname:/var/lib/postgresql/8.1/mc-db2# gdb /usr/lib/postgresql/8.1/bin/postmaster core
GNU gdb 6.4.90-debian
Copyright (C) 2006 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB.  Type "show warranty" for details.
This GDB was configured as "i486-linux-gnu"...(no debugging symbols found)
Using host libthread_db library "/lib/tls/i686/cmov/libthread_db.so.1".

warning: Can't read pathname for load map: Input/output error.
Reading symbols from /lib/libpam.so.0...(no debugging symbols found)...done.
Loaded symbols for /lib/libpam.so.0
Reading symbols from /usr/lib/i686/cmov/libssl.so.0.9.8...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/i686/cmov/libssl.so.0.9.8
Reading symbols from /usr/lib/i686/cmov/libcrypto.so.0.9.8...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/i686/cmov/libcrypto.so.0.9.8
Reading symbols from /usr/lib/libkrb5.so.3...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/libkrb5.so.3
Reading symbols from /lib/libcom_err.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/libcom_err.so.2
Reading symbols from /lib/tls/i686/cmov/libcrypt.so.1...(no debugging symbols found)...done.
Loaded symbols for /lib/tls/i686/cmov/libcrypt.so.1
Reading symbols from /lib/tls/i686/cmov/libdl.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/tls/i686/cmov/libdl.so.2
Reading symbols from /lib/tls/i686/cmov/libm.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib/tls/i686/cmov/libm.so.6
Reading symbols from /lib/tls/i686/cmov/libc.so.6...(no debugging symbols found)...done.
Loaded symbols for /lib/tls/i686/cmov/libc.so.6
Reading symbols from /usr/lib/libz.so.1...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/libz.so.1
Reading symbols from /usr/lib/libk5crypto.so.3...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/libk5crypto.so.3
Reading symbols from /lib/tls/i686/cmov/libresolv.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/tls/i686/cmov/libresolv.so.2
Reading symbols from /lib/ld-linux.so.2...(no debugging symbols found)...done.
Loaded symbols for /lib/ld-linux.so.2
Reading symbols from /usr/lib/libkrb5support.so.0...(no debugging symbols found)...done.
Loaded symbols for /usr/lib/libkrb5support.so.0
(no debugging symbols found)
Core was generated by `postgres: autovacuum process   mc_dairyqueen  '.

Program terminated with signal 11, Segmentation fault.
#0  0x0827441d in MemoryContextAlloc ()
(gdb) bt
#0  0x0827441d in MemoryContextAlloc ()
#1  0x08274467 in MemoryContextStrdup ()
#2  0x0826501c in database_getflatfilename ()
#3  0x0826504e in database_getflatfilename ()
#4  0x08265ec1 in AtEOXact_UpdateFlatFiles ()
#5  0x080a9111 in RecordTransactionCommit ()
#6  0x080a93a7 in CommitTransactionCommand ()
#7  0x081a6c3b in autovac_stopped ()
#8  0x081a75cd in autovac_start ()
#9  0x081ae33c in ClosePostmasterPorts ()
#10 0x081af058 in PostmasterMain ()
#11 0x0816b3e2 in main ()



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autovacuum daemon terminated by signal 11

2009-01-15 Thread Tom Lane
Justin Pasher  writes:
> I'll let you know when I get a chance to get a core dump from the 
> process. I assume I will need a version of Postgres built with debug 
> symbols for it to be useful? I'm not seeing one in the standard Debian 
> repositories, so I might have to compile from source.

Having debug symbols would be more useful, but unless the binary is
totally stripped, a backtrace might provide enough info without that.
Try it and see if you get any function names in the trace, or only
numbers.

(BTW, does Debian have anything comparable to Red Hat's debuginfo
packages?  You might be able to get the debug symbols without having
to recompile for yourself.  Recompile is a bit of a pain since you have
to take care to match the original compilation options exactly.)

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autovacuum daemon terminated by signal 11

2009-01-15 Thread Justin Pasher

Tom Lane wrote:

Justin Pasher  writes:
  

Richard Huxton wrote:


Segmentation fault - probably a bug or bad RAM.
  
It's a relatively new machine, but that's obviously a possibility with 
any hardware. I haven't seen any other programs experiencing problems on 
the box, but the Postgres daemon is the one that is primarily utilized, 
so it's a little biased toward that.



I agree that the behavior seems a bit too specific to be a hardware
issue.

Can you get a stack trace from the crash?  You might need to restart the
postmaster under "ulimit -c unlimited" to get a core dump from the
crashing autovacuum process.

regards, tom lane
  



I'm working on getting the database running on another server so I can 
perform more tests. So far I was able to get a copy of the cluster up 
and running. Once the autovacuum process kicked in, it started 
experiencing the same segfault on the new box. At this point, the 
hardware on the original box no longer seems to be a culprit (assuming 
the data files themselves aren't corrupted and I didn't just bring the 
corruption along with the cluster).


I'll let you know when I get a chance to get a core dump from the 
process. I assume I will need a version of Postgres built with debug 
symbols for it to be useful? I'm not seeing one in the standard Debian 
repositories, so I might have to compile from source.


Justin Pasher

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] MD5 password issue

2009-01-15 Thread Alvaro Herrera
Andreas Wenk wrote:

> Yes, that's correct with the IP address range. Maybe I did not understand
> the auth concept yet. I thought that with METHOD set to md5, an
> md5-hashed password is required. The password is submitted with the PHP 5
> pg_connect function - as plain text.

It is specified to pg_connect as plain text, but it is sent over the
wire md5-hashed.

> So maybe the better question is: what is the difference between METHOD
> password and md5? As I assume now because of your answers, it has
> nothing to do with whether the password is md5-hashed or not?

The difference is what travels on the wire.
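
One way to see this from the server side (the role name below is just an
example; reading pg_authid requires superuser privileges):

  SELECT rolname, rolpassword
    FROM pg_authid
   WHERE rolname = 'pgadmin';
  -- rolpassword normally looks like 'md5<32 hex digits>': the server stores
  -- only the hash, and with the md5 auth method only a hash crosses the wire.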

-- 
Alvaro Herrera                               http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autovacuum daemon terminated by signal 11

2009-01-15 Thread Tom Lane
Justin Pasher  writes:
> Richard Huxton wrote:
>> Segmentation fault - probably a bug or bad RAM.

> It's a relatively new machine, but that's obviously a possibility with 
> any hardware. I haven't seen any other programs experiencing problems on 
> the box, but the Postgres daemon is the one that is primarily utilized, 
> so it's a little biased toward that.

I agree that the behavior seems a bit too specific to be a hardware
issue.

Can you get a stack trace from the crash?  You might need to restart the
postmaster under "ulimit -c unlimited" to get a core dump from the
crashing autovacuum process.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] MD5 password issue

2009-01-15 Thread Andreas Wenk

Hi Tom,

Tom Lane wrote:

Andreas Wenk  writes:

In pg_hba.conf we have:



# TYPE  DATABASE    USER    CIDR-ADDRESS      METHOD

# "local" is for Unix domain socket connections only
local   all         all                       ident sameuser

# IPv4 local connections:
host    all         all     127.0.0.1/32      md5
host    all         all     192.168.97.0/24   md5



Now the question: why is the user pgadmin able to connect to the database
using pgAdmin III from 192.168.97.30? That should not be possible ... or am
I wrong?


Why shouldn't it be possible?  You've specifically allowed connections
from that IP range.


Yes, that's correct with the IP address range. Maybe I did not understand
the auth concept yet. I thought that with METHOD set to md5, an
md5-hashed password is required. The password is submitted with the PHP 5
pg_connect function - as plain text.



(If you're wondering why he didn't have to type his password,
it's likely because pgAdmin is getting it out of ~/.pgpass or
some private settings file.)


>regards, tom lane

Also to Peter. It is like that - the password is stored in ~/.pgpass as
expected.


So maybe the better question is: what is the difference between METHOD
password and md5? As I assume now because of your answers, it has
nothing to do with whether the password is md5-hashed or not?


Thanks to everybody!

cheers

Andy

--

St.Pauli - Hamburg - Germany

Andreas Wenk


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Use PSQLFS for photo storage

2009-01-15 Thread Jason Long

Steven Lembark wrote:

I would like to use PSQLFS (http://www.edlsystems.com/psqlfs/)
to store 100 GB of images in PostgreSQL.

Once they are in there I can deal with them.  My main purpose is to use
rsync to get the files into the database.

Is there a better way to load 20,000 plus files reliably into Postgres?



Don't: put them into a reasonably-organized filesystem
and store the paths. 
This is exactly what I do not want to do.  I do not want to deal with 
the file system at all except for initial import.  I want to manage the 
photos entirely with SQL.

I use dir names of date-venue-descr
and basenames of the dir-image number (e.g.,
"2009.01.12-foo-bar/123-4567.nef"). You'll probably find
that moving the data into a viewer using a path will be
simpler than pumping the raw data through postgres
anyway.
  

Please explain.  What does moving the data into a viewer using a path mean?

At that point you could create COPY statements that look
like a pg_dump output to load the stuff the first time.

  

Please elaborate.
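
One possible reading of that suggestion, as a minimal sketch (the table name
and columns are hypothetical): generate a pg_dump-style script that creates a
table and bulk-loads one row per file with COPY.

  CREATE TABLE photos (
      path  text PRIMARY KEY,   -- original filesystem path
      data  bytea               -- file contents
  );

  -- The generated load section would then look roughly like:
  --   COPY photos (path, data) FROM stdin;
  --   2009.01.12-foo-bar/123-4567.nef <tab> <bytea-escaped file contents>
  --   \.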


  




Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Bill Moran
In response to Kirk Strauser :

> On Jan 15, 2009, at 2:39 PM, Bill Moran wrote:
> 
> > However, pgpool can't pool connections if each connection has its
> > own username.  Not sure what exactly is causing it not to work for
> > you,
> > but that was the first thing that came to mind.
> 
> The usernames are per-app.  Zope connections with username "zope", for  
> example.  However, any given application might have 30 instances  
> running at any time.

You might be hitting up against pgpool being pre-emptive on startup.
i.e., it's establishing a bunch of connections right off the bat
so they're available right away.  If your application actually uses
fewer connections than pgpool maintains, then it's not going to be
a benefit.

> > Are you having a problem?  If so, what is the problem?
> 
> Honestly?  That so many people are singing the praises of connection  
> pooling and I thought I'd better at least see what the excitement is
> about.

Well, it's a good instinct to look into stuff like that.  Especially now
that you've discovered that it's not cut and dry.

Try exercising your application under load to see if pgpool helps.  If
it keeps extra connections open during idle time, that won't really
hurt much, but if it reduces server load under stress, that's worthwhile.

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autovacuum daemon terminated by signal 11

2009-01-15 Thread Justin Pasher

Richard Huxton wrote:

Justin Pasher wrote:
  

Hello,

I have a server running PostgreSQL 8.1.15-0etch1 (Debian etch) that was
recently put into production. Last week a developer started having a problem
with his psql connection being terminated every couple of minutes when he
was running a query. When I look through the logs, I noticed this message.

2009-01-09 08:09:46 CST LOG:  autovacuum process (PID 15012) was terminated
by signal 11



Segmentation fault - probably a bug or bad RAM.
  


It's a relatively new machine, but that's obviously a possibility with 
any hardware. I haven't seen any other programs experiencing problems on 
the box, but the Postgres daemon is the one that is primarily utilized, 
so it's a little biased toward that.



I looked through the logs some more and I noticed that this was occurring
every minute or so. The database is a pretty heavily utilized system
(judging by the age(datfrozenxid) from pg_database, the system had run
approximately 500 million queries in less than a week). I noticed that right
before every autovacuum termination, it tried to autovacuum a database.

2009-01-09 08:09:46 CST LOG:  transaction ID wrap limit is 4563352, limited
by database "database_name"

It was always showing the same database, so I decided to manually vacuum the
database. Once that was done (it was successful the first time without
errors), the problem seemed to go away. I went ahead and manually vacuumed
the remaining databases just to take care of the potential xid wraparound
issue.



I'd be suspicious of possible corruption in autovacuum's internal data.
Can you trace these problems back to a power-outage or system crash? It
doesn't look like "database_name" itself since you vacuumed that
successfully. If autovacuum is running normally now, that might indicate
it was something in the way autovacuum was keeping track of "database_name".
  


The server hasn't been rebooted since it was installed (about 9 months 
ago, but only being utilized within the past month), so there haven't 
been any crashes or power outages. The only abnormal things I can find 
in the Postgres logs are the autovacuum segfaults. Looking in the logs 
today, it looks like it's still happening (once again on a different 
database). I manually vacuumed that one database and the problem went 
away (for now).


Are there any internal Postgres tables I can look at that may shed some 
light on this? Any particular maintenance commands that could be run for 
repair?



It's also probably worth running some memory tests on the server
(memtest86 or similar) to see if that shows anything. Was it *always*
the autovacuum process getting sig11? If not, then it might just be a
pattern of usage that makes it more likely to use some bad RAM.


I might try the memtest if we can actually get the databases off of the
server to allow some downtime. None of the logs indicate anything else
acting abnormally or being terminated abnormally, just the autovacuum
daemon. From what I can tell, the segfaults occur only when the databases
pass the halfway point (when age(datfrozenxid) exceeds around 15).
When this is not the case, the segfaults do not occur according to the logs.
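
For reference, the number being watched here comes from a query along these
lines (standard catalog SQL, not specific to this setup):

  SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
   ORDER BY xid_age DESC;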



Justin Pasher

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Steve Crawford

Kirk Strauser wrote:

...

I understand why pooling within a process itself is a good thing.  
However, say I have two users running the same program on different 
desktop machines.  At present, those applications connect with the 
same username/password that's tied to the program and not the actual 
user.  It seems like if Abby and Barb end up sharing the same 
connection from the pool, and Abby runs some giant report query, then 
Barb would get held back while she waits for it to finish.  Is that 
true?  Even if not, what would be the advantage in the two of them 
sharing a connection?


Short answer: performance.

First you need the correct mental model. The "pool" in pooling is a 
pre-established pool of connections to the database. When a client 
connects to the pooler, the pooler decides, based on its configuration, 
which database connection will receive the statement sent to the pool by 
the client. The pooler can also monitor the connection pool and decide 
when to increase the pool size or release database connections.


Pgbouncer (part of Skype tools) has worked very well for me. It can be 
configured to handle connections in a variety of ways. At one extreme, a 
connection to the database is only used for the duration of a single 
statement then that database connection is available and waiting for the 
next statement. At the other extreme, a connection is assigned and tied 
up for the entire duration that the client is connected to the pooler.


In any case, Barb and Abby will not be sharing a database connection 
*simultaneously* - if they are using their desktop app which is 
connected to the db through the pooler and they are both running queries 
at the same time then each of them will have their own connection to the 
database.


Whether or not you will benefit from pooling depends on the nature of 
your application. If, for example, the application runs queries by 
connecting, running query, disconnecting and those queries are 
long-running and/or infrequent then pooling will probably be of little 
if any benefit.


At the other extreme, say you have a very high volume of short 
single-statement transactions (think web). You start seeing a 
significant impact of the connection setup/teardown time. Even if the 
app connects for each web-hit, connecting to a pooler like pgbouncer is 
much faster than connecting directly to the database (I've done some 
tests showing nearly a factor of 10 improvement in page delivery rates). 
But if the pooler is set to statement-level pooling, you can have 
hundreds of web processes each maintaining a persistent connection to 
the pooler while the pooler only needs to maintain sufficient 
connections for simultaneous statements - generally a tiny fraction of 
the number of web processes.


The pooler typically has a number of other configuration parameters that 
govern things like the maximum amount of time a server connection will 
be used before it is dropped and reconnected as well as how long idle 
database connections hang around before they are dropped.


Beware. You cannot just drop a pooler in place without understanding
the applications that will connect through it. Setting GUC variables, for
example, is a red flag. You could easily create a situation where you
increase work_mem for a specific statement but that statement ends up
assigned to a different database backend or have a client connection be 
unaware of GUC changes made by a previously connected client that was 
assigned to that backend.


But if your application is designed to work well with pooling, it can 
provide dramatic performance benefits.


Cheers,
Steve


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Jason Long

Scott Marlowe wrote:

On Thu, Jan 15, 2009 at 1:28 PM, Jason Long
 wrote:
  

A faster server.
Well, the server is plenty fast.  It has 2 quad core 1600MHz FSB 3.0 GHz Xeon
5472 CPUs and a very light workload.



A few things.

That doesn't make a fast server.  The disk i/o subsystem makes a fast
server.  And you've mentioned nothing of that yet.  If you've got a
16-disk SAS RAID-10 array on a fast RAID controller with battery-backed
cache, you've got a fast database server.

If you've got a single SATA drive or a mirror set of two SATA drives,
you do not have a fast database server.
  
You got me.  I have a set of mirrored raptors.  I am not sure the disk 
i/o subsystem is a bottleneck.

The whole DB is 50 MB with minimal users.
Would a 16-disk SAS RAID-10 really help me that much?
The dataset is small, but contains a complex data structure with many 
joins between tables.
I would appreciate any advice on the effect of a high end disk setup for 
my case.
  

My statement about the time is that it has never taken that long.  Ever.
Not even close.



I wonder if you're getting a lot of bloating in your indexes from the
full vacuums.  Is there a reason you're running full vacuums over
regular vacuums?  While there are quite a few circumstances where full
vacuums are the right answer, most of the time they are not, at least
not on a regular basis.
  
I used to run a full vacuum and reindex every night just before I did a
dump backup.  Then I started to try autovacuum.
The reason for the vacuum now is that I have a dynamic query that
sometimes brings the server to a grinding halt.

This is why I set the timeout to 2 minutes.
Now all I get is users pissed off about 2 times a day, and the problem
corrects itself before they finish complaining to me.


I tried to add some more indexes and then tried to vacuum full and 
reindex the database.

A lot of things can cause your current vacuums to run slow.  Maybe
there's a competing regular autovacuum that's kicked in at the same
time, someone is backing up the database, and so on.
  
Autovacuum could be the case, but I have total control of the database
and no backups are in progress.

As for :

  

However, this is about the most anal list ever.
I see so many emails on here about people complaining regarding the proper way 
to reply
or post to the list.



That's because many of us receive hundreds of emails a week, and if
everyone starts sending html email, using bouncing email addresses, or
sending emails to 5 lists at once, things can get out of hand pretty
quickly.

Since your email agent is sending multi-part mime email with regular
text and html email, there's no real reason to complain, as any agent
worth its salt can be set to show only the text part.  I'm pretty sure
the email archive process also lops off the html part before storing
it.
  

I totally understand and will limit my use of HTML in the future.

Busy lists tend to be anal.  Wanna get a bunch of people mad at once?
Break the rules on the lkml.  We're a bunch of fuzzy little kittens
playing with balls of yarn by comparison.  :)
  




Re: HTML email (was Re: [GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Steve Atkins


On Jan 15, 2009, at 1:02 PM, Scott Marlowe wrote:


On Thu, Jan 15, 2009 at 1:56 PM, Jason Long
 wrote:

Steve Atkins wrote:


I'm sure none of that other than the last actually applies to you, but
those are the expectations you set by using HTML email and then insulting
all the list members when someone asks you to stop. That's not the way to
get useful help from a technical peer support list.


Fair enough and I had no intention of insulting anyone.  Being anal is
not necessarily a bad thing. :)


I certainly wasn't offended.  It seemed like an honest question.


I wasn't either. But while this is a less prickly environment than some
technical mailing lists, it's still pretty prickly, and there's a risk
some people were offended by being described as anal (even if, or perhaps
especially when, it's an accurate description :) ).

I find HTML mail quite useful when I know that all the recipients can
read it - but it tends to harm communication when some of the recipients
can't. In the latter case I find plain text mail, possibly with attached
diagrams or embedded URLs to web docs more useful than inline
HTML. That's technical mailing lists, pretty much.

Cheers,
  Steve


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Kirk Strauser

On Jan 15, 2009, at 2:54 PM, Steve Crawford wrote:

If you know that the application does not change GUC variables then  
you will probably benefit greatly by using pgbouncer.


Thanks, Steve!  That's just the kind of pointer I can use.  I've been
using PostgreSQL for years but I've never really gone far into low-level
optimizations like this.

--
Kirk Strauser





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Scott Marlowe
On Thu, Jan 15, 2009 at 1:26 PM, Kirk Strauser  wrote:
> On Jan 15, 2009, at 12:30 PM, Steve Crawford wrote:
>
>> But if your application is designed to work well with pooling, it can
>> provide dramatic performance benefits.
>
> I think that's the problem.  As I mentioned at one point, a lot of our
> applications have connections open for hours at a time and fire off queries
> when the user does something.  I'm coming to think that pooling wouldn't
> give much benefit to long-living processes like that.

As mentioned, this is exactly what pooling is good for.  Imagine 200
users each accessing the database once every hour.  If on average you
have two or three users actually hitting the database, then you could
handle that many users with only 10 or 20 database connections.  The
apps each have a persistent connection up to the pool service, which
then routes the active sessions through a connection pool so the db
only thinks it has 20 or fewer users.

> On a related note, is max_connections=400 reasonably sized for a server with
> 8GB of RAM?  Again, most of these are dormant at any given time.  The
> database itself is currently hosted on a  dual Xeon server with 3GB of RAM
> and other applications so I'm sure the new 8-core/8GB hardware is bound to
> do better at any rate.

That depends very much on how many of those are ever active, or could
become active.  When things get ugly is when everyone at the company
needs to log in at once.  Suddenly, the 390 connections out of the 400
that were idle go active, and the server grinds to a crawl under the
load.  A connection pool would simple hold the connection in a wait
state until one of the 20 or so live connections to the db became
available, preventing server meltdown.
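
A quick way to check how many of those connections are actually busy at a
given instant on an 8.x-era server (idle backends report '<IDLE>' in
current_query):

  SELECT count(*) AS total_connections,
         sum(CASE WHEN current_query <> '<IDLE>' THEN 1 ELSE 0 END) AS active
    FROM pg_stat_activity;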

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: HTML email (was Re: [GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Scott Marlowe
On Thu, Jan 15, 2009 at 1:56 PM, Jason Long
 wrote:
> Steve Atkins wrote:
>>
>> I'm sure none of that other than the last actually applies to you, but
>> those are
>> the expectations you set by using HTML email and then insulting all
>> the list members when someone asks you to stop. That's not the way to get
>> useful help from a technical peer support list.
>
> Fair enough and I had no intention of insulting anyone.  Being anal is not
> necessarily a bad thing. :)

I certainly wasn't offended.  It seemed like an honest question.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Joshua D. Drake
On Thu, 2009-01-15 at 20:39 +, Grzegorz Jaśkiewicz wrote:
> and we also oppose answering on top of the message, and citing
> everything underneath.
> Why? Because your words should say what you mean, not show it by its
> look. Hence, plain ascii is enough for us - and should be for every
> intelligent human being.

Well now that we are completely off topic :). Let me just say that it is
exactly the type of thinking above that makes Open Source people seem
like jerks.

I know many perfectly intelligent people that are better served through
diagrams, pdf and color than a mailing list. Most of them make sure
geeks like us, *EAT*.

Does that mean they are not intelligent or perhaps that their talent set
is just different?

Sincerely,

Joshua D. Drake

-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: HTML email (was Re: [GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Jason Long

Steve Atkins wrote:


On Jan 15, 2009, at 12:32 PM, Jason Long wrote:

I don't mean to be a pain either and I mean no disrespect to anyone 
on this list in the following comments.


However, this is about the most anal list ever.
I see so many emails on here about people complaining regarding the 
proper way to reply or post to the list.


I used a larger font to point out my statement from the code.  I
also did not realize it appeared that large to you.


My res is 2560X1600 so it didn't look that large.

I apologize.

Just out of curiosity, why are you so opposed to HTML in an email?


There are technical reasons. One of them is that most clients sending
html mail are actually sending multipart/alternative mail with an html
part and a plain text part generated mechanically from the html part.
People using plain text mail clients (and there tend to be a lot of them
on technical lists) will see the plain text part only. That's fine for
some sorts of email, but leads to tears when someone insists that they've
highlighted the problem in red or bold or whatever, and half the
recipients are reading the plain text version.

Also, HTML mail tends to not use standard email quoting, meaning that
it tends to discard context about who said what, which makes it very
difficult to follow discussions. And it often plays hell with list
digests and archives.

There are also social reasons - it tends to be used by people who don't
realize how it looks when received by the recipient, and who don't care.
It's generally a sign of someone who has little experience of normal
technical mailing list etiquette or polite online behaviour (such as
following community norms).

It also correlates strongly with people whose behaviour is antisocial in
other respects (not so much use of html per se as use of large font sizes,
colours and suchlike, which are perceived by most recipients as SHOUTING,
or vehement defense of html email).

And it tends to derail threads into discussions like this, which is
always bad.

I'm sure none of that other than the last actually applies to you, but
those are the expectations you set by using HTML email and then insulting
all the list members when someone asks you to stop. That's not the way to
get useful help from a technical peer support list.
Fair enough and I had no intention of insulting anyone.  Being anal is 
not necessarily a bad thing. :)


Cheers,
  Steve




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Kirk Strauser

On Jan 15, 2009, at 2:39 PM, Bill Moran wrote:


However, pgpool can't pool connections if each connection has its
own username.  Not sure what exactly is causing it not to work for
you,

but that was the first thing that came to mind.


The usernames are per-app.  Zope connections with username "zope", for  
example.  However, any given application might have 30 instances  
running at any time.



Are you having a problem?  If so, what is the problem?



Honestly?  That so many people are singing the praises of connection  
pooling and I thought I'd better at least see what the excitement is
about.

--
Kirk Strauser





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Steve Crawford

Kirk Strauser wrote:

On Jan 15, 2009, at 12:30 PM, Steve Crawford wrote:

But if your application is designed to work well with pooling, it can 
provide dramatic performance benefits.


I think that's the problem.  As I mentioned at one point, a lot of our 
applications have connections open for hours at a time and fire off 
queries when the user does something.  I'm coming to think that 
pooling wouldn't give much benefit to long-living processes like that.




If you know that the application does not change GUC variables then you 
will probably benefit greatly by using pgbouncer. If all the queries are 
single-statements then set pool_mode=statement. If you have 
multiple-statement transactions then configure pgbouncer to use 
pool_mode=transaction. Either way, your app won't tie up a back-end 
connection when it is sitting idle. You will probably find that you can 
handle your hundreds of clients with a pretty small pool of backend 
connections. Pgbouncer will give you some nice statistics to help you 
adjust the pool sizing and such.
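
For what it's worth, a minimal pgbouncer.ini sketch of that setup might look
like the following (the database name, paths and pool size here are made-up
placeholders, not anything from your environment):

[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; use pool_mode = statement if everything is single-statement queries
pool_mode = transaction
default_pool_size = 20

Then point the applications at port 6432 instead of 5432, and use SHOW POOLS
and SHOW STATS on pgbouncer's admin console to size the pool.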


On a related note, is max_connections=400 reasonably sized for a 
server with 8GB of RAM?  Again, most of these are dormant at any given 
time.  The database itself is currently hosted on a  dual Xeon server 
with 3GB of RAM and other applications so I'm sure the new 8-core/8GB 
hardware is bound to do better at any rate.


Too little info (and others here can answer better anyway). But I think 
you should test pooling and find out how many you really need before 
jumping into tuning. I haven't tried Pgpool* but have found pgbouncer to 
be easy-to-use, reliable and effective.


Cheers,
Steve


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


HTML email (was Re: [GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Steve Atkins


On Jan 15, 2009, at 12:32 PM, Jason Long wrote:

I don't mean to be a pain either and I mean no disrespect to anyone
on this list in the following comments.

However, this is about the most anal list ever.
I see so many emails on here about people complaining regarding the
proper way to reply or post to the list.

I used larger font to point out my statement from the code.  I also
did not realize it appeared that large to you.

My res is 2560X1600 so it didn't look that large.

I apologize.

Just out of curiosity, why are you so opposed to HTML in an email?


There are technical reasons. One of them is that most clients sending
html mail are actually sending multipart/alternative mail with an html
part and a plain text part generated mechanically from the html part.
People using plain text mail clients (and there tend to be a lot of them
on technical lists) will see the plain text part only. That's fine for
some sorts of email, but leads to tears when someone insists that they've
highlighted the problem in red or bold or whatever, and half the
recipients are reading the plain text version.

Also, HTML mail tends to not use standard email quoting, meaning that
it tends to discard context about who said what, which makes it very
difficult to follow discussions. And it often plays hell with list
digests and archives.

There are also social reasons - it tends to be used by people who don't
realize how it looks when received by the recipient, and who don't care.
It's generally a sign of someone who has little experience of normal
technical mailing list etiquette or polite online behaviour (such as
following community norms).

It also correlates strongly with people whose behaviour is antisocial in
other respects (not so much use of html per se as use of large font
sizes, colours and suchlike, which are perceived by most recipients as
SHOUTING, or vehement defense of html email).

And it tends to derail threads into discussions like this, which is
always bad.

I'm sure none of that other than the last actually applies to you, but
those are the expectations you set by using HTML email and then insulting
all the list members when someone asks you to stop. That's not the way
to get useful help from a technical peer support list.

Cheers,
  Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Raymond O'Donnell
On 15/01/2009 20:44, Scott Marlowe wrote:

> We're a bunch of fuzzy little kittens
> playing with balls of yarn by comparison.  :)

Now *there's* an image! :-)

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Raymond O'Donnell
On 15/01/2009 20:32, Jason Long wrote:

> However, this is about the most anal list ever. I see so many emails on
> here about people complaining regarding the proper way to reply or post
> to the list.

Well, as someone else has just pointed out, it's all about readability
and making your words easy to understand.

However, let's not re-open old flame wars - this has been raked over
*many* times in the past.

> *Just out of curiosity, why are you so opposed to HTML in an email?*

I'm not opposed to HTML email, as long as it's easy to understand and
looks reasonably OK.

Ray.


--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Scott Marlowe
On Thu, Jan 15, 2009 at 1:28 PM, Jason Long
 wrote:
> A faster server.
> Well the sever is plenty fast.  It has 2 quad core 1600MHz FSB 3.0 GHz Xeon
> 5472 CPUs and a very light workload.

A few things.

That doesn't make a fast server.  The disk i/o subsystem makes a fast
server.  And you've mentioned nothing of that yet.  If you've got a
16-disk SAS RAID-10 array on a fast RAID controller with battery-backed
cache, you've got a fast database server.

If you've got a single SATA drive or a mirror set of two SATA drives,
you do not have a fast database server.

> My statement about the time is that it has never taken that long.  Ever.
> Not even close.

I wonder if you're getting a lot of bloating in your indexes from the
full vacuums.  Is there a reason you're running full vacuums over
regular vacuums?  While there are quite a few circumstances where full
vacuums are the right answer, most of the time they are not, at least
not on a regular basis.

A lot of things can cause your current vacuums to run slow.  Maybe
there's a competing regular autovacuum that's kicked in at the same
time, someone is backing up the database, and so on.

As for :

> However, this is about the most anal list ever.
> I see so many emails on here about people complaining regarding the proper 
> way to reply
> or post to the list.

That's because many of us receive hundreds of emails a week, and if
everyone starts sending html email, using bouncing email addresses, or
sending emails to 5 lists at once, things can get out of hand pretty
quickly.

Since your email agent is sending multi-part mime email with regular
text and html email, there's no real reason to complain, as any agent
worth its salt can be set to show only the text part.  I'm pretty sure
the email archive process also lops off the html part before storing
it.

Busy lists tend to be anal.  Wanna get a bunch of people mad at once?
Break the rules on the lkml.  We're a bunch of fuzzy little kittens
playing with balls of yarn by comparison.  :)

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Joshua D. Drake
On Thu, 2009-01-15 at 14:32 -0600, Jason Long wrote:
> I don't mean to be a pain either and I mean no disrespect to anyone on
> this list in the following comments.
> 
> However, this is about the most anal list ever.  

You haven't been to the debian list have you? :).

> I see so many emails on here about people complaining regarding the
> proper way to reply or post to the list.

The basic gist is this. We have to be anal otherwise there is no
conformity to a standard (even if the standard is arbitrary). Without
that conformity it is impossible to be productive on the list.
> 
> I used larger font to point out my statement from the code.  I
> also did not realize it appeared that large to you.

Right but just as a tip that is really not needed on a technical list.
Your words should be able to convey your meaning without the requirement
of a cosmetic modification. If it can't it would be better to post to a
pastebin or static web page and link a URL.

Just FYI, most on this list probably never saw your HTML. They, like I
said, have already turned off HTML email.
> 
> Just out of curiosity, why are you so opposed to HTML in an email?

http://www.american.edu/econ/notes/htmlmail.htm
http://www.georgedillon.com/web/html_email_is_evil.shtml

Sincerely,

Joshua D. Drake



-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Grzegorz Jaśkiewicz
And we are also opposed to answering on top of the message and citing
everything underneath.
Why? Because your words should say what you mean, not show it by their
look. Hence, plain ASCII is enough for us - and should be for every
intelligent human being.


On Thu, Jan 15, 2009 at 8:32 PM, Jason Long
 wrote:
> I don't mean to be a pain either and I mean no disrespect to anyone on this
> list in the following comments.
>
> However, this is about the most anal list ever.
> I see so many emails on here about people complaining regarding the proper
> way to reply or post to the list.
>
> I used larger font to point out my statement from the code.  I also
> did not realize it appeared that large to you.
>
> My res is 2560X1600 so it didn't look that large.
>
> I apologize.
>
> Just out of curiosity, why are you so opposed to HTML in an email?
>
> Raymond O'Donnell wrote:
>
> On 15/01/2009 20:06, Jason Long wrote:
>
> I am attempting to vacuum...[snip]
>
> I don't mean to be a pain, but could you please avoid HUGE type sizes
> such as the aboveor better still, avoid using HTML altogether in
> your emails to this list.
>
> It makes it look as if you are not just shouting, but SCREAMING at the
> top of your lungs! :-)
>
> Thanks in advance.
>
> Ray.
>
> --
> Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
> r...@iol.ie
> Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
> --
>
>



-- 
GJ

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Bill Moran
In response to Kirk Strauser :

> On Jan 15, 2009, at 12:30 PM, Steve Crawford wrote:
> 
> > But if your application is designed to work well with pooling, it  
> > can provide dramatic performance benefits.
> 
> I think that's the problem.  As I mentioned at one point, a lot of our  
> applications have connections open for hours at a time and fire off  
> queries when the user does something.  I'm coming to think that  
> pooling wouldn't give much benefit to long-living processes like that.

Actually, that's exactly the kind of app that should benefit from
pooling.

However, pgpool can't pool connections if each connection has its
own username.  Not sure what exactly is causing it not to work for you,
but that was the first thing that came to mind.

> On a related note, is max_connections=400 reasonably sized for a  
> server with 8GB of RAM?  Again, most of these are dormant at any given  
> time.  The database itself is currently hosted on a  dual Xeon server  
> with 3GB of RAM and other applications so I'm sure the new 8-core/8GB  
> hardware is bound to do better at any rate.

It all depends.  We had servers with 300 - 400 connections that did
just fine.  We've looked at pgpool, but deemed it not worth the
added complexity.  Quite honestly, why attempt to solve problems that
don't even exist?

Are you having a problem?  If so, what is the problem?

-- 
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Jason Long
I don't mean to be a pain either and I mean no disrespect to anyone on 
this list in the following comments.


However, this is about the most anal list ever. 
I see so many emails on here about people complaining regarding the 
proper way to reply or post to the list.


I used larger font to point out my statement from the code.  I
also did not realize it appeared that large to you.


My res is 2560X1600 so it didn't look that large.

I apologize.

*Just out of curiosity, why are you so opposed to HTML in an email?*

Raymond O'Donnell wrote:

On 15/01/2009 20:06, Jason Long wrote:

I am attempting to vacuum...[snip]

I don't mean to be a pain, but could you please avoid HUGE type sizes
such as the aboveor better still, avoid using HTML altogether in
your emails to this list.

It makes it look as if you are not just shouting, but SCREAMING at the
top of your lungs! :-)

Thanks in advance.

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--
  




Re: [GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Jason Long

Alan Hodgson wrote:
On Thursday 15 January 2009, Jason Long  
wrote:
  

*I am attempting to vacuum and reindex my database.  It keeps timing
out.  See commands and last part of output below.  The vacuum or reindex
only takes a short time to complete normally because the database is
less than 50 MB.  I have the query timeout set to 2 minutes, but I do
not know if this can affect vacuumdb or reindexdb and the time should
take much less than 2 minutes anyway.



Obviously it is affecting it and it is taking longer than 2 minutes 
regardless of how long you think it "should take". Set it higher for these 
tasks or buy a faster server. 

  
A faster server.
Well, the server is plenty fast.  It has 2 quad core 1600MHz FSB 3.0 GHz
Xeon 5472 CPUs and a very light workload.


My statement about the time is that it has never taken that long.  
Ever.  Not even close.


I will increase the setting and see if that works.




Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Kirk Strauser

On Jan 15, 2009, at 12:30 PM, Steve Crawford wrote:

But if your application is designed to work well with pooling, it  
can provide dramatic performance benefits.


I think that's the problem.  As I mentioned at one point, a lot of our  
applications have connections open for hours at a time and fire off  
queries when the user does something.  I'm coming to think that  
pooling wouldn't give much benefit to long-living processes like that.


On a related note, is max_connections=400 reasonably sized for a  
server with 8GB of RAM?  Again, most of these are dormant at any given  
time.  The database itself is currently hosted on a  dual Xeon server  
with 3GB of RAM and other applications so I'm sure the new 8-core/8GB  
hardware is bound to do better at any rate.

--
Kirk Strauser





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Raymond O'Donnell
On 15/01/2009 20:21, Joshua D. Drake wrote:
> On Thu, 2009-01-15 at 20:13 +, Raymond O'Donnell wrote:
>> I don't mean to be a pain, but could you please avoid HUGE type sizes
>> such as the aboveor better still, avoid using HTML altogether in
>> your emails to this list.

> The answer to this is to not allow HTML email at all to your client. It
> is the first thing I disable on any family/friend/church member that
> asks for help.

Indeed - 'tis now done, and you won't hear any more complaints from
me. I still think I made a fair point, though. :-)

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Joshua D. Drake
On Thu, 2009-01-15 at 20:13 +, Raymond O'Donnell wrote:
> On 15/01/2009 20:06, Jason Long wrote:
> 
> I am attempting to vacuum...[snip]
> 
> I don't mean to be a pain, but could you please avoid HUGE type sizes
> such as the aboveor better still, avoid using HTML altogether in
> your emails to this list.
> 
> It makes it look as if you are not just shouting, but SCREAMING at the
> top of your lungs! :-)

The answer to this is to not allow HTML email at all to your client. It
is the first thing I disable on any family/friend/church member that
asks for help.

Joshua D. Drake


-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Alan Hodgson
On Thursday 15 January 2009, Jason Long  
wrote:
> *I am attempting to vacuum and reindex my database.  It keeps timing
> out.  See commands and last part of output below.  The vacuum or reindex
> only takes a short time to complete normally because the database is
> less than 50 MB.  I have the query timeout set to 2 minutes, but I do
> not know if this can affect vacuumdb or reindexdb and the time should
> take much less than 2 minutes anyway.

Obviously it is affecting it and it is taking longer than 2 minutes 
regardless of how long you think it "should take". Set it higher for these 
tasks or buy a faster server. 

-- 
Current Peeve: The mindset that the Internet is some sort of school for
novice sysadmins and that everyone -not- doing stupid dangerous things
should act like patient teachers with the ones who are. -- Bill Cole, NANAE 

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Joshua D. Drake
On Thu, 2009-01-15 at 14:06 -0600, Jason Long wrote:
> I am attempting to vacuum and reindex my database.  It keeps timing
> out.  See commands and last part of output below.  The vacuum or
> reindex only takes a short time to complete normally because the
> database is less than 50 MB.  I have the query timeout set to 2
> minutes, but I do not know if this can affect vacuumdb or reindexdb
> and the time should take much less than 2 minutes anyway.
> 
> I am using 8.3.3 on Centos 5.

You have statement_timeout set. You can use SET in psql to change
this:

SET statement_timeout TO ...
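
For example (zero simply disables the timeout for that session, so the
maintenance can run to completion; the database name is taken from the
original report):

  -- in an interactive psql session, before running VACUUM/REINDEX by hand
  SET statement_timeout = 0;

  # or pass it through libpq for the command-line tools
  PGOPTIONS='-c statement_timeout=0' vacuumdb -f -v -z -e -d pipetracker-live
  PGOPTIONS='-c statement_timeout=0' reindexdb -d pipetracker-live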

Joshua D. Drake

-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Raymond O'Donnell
On 15/01/2009 20:06, Jason Long wrote:

I am attempting to vacuum...[snip]

I don't mean to be a pain, but could you please avoid HUGE type sizes
such as the aboveor better still, avoid using HTML altogether in
your emails to this list.

It makes it look as if you are not just shouting, but SCREAMING at the
top of your lungs! :-)

Thanks in advance.

Ray.

--
Raymond O'Donnell, Director of Music, Galway Cathedral, Ireland
r...@iol.ie
Galway Cathedral Recitals: http://www.galwaycathedral.org/recitals
--

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Vacuum and Reindex hangs

2009-01-15 Thread Jason Long
*I am attempting to vacuum and reindex my database.  It keeps timing
out.  See commands and last part of output below.  The vacuum or reindex
only takes a short time to complete normally because the database is
less than 50 MB.  I have the query timeout set to 2 minutes, but I do
not know if this can affect vacuumdb or reindexdb and the time should
take much less than 2 minutes anyway.


I am using 8.3.3 on Centos 5.

Any advice would be greatly appreciated.

vacuumdb -f -v -z -e -d pipetracker-live*

INFO:  analyzing "general.t_access"
INFO:  "t_access": scanned 3000 of 20186 pages, containing 100449 live 
rows and 0 dead rows; 3000 rows in sample, 675888 estimated total rows

INFO:  vacuuming "information_schema.sql_parts"
INFO:  "sql_parts": found 0 removable, 9 nonremovable row versions in 1 
pages

DETAIL:  0 dead row versions cannot be removed yet.
Nonremovable row versions range from 56 to 96 bytes long.
There were 0 unused item pointers.
Total free space (including removable row versions) is 7500 bytes.
0 pages are or will become empty, including 0 at the end of the table.
1 pages containing 7500 free bytes are potential move destinations.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  "sql_parts": moved 0 row versions, truncated 1 to 1 pages
DETAIL:  CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  vacuuming "pg_toast.pg_toast_11449"
INFO:  "pg_toast_11449": found 0 removable, 0 nonremovable row versions 
in 0 pages

DETAIL:  0 dead row versions cannot be removed yet.
Nonremovable row versions range from 0 to 0 bytes long.
There were 0 unused item pointers.
Total free space (including removable row versions) is 0 bytes.
0 pages are or will become empty, including 0 at the end of the table.
0 pages containing 0 free bytes are potential move destinations.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  index "pg_toast_11449_index" now contains 0 row versions in 1 pages
DETAIL:  0 index pages have been deleted, 0 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.00 sec.
INFO:  analyzing "information_schema.sql_parts"
INFO:  "sql_parts": scanned 1 of 1 pages, containing 9 live rows and 0 
dead rows; 9 rows in sample, 9 estimated total rows
vacuumdb: vacuuming of database "pipetracker-live" failed: ERROR:  
canceling statement due to statement timeout


*reindexdb -d pipetracker-live*

NOTICE:  table "pg_class" was reindexed
NOTICE:  table "pg_type" was reindexed
NOTICE:  table "sql_features" was reindexed
NOTICE:  table "sql_implementation_info" was reindexed
NOTICE:  table "sql_languages" was reindexed
NOTICE:  table "pg_statistic" was reindexed
NOTICE:  table "sql_packages" was reindexed
NOTICE:  table "t_access" was reindexed
NOTICE:  table "sql_parts" was reindexed
reindexdb: reindexing of database "pipetracker-live" failed: ERROR:  
canceling statement due to statement timeout





Re: [GENERAL] MD5 password issue

2009-01-15 Thread Andreas Wenk



Alvaro Herrera schrieb:

Andreas Wenk wrote:

Yes, that's correct with the IP address range. Maybe I did not understand
the auth concept yet. I thought that with METHOD set to md5, an md5-hashed
password is required. The password is submitted with the PHP 5 pg_connect
function - as plain text.


It is specified to pg_connect as plain text, but it is sent over the
wire md5-hashed.

So maybe the better question is: what is the difference between METHOD
password and md5? As I assume now because of your answers, it has nothing
to do with whether the password is md5-hashed or not?


The difference is what travels on the wire.



ok thanks - I think I got it now ... ;-)

Cheers

Andy

--

St.Pauli - Hamburg - Germany

Andreas Wenk


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query question

2009-01-15 Thread mailinglists
> On Wed, Jan 14, 2009 at 07:36:03PM -0800, mailingli...@net-virtual.com
> wrote:
>> CREATE TABLE listings (
>>   trans_id  SERIAL,
>>   mode CHAR(1),
>>   listing_id INT,
>>   region_id INT,
>>   category INT
>> );
>>
>> "SELECT * FROM listings ORDER BY region_id, category, listing_id,
>> trans_id"
>
>> [...] what I want to do is get only the last transaction for
>> a given listing_id, because the earlier ones don't matter.
>
> If you have an index on (region_id,category,listing_id,trans_id) you
> should be able to do:
>
>   SELECT region_id,category,listing_id,MAX(trans_id)
>   FROM listings
>   GROUP BY region_id,category,listing_id;
>
> And have PG answer this using the index (it'll only do this if it thinks
> there are many transactions for each group though).
>
>> On top of
>> that, each region_id and category_id has its own index.  I need to be
>> able
>> to process the indexes in-full, one-at-a-time because there are too many
>> to hold that many open filehandles/processes at one time.
>
> Not sure what you mean by "index" here; I'm assuming you're talking
> about something outside PG, or am I missing some context?

Yes, sorry I wasn't clear...  This is to keep track of incremental updates
to an index outside of Postgres.

>> So, my question is, is there some way to return the rows in a
>> deterministic order, without actually having to do an explicit sort on
>> the
>> data?  What I mean is, I don't care if category_id 4 / region_id 10 /
>> listing_id 1 comes before category_id 1 / region_id 1 / listing_id 1
>> -- I just need them returned to me in that sort of grouped order
>> (although
>> sorted by trans_id).
>
> If you want to know all the transaction ids then you need to do the
> sort, if you only want the largest/latest then you're probably better
> off telling PG that's what you want (i.e. using GROUP BY and MAX
> aggregate and letting it make an appropriate decision).
>

I only need to know the max id, but the problem is that each region_id
has N categories and I need to process each one individually.  The query
you suggested does, indeed, sort them by category, but it does not sort
them by region_id.

Here's an example: (select region_id,category,listing_id,max(trans_id)
from listings_pending where region_id IN('313400', '00') 
group by region_id,category,listing_id;)

 region_id | category_id | listing_id |  max
-----------+-------------+------------+--------
 00        |           1 |    2221473 | 640799
 00        |           1 |    2426142 | 845468
 00        |           1 |    2103599 | 522925
 313400    |           1 |    2146326 | 565652
 00        |           1 |    2462112 | 881438
 00        |           1 |    1947690 | 367016
 00        |           1 |    2526731 | 946057
 00        |           1 |    2217864 | 637190
 00        |           1 |    2288420 | 707746

As you can see, at transaction 565652, I would close the index (which is
very expensive, because it actually has to do a "merge" of the newly
created index with the old one), then reopen it at transaction 881438.
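
One way to get the grouped ordering described above is simply to add an
ORDER BY on the grouping columns - a sketch based on the query above (the
planner may still choose an explicit sort if it can't satisfy this from an
index):

select region_id, category, listing_id, max(trans_id)
  from listings_pending
 where region_id in ('313400', '00')
 group by region_id, category, listing_id
 order by region_id, category, max(trans_id);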

Thanks for your help!

- Greg




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autovacuum daemon terminated by signal 11

2009-01-15 Thread Alvaro Herrera
>> Justin Pasher wrote:

> Are there any internal Postgres tables I can look at that may shed some  
> light on this? Any particular maintenance commands that could be run for  
> repair?

Please obtain a backtrace from the core file.  If there's no core file,
please set "ulimit -c unlimited" in the Postgres start script and
restart it so that it is allowed to produce one.
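
For example, something along these lines once a core file exists (the
binary and core file paths below are placeholders for wherever they live
on your system):

  $ gdb /usr/local/pgsql/bin/postgres /path/to/core
  (gdb) bt full
  (gdb) quit

and post the "bt full" output.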

-- 
Alvaro Herrera        http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] MD5 password issue

2009-01-15 Thread Andreas Wenk

Hi Joshua

Joshua D. Drake schrieb:

On Thu, 2009-01-15 at 18:05 +0100, Andreas Wenk wrote:


postgres=# SELECT rolname,rolpassword from pg_authid;
  rolname  | rolpassword
-----------+---------------
 postgres  |
 pgadmin   | plaintext
 odie  | md5passsorrrd

The user odie was created with:
CREATE ROLE odie LOGIN ENCRYPTED PASSWORD 'feedme';

The user pgadmin was created with:
$ createuser -a -d -P -N -U postgres pgadmin



Per the help. You need to pass -E to have it be an encrypted (md5 hash)
style password. 


Sure - I know; we added -N so that the password is not encrypted

What version of PostgreSQL is this? As I recall, all newer
versions do this by default.


this was made with an 8.1 version ...


Usage:
  createuser [OPTION]... [ROLENAME]

Options:
  -s, --superuser           role will be superuser
  -S, --no-superuser        role will not be superuser
  -d, --createdb            role can create new databases
  -D, --no-createdb         role cannot create databases
  -r, --createrole          role can create new roles
  -R, --no-createrole       role cannot create roles
  -l, --login               role can login (default)
  -L, --no-login            role cannot login
  -i, --inherit             role inherits privileges of roles it is a
                            member of (default)
  -I, --no-inherit          role does not inherit privileges
  -c, --connection-limit=N  connection limit for role (default: no limit)
  -P, --pwprompt            assign a password to new role
  -E, --encrypted           encrypt stored password
  -N, --unencrypted         do not encrypt stored password
  -e, --echo                show the commands being sent to the server
  --help                    show this help, then exit
  --version                 output version information, then exit

Connection options:
  -h, --host=HOSTNAME       database server host or socket directory
  -p, --port=PORT           database server port
  -U, --username=USERNAME   user name to connect as (not the one to create)
  -W, --password            force password prompt

If one of -s, -S, -d, -D, -r, -R and ROLENAME is not specified, you will
be prompted interactively.


Joshua D. Drake




--

St.Pauli - Hamburg - Germany

Andreas Wenk


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] fastest way to upgrade from 8.2 to 8.3

2009-01-15 Thread Scott Marlowe
On Thu, Jan 15, 2009 at 11:02 AM, Jeremy Kister
 wrote:
> I've got two 300GB databases that I'm going to be upgrading from 8.2.4 (32
> bit) to 8.3.5 (64 bit).
>
> The systems are running on Solaris 10u5 64bit with lots of disks in a zfs
> raid10 and have 32GB ram.
>
> I've read lots of docs and Google, and found a special flavor of
> postgresql.conf that helps the import speed along, but the dump + import
> process still takes about 15 hours.

If you're looking to minimize downtime and switch over quickly, look
at Slony; assuming you have the disk space / I/O bandwidth to do it
that way, the actual downtime can be measured in minutes and seconds.
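
If you do stay with dump-and-reload, one small win is to pipe the dump from
the old cluster straight into the new one (ideally running the newer 8.3
pg_dump against the 8.2 server), so nothing is written to an intermediate
file and the restore starts immediately. A rough sketch, with host and
database names as placeholders:

  createdb -h newhost -U postgres mydb
  pg_dump -h oldhost -U postgres mydb | psql -h newhost -U postgres mydb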

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] fastest way to upgrade from 8.2 to 8.3

2009-01-15 Thread Jeremy Kister
I've got two 300GB databases that I'm going to be upgrading from 8.2.4 (32 
bit) to 8.3.5 (64 bit).


The systems are running on Solaris 10u5 64bit with lots of disks in a zfs 
raid10 and have 32GB ram.


I've read lots of docs and Google, and found a special flavor of 
postgresql.conf that helps the import speed along, but the dump + import 
process still takes about 15 hours.


here are the parameters that i've found help in postgresql.conf during an 
import:

max_connections = 10
shared_buffers = 32MB
work_mem = 2048MB
maintenance_work_mem = 2048MB
max_fsm_pages = 1024000
fsync = off
wal_buffers = 1MB
commit_delay = 1
checkpoint_segments = 128
effective_cache_size = 10GB
autovacuum = off


Can anyone give me tips to get from 8.2 to 8.3 faster ?



--

Jeremy Kister
http://jeremy.kister.net./

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Kirk Strauser

On Jan 15, 2009, at 10:20 AM, Bill Moran wrote:

I don't believe that's true.  My understanding of pgpool is that it  
will
reuse an existing connection if it's free, or open a new one if  
required.



Gah!  It just made it worse!

$ ps auxwww | grep pgpool | grep dbuser | wc -l
  30
$ ps auxwww | grep postgres: | grep dbuser | wc -l
  38

So not only is it not sharing connections among clients, but it's  
keeping old ones open too.  This isn't really what I had in mind.


Also, many of the applications are launched in the morning and open a  
connection, then get used all day, then closed at the end of the  
afternoon.  I'm starting to wonder if perhaps pgpool-II isn't what I  
was looking for.

--
Kirk Strauser





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] MD5 password issue

2009-01-15 Thread Tom Lane
Andreas Wenk  writes:
> In pg_hba.conf we have:

> # TYPE  DATABASEUSERCIDR-ADDRESS  METHOD

> # "local" is for Unix domain socket connections only
> local   all all   ident sameuser

> # IPv4 local connections:
> hostall all 127.0.0.1/32  md5
> hostall all 192.168.97.0/24   md5

> Now the question: why is the user pgadmin able to connect to the database
> using pgAdmin III from 192.168.97.30? That should not be possible ... or
> am I wrong?

Why shouldn't it be possible?  You've specifically allowed connections
from that IP range.

(If you're wondering why he didn't have to type his password,
it's likely because pgAdmin is getting it out of ~/.pgpass or
some private settings file.)
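
(For reference, each line of ~/.pgpass has the form
host:port:database:username:password, with * allowed as a wildcard; e.g.,
with made-up values:

  dbserver:5432:*:pgadmin:secret

and the file must be mode 0600 or libpq will ignore it.)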

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] MD5 password issue

2009-01-15 Thread Joshua D. Drake
On Thu, 2009-01-15 at 18:05 +0100, Andreas Wenk wrote:

> postgres=# SELECT rolname,rolpassword from pg_authid;
>   rolname  | rolpassword
> -----------+---------------
>  postgres  |
>  pgadmin   | plaintext
>  odie  | md5passsorrrd
> 
> The user odie was created with:
> CREATE ROLE odie LOGIN ENCRYPTED PASSWORD 'feedme';
> 
> The user pgadmin was created with:
> $ createuser -a -d -P -N -U postgres pgadmin
> 

Per the help. You need to pass -E to have it be an encrypted (md5 hash)
style password. What version of PostgreSQL is this? As I recall, all newer
versions do this by default.

Usage:
  createuser [OPTION]... [ROLENAME]

Options:
  -s, --superuser           role will be superuser
  -S, --no-superuser        role will not be superuser
  -d, --createdb            role can create new databases
  -D, --no-createdb         role cannot create databases
  -r, --createrole          role can create new roles
  -R, --no-createrole       role cannot create roles
  -l, --login               role can login (default)
  -L, --no-login            role cannot login
  -i, --inherit             role inherits privileges of roles it is a
                            member of (default)
  -I, --no-inherit          role does not inherit privileges
  -c, --connection-limit=N  connection limit for role (default: no limit)
  -P, --pwprompt            assign a password to new role
  -E, --encrypted           encrypt stored password
  -N, --unencrypted         do not encrypt stored password
  -e, --echo                show the commands being sent to the server
  --help                    show this help, then exit
  --version                 output version information, then exit

Connection options:
  -h, --host=HOSTNAME       database server host or socket directory
  -p, --port=PORT           database server port
  -U, --username=USERNAME   user name to connect as (not the one to create)
  -W, --password            force password prompt

If one of -s, -S, -d, -D, -r, -R and ROLENAME is not specified, you will
be prompted interactively.
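
Concretely, either re-create the role passing -E instead of -N, or just
re-set the password on the existing role so it gets stored as an md5 hash -
for example (the password value is obviously a placeholder):

  $ createuser -a -d -P -E -U postgres pgadmin

  -- or, from psql as a superuser:
  ALTER ROLE pgadmin WITH ENCRYPTED PASSWORD 'secret';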


Joshua D. Drake


-- 
PostgreSQL - XMPP: jdr...@jabber.postgresql.org
   Consulting, Development, Support, Training
   503-667-4564 - http://www.commandprompt.com/
   The PostgreSQL Company, serving since 1997


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Sam Mason
On Thu, Jan 15, 2009 at 11:57:13AM -0500, Guy Rouillier wrote:
> Connections are pooled on the client end, not on the server end.  So, 
> you'd be able to pool connections on your web server, and should, for 
> reasons documented by others.  However, since Abby and Barb are using 
> different computers, you won't achieve anything by introducing pooling 
> into your desktop application.

Connection pooling can function anywhere; you could use it client side
(like the connection pooling built into, say, PHP) where what you say is
true.  You can also use something like pgpool on another box (maybe the
database server itself) and point all clients at it.

-- 
  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] MD5 password issue

2009-01-15 Thread Andreas Wenk
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi everybody,

I posted this already to the ADMIN list but received no reply (which is
for sure OK in a way ;-) ). So I thought I'll give it a try here. Sorry
for any inconvenience.

We are trying to understand an issue concerning the md5 password encryption. 
The situation
is as follows.

In pg_hba.conf we have:

# TYPE  DATABASEUSERCIDR-ADDRESS  METHOD

# "local" is for Unix domain socket connections only
local   all all   ident sameuser

# IPv4 local connections:
hostall all 127.0.0.1/32  md5
hostall all 192.168.97.0/24   md5

in pg_authid we get:

postgres=# SELECT rolname,rolpassword from pg_authid;
  rolname  | rolpassword
- -----------+---------------
 postgres  |
 pgadmin   | plaintext
 odie  | md5passsorrrd

The user odie was created with:
CREATE ROLE odie LOGIN ENCRYPTED PASSWORD 'feedme';

The user pgadmin was created with:
$ createuser -a -d -P -N -U postgres pgadmin

The -N parameter forces the password not to be encrypted - which we can
see as a result in pg_authid (whether this makes sense or not is another
question ;-) ).

Now the question: why is the user pgadmin able to connect to the database
using pgAdmin III from 192.168.97.30? That should not be possible ... or
am I wrong?

Thanks for any advice

Cheers

Andy

- --
St.Pauli - Hamburg - Germany

Andreas Wenk

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.6 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFJb2y+Va7znmSP9AwRAmGlAKCaingauIMGRvIqAqMBVdiBfhkoXwCeM1kR
M/fZSYeJKq9tMe791MhN2J8=
=V7hS
-END PGP SIGNATURE-

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Mac ordering with locales

2009-01-15 Thread David Blewett
On Thu, Feb 21, 2008 at 12:01 PM, Tom Lane  wrote:
> Pascal Cohen  writes:
>> The fact is that works on Linux and win but under Mac I always get the
>> ordering with 'default' C locale (I displayed all the lc_* and all are
>> right set)
>
> Yeah, this has been complained of before, eg here
> http://archives.postgresql.org/pgsql-general/2005-11/msg00047.php
> and here
> http://archives.postgresql.org/pgsql-general/2004-04/msg00564.php
>
> It seems to be a deficiency in Apple's locale support.  The second
> message is interesting since it indicates that "smart" sorting is
> available somewhere/somehow under OS X, but nobody here knows how
> to get at it :-(
>
>regards, tom lane

Here's a "me too" report from Possible (Robert Ivens) on IRC:
http://forum.servoy.com/viewtopic.php?f=4&t=11802


David Blewett

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Guy Rouillier

Kirk Strauser wrote:

I understand why pooling within a process itself is a good thing.  
However, say I have two users running the same program on different 
desktop machines.  At present, those applications connect with the same 
username/password that's tied to the program and not the actual user.  
It seems like if Abby and Barb end up sharing the same connection from 
the pool, and Abby runs some giant report query, then Barb would get 
held back while she waits for it to finish.  Is that true?  Even if not, 
what would be the advantage in the two of them sharing a connection?


Connections are pooled on the client end, not on the server end.  So, 
you'd be able to pool connections on your web server, and should, for 
reasons documented by others.  However, since Abby and Barb are using 
different computers, you won't achieve anything by introducing pooling 
into your desktop application.


--
Guy Rouillier

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Kirk Strauser

On Jan 15, 2009, at 10:08 AM, Tom Lane wrote:

 As an example, any system catalog update has to be broadcast to all  
live backends, and they all have to dutifully search their catalog  
caches to flush stale entries.  That costs the same whether the  
backend is being put to use or has been sitting idle for minutes.


I didn't realize that.  I wasn't sure what types of overheads were  
involved and didn't think about those sorts of things.



There's no percentage in trying to pool connections from applications
that are constantly doing something; but webserver sessions tend to  
have

lots of user "think time" as well as similar DB environments, so often
they can be pooled profitably.



That makes sense.  Along those lines, how do you actually enable  
connection pooling in pgpool-II?  I've RTFM a few times but it doesn't  
seem to have a flag for "enable_pooling".  Is "num_init_children"  
effectively the same as a hypothetical "max_children?"  If I set it to  
"1" and leave "max_pool" at "4", then clients queue up while one at a  
time gets to connect.


Sorry, I know this isn't the pgpool-II mailing list.  :-)
--
Kirk Strauser





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Bill Moran
Kirk Strauser  wrote:

[snip]

> I understand why pooling within a process itself is a good thing.   
> However, say I have two users running the same program on different  
> desktop machines.  At present, those applications connect with the  
> same username/password that's tied to the program and not the actual  
> user.  It seems like if Abby and Barb end up sharing the same  
> connection from the pool, and Abby runs some giant report query, then  
> Barb would get held back while she waits for it to finish.  Is that  
> true?

I don't believe that's true.  My understanding of pgpool is that it will
reuse an existing connection if it's free, or open a new one if required.

-- 
Bill Moran
http://www.potentialtech.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Christopher Browne
On Thu, Jan 15, 2009 at 10:54 AM, Kirk Strauser  wrote:
> I have a PostgreSQL 8.3.5 server with max_connections = 400.  At this
> moment, I have 223 open connections, including 64 from a bunch of webserver
> processes and about 100 from desktop machines running a particular
> application.  The rest are from various scheduled processes and other
> assorted things.  Now, I know there are projects like pgpool-II that can
> serve to pool connections to the server.  Why would I want to do that,
> though?
>
> I understand why pooling within a process itself is a good thing.  However,
> say I have two users running the same program on different desktop machines.
>  At present, those applications connect with the same username/password
> that's tied to the program and not the actual user.  It seems like if Abby
> and Barb end up sharing the same connection from the pool, and Abby runs
> some giant report query, then Barb would get held back while she waits for
> it to finish.  Is that true?  Even if not, what would be the advantage in
> the two of them sharing a connection?

There tend to be three effects that provide benefits:

1.  Fewer connections tend to consume less resources on the DBMS server.

Each connection consumes some resources, memory, generates lock
entries, and such, and having fewer connections means that the
aggregate size of the postmaster processes is likely to be smaller.

2.  Pooling connections should mean that you can use and re-use
connections, which should reduce the amount of work done building up
and tearing down connections.

Each PostgreSQL connection is handled by a separate OS process; if the
connection pool is passing the same connection from user to user, your
system is doing less work spawning backend processes, doing
authentication, and otherwise getting from fork() to "ready to handle
queries."

3.  There is only so much *genuine* concurrency that you can actually
get out of your DB server, and there is only limited value to having
more backend processes than this "emergent quantity."

For instance, if you only have a single CPU and a single disk drive,
then your computer is only ever *truly* doing one thing at a time.
Trying to make such a server service 200 connections, each trying to
do work, means that this server will be doing a great deal of work
switching from process to process, doing the context switches.

That's an extreme point, of course, but it should be reasonably
intuitive to consider that...
- If you have 10 CPUs and a RAID array of 10 disk drives, then that
host can likely cope comfortably with doing ~10 things at once;
- Change those numbers to 20/20 and the intuition continues.

If some of your 200 connections are only intermittently used, then if
you had a connection pool with 20 "real" connections, then the 200
users would seldom notice delays due to sharing.  And the connection
pool usage would mean that the DB server would have way fewer
processes kicking around consuming memory.

You might well be better off using the memory those extra 180 backends
would consume for shared cache :-).
-- 
http://linuxfinances.info/info/linuxdistributions.html
Joe E. Lewis  - "There's only one thing money won't buy, and that is poverty."

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Alvaro Herrera
Tom Lane wrote:
> Kirk Strauser  writes:
> > I have a PostgreSQL 8.3.5 server with max_connections = 400.  At this  
> > moment, I have 223 open connections, including 64 from a bunch of  
> > webserver processes and about 100 from desktop machines running a  
> > particular application.  The rest are from various scheduled processes  
> > and other assorted things.  Now, I know there are projects like pgpool- 
> > II that can serve to pool connections to the server.  Why would I want  
> > to do that, though?
> 
> Idle backends eat resources that would be better spent elsewhere.
> (I'm assuming that the majority of those 223 backends aren't actually
> doing anything at any one instant.)  As an example, any system catalog
> update has to be broadcast to all live backends, and they all have to
> dutifully search their catalog caches to flush stale entries.  That costs
> the same whether the backend is being put to use or has been sitting
> idle for minutes.

Also, memory wasted in per-process memory is memory not used for caches.

-- 
Alvaro Herrera        http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Tom Lane
Kirk Strauser  writes:
> I have a PostgreSQL 8.3.5 server with max_connections = 400.  At this  
> moment, I have 223 open connections, including 64 from a bunch of  
> webserver processes and about 100 from desktop machines running a  
> particular application.  The rest are from various scheduled processes  
> and other assorted things.  Now, I know there are projects like pgpool- 
> II that can serve to pool connections to the server.  Why would I want  
> to do that, though?

Idle backends eat resources that would be better spent elsewhere.
(I'm assuming that the majority of those 223 backends aren't actually
doing anything at any one instant.)  As an example, any system catalog
update has to be broadcast to all live backends, and they all have to
dutifully search their catalog caches to flush stale entries.  That costs
the same whether the backend is being put to use or has been sitting
idle for minutes.

There's no percentage in trying to pool connections from applications
that are constantly doing something; but webserver sessions tend to have
lots of user "think time" as well as similar DB environments, so often
they can be pooled profitably.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Why would I want to use connection pooling middleware?

2009-01-15 Thread Kirk Strauser
I have a PostgreSQL 8.3.5 server with max_connections = 400.  At this  
moment, I have 223 open connections, including 64 from a bunch of  
webserver processes and about 100 from desktop machines running a  
particular application.  The rest are from various scheduled processes  
and other assorted things.  Now, I know there are projects like pgpool- 
II that can serve to pool connections to the server.  Why would I want  
to do that, though?


I understand why pooling within a process itself is a good thing.   
However, say I have two users running the same program on different  
desktop machines.  At present, those applications connect with the  
same username/password that's tied to the program and not the actual  
user.  It seems like if Abby and Barb end up sharing the same  
connection from the pool, and Abby runs some giant report query, then  
Barb would get held back while she waits for it to finish.  Is that  
true?  Even if not, what would be the advantage in the two of them  
sharing a connection?


I'm just trying to wrap my head around this.  Thanks!
--
Kirk Strauser





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Question regarding new windowing functions in 8.4devel

2009-01-15 Thread A. Kretschmer
Hi,

First, many thanks to all for the great work; I'm waiting for 8.4.


I have played with the new possibilities:

test=# select typ, ts, rank() over (partition by typ order by ts desc )  from 
foo;
 typ |              ts               | rank
-----+-------------------------------+------
   1 | 2009-01-15 13:03:57.667631+01 |    1
   1 | 2009-01-15 13:03:56.554659+01 |    2
   1 | 2009-01-15 13:03:55.694803+01 |    3
   1 | 2009-01-15 13:03:54.816871+01 |    4
   1 | 2009-01-15 13:03:53.521454+01 |    5
   2 | 2009-01-15 13:04:02.223655+01 |    1
   2 | 2009-01-15 13:04:01.30692+01  |    2
   2 | 2009-01-15 13:04:00.05923+01  |    3
   3 | 2009-01-15 13:04:14.27154+01  |    1
   3 | 2009-01-15 13:04:05.395805+01 |    2
   3 | 2009-01-15 13:04:04.365645+01 |    3
   4 | 2009-01-15 13:04:11.54897+01  |    1
   4 | 2009-01-15 13:04:10.778115+01 |    2
   4 | 2009-01-15 13:04:10.013001+01 |    3
   4 | 2009-01-15 13:04:09.324396+01 |    4
   4 | 2009-01-15 13:04:08.523507+01 |    5
   4 | 2009-01-15 13:04:07.375874+01 |    6
(17 rows)


Okay, fine.

Now i want only 3 records for every typ:

test=# select typ, ts, rank() over (partition by typ order by ts desc )  from 
foo where rank <= 3;
ERROR:  column "rank" does not exist
LINE 1: ...rtition by typ order by ts desc )  from foo where rank <= 3;


Okay, next try:

test=# select typ, ts, rank() over (partition by typ order by ts desc )  from 
foo where rank() over (partition by typ order by ts desc ) <= 3;
ERROR:  window functions not allowed in WHERE clause
LINE 1: ...rtition by typ order by ts desc )  from foo where rank() ove...



Ouch.


I found a way with a subselect:

test=# select * from (select typ, ts, rank() over (partition by typ order by ts 
desc )  from foo) bla where rank <= 3;
 typ |              ts               | rank
-----+-------------------------------+------
   1 | 2009-01-15 13:03:57.667631+01 |    1
   1 | 2009-01-15 13:03:56.554659+01 |    2
   1 | 2009-01-15 13:03:55.694803+01 |    3
   2 | 2009-01-15 13:04:02.223655+01 |    1
   2 | 2009-01-15 13:04:01.30692+01  |    2
   2 | 2009-01-15 13:04:00.05923+01  |    3
   3 | 2009-01-15 13:04:14.27154+01  |    1
   3 | 2009-01-15 13:04:05.395805+01 |    2
   3 | 2009-01-15 13:04:04.365645+01 |    3
   4 | 2009-01-15 13:04:11.54897+01  |    1
   4 | 2009-01-15 13:04:10.778115+01 |    2
   4 | 2009-01-15 13:04:10.013001+01 |    3
(12 rows)


Is there a better way to do that?


(current 8.4devel, today compiled)
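
For what it's worth, if WITH queries are available in the same 8.4devel
build, a CTE spelling of the same thing (just a sketch, equivalent to the
subselect above) may read a little more clearly:

with ranked as (
  select typ, ts,
         rank() over (partition by typ order by ts desc) as r
  from foo
)
select typ, ts, r from ranked where r <= 3;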

Regards, Andreas
-- 
Andreas Kretschmer
Kontakt:  Heynitz: 035242/47150,   D1: 0160/7141639 (mehr: -> Header)
GnuPG-ID:   0x3FFF606C, privat 0x7F4584DA   http://wwwkeys.de.pgp.net

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Polymorphic "setof record" function?

2009-01-15 Thread Merlin Moncure
On Thu, Jan 15, 2009 at 4:57 AM, Christian Schröder  wrote:
> Merlin Moncure wrote:
>>> them.
>>>  I need something like:
>>>   select * from myfunc('mytable') as x(like mytable)
>>>  or
>>>   select * from myfunc('mytable') as x(mytable%TYPE)
>>>
>>>  Is there any solution for PostgreSQL 8.2?
>>>
>>
>> Unfortunately to the best of my knowledge there is no way to do this.
>>  I think what you want is to have sql functions that specialize on
>> type in the way that templates do in C++.
>>
>
> That would certainly be the best solution, but I would also be happy with
> some syntactic sugar: The function may still be declared as returning a set
> of records, so that I would still have to declare their actual return type
> in the query. However, I would like to have an easy way to express: "the
> record will have the same structure as table x".

There is a circuitous way to do this that sometimes works.  Declare
your function to return text and do this inside the function body (for
example):

create or replace function func() returns text as
$$
  select foo::text from foo limit 5;
$$ language sql;

select func::foo from (select func()) q;

Couple of notes here:
*) obviously, the idea here is to use dynamic-sql to return different
table types based on inputs
*) can only upcast to one table per function call (but can return
varying record types if the result is left as text)
*) record::text casts I think were introduced in 8.3.  There is a more
complex way to do it in 8.2 that is probably not worth the effort.
*) record::text casts are not really reflexive.  null fields are an
issue, for example.
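
An untested sketch of the dynamic-SQL side of that trick, for what it's
worth (the function name rows_as_text is made up here, and the ::foo cast
at the end relies on the text-to-row conversion described above):

create or replace function rows_as_text(tbl text) returns setof text as $$
declare
  r text;
begin
  -- build the query at runtime so any table name can be passed in
  for r in execute 'select t::text from ' || quote_ident(tbl) || ' t limit 5'
  loop
    return next r;
  end loop;
  return;
end;
$$ language plpgsql;

-- pick the concrete row type at call time:
select rows_as_text::foo from (select rows_as_text('foo')) q;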

merlin

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Select CASE when null ?

2009-01-15 Thread Sam Mason
On Wed, Jan 14, 2009 at 03:56:25PM -0500, Mark Styles wrote:
> SELECT COALESCE(mid,0) AS mid, COALESCE(id_group,0) AS id_group
> FROM users
> WHERE username = 'test'
> UNION
> SELECT 0, 0
> WHERE NOT EXISTS (SELECT 1 FROM users WHERE username = 'test');

An alternative using outer joins would be:

  SELECT COALESCE(mid,0) AS mid, COALESCE(id_group,0) AS id_group
  FROM (SELECT 1) x
LEFT JOIN users ON username = 'test';

Unions tend to preclude various optimisations so I'd tend to stay away
from them where possible.  This query will also perform only one
index scan of users, rather than two.

-- 

  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query question

2009-01-15 Thread Sam Mason
On Wed, Jan 14, 2009 at 07:36:03PM -0800, mailingli...@net-virtual.com wrote:
> CREATE TABLE listings (
>   trans_id  SERIAL,
>   mode CHAR(1),
>   listing_id INT,
>   region_id INT,
>   category INT
> );
> 
> "SELECT * FROM listings ORDER BY region_id, category, listing_id,
> trans_id"

> [...] what I want to do is get only the last transaction for
> a given listing_id, because the earlier ones don't matter.

If you have an index on (region_id,category,listing_id,trans_id) you
should be able to do:

  SELECT region_id,category,listing_id,MAX(trans_id)
  FROM listings
  GROUP BY region_id,category,listing_id;

And have PG answer this using the index (it'll only do this if it thinks
there are many transactions for each group though).
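
For reference, the index being described could be created like this:

  -- the index name here is just illustrative
  CREATE INDEX listings_region_cat_listing_trans_idx
    ON listings (region_id, category, listing_id, trans_id);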

> On top of
> that, each region_id and category_id has its own index.  I need to be able
> to process the indexes in-full, one-at-a-time because there are too many
> to hold that many open filehandles/processes at one time.

Not sure what you mean by "index" here; I'm assuming you're talking
about something outside PG, or am I missing some context?

If it's inside PG, then you do know that every index you have will slow
down every modification of the table?

> So, my question is, is there some way to return the rows in a
> deterministic order, without actually having to do an explicit sort on the
> data?  What I mean is, I don't care if category_id 4 / region_id 10 /
> listing_id 1 comes before category_id 1 / region_id 1 / listing_id 1
> -- I just need them returned to me in that sort of grouped order (although
> sorted by trans_id).

If you want to know all the transaction ids then you need to do the
sort; if you only want the largest/latest then you're probably better
off telling PG that's what you want (i.e. using a GROUP BY and the MAX
aggregate and letting it make an appropriate decision).

-- 

  Sam  http://samason.me.uk/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] function to return both table row and varchar

2009-01-15 Thread Kenneth Lundin
Sorry, I should have RTFM (!!!). I found it under 4.2.4 Field selection.
Apparently it works just as I want, but I should have put parentheses around
the row name, like this:

> select (resulting_row).name, result from verify_record(1234);
  name  | result
--------+--------
 "Test" | "OK"

I also discovered you can do a
> select result,(resulting_row).* from verify_record(1234);
to combine both results into a single returned row if needed... sweet!
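
A view like the one asked about in the original mail could then be written,
for example (a sketch only; the view name is made up and logbook is assumed
to have a name column, as in the examples above):

CREATE VIEW verify_record_1234_v AS
  SELECT (resulting_row).name, result
  FROM verify_record(1234);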

//Kenneth


On Thu, Jan 15, 2009 at 11:10 AM, Kenneth Lundin wrote:

> Hi,
>
> I'm defining a function in plpgsql and would like it to return one varchar
> and one row from another table. I have defined it like this (this is only a
> test and does not really make sense yet, but it's the principle I'm after):
>
> CREATE OR REPLACE FUNCTION verify_record(IN number_to_verify bigint, OUT
> resulting_row logbook, OUT result character varying)
>   RETURNS record AS
> $BODY$
> BEGIN
>  SELECT * INTO resulting_row FROM logbook WHERE
> id_number=number_to_verify::varchar;
>  SELECT 'OK' INTO result;
> END
> $BODY$
>   LANGUAGE 'plpgsql' VOLATILE SECURITY DEFINER
>   COST 100;
>
> It works fine and I can do a select like this:
>
> > select * from verify_record(1234);
> resulting_row | result
> ----------------------------+--------
> (1,"Test","Registered",.) | "OK"
>
> So far so good, but how do I use the resulting_row further, say if I
> would like to select only a few columns or perhaps define a view that
> returns the 'result' column and only column 2 "Test" from the resulting_row?
> What I'd like to do is a select and sub-address the individual columns of
> the resulting_row, like writing (given 'name' is the name of some column in
> resulting_row):
>
> > select resulting_row.name, result from verify_record(1234);
>
> or perhaps
>
> > select resulting_row['name'], result from verify_record(1234);
>
> and have it return something like:
> name   | result
> --------+--------
> "Test" | "OK"
>
> Is this possible or am I on the wrong track here?
>
> //Kenneth
>
>
>


Re: [GENERAL] inconsistency in aliasing

2009-01-15 Thread Grzegorz Jaśkiewicz
As far as I know, this bit (statement evaluation) wasn't implemented
then. It only arrived in 8.4, where you can even have subselects
evaluated.
So it isn't a bug; it just wasn't implemented to work that way back then.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] function to return both table row and varchar

2009-01-15 Thread Kenneth Lundin
Hi,

I'm defining a function in plpgsql and would like it to return one varchar
and one row from another table. I have defined it like this (this is only a
test and does not really make sense yet, but it's the principle I'm after):

CREATE OR REPLACE FUNCTION verify_record(IN number_to_verify bigint, OUT
resulting_row logbook, OUT result character varying)
  RETURNS record AS
$BODY$
BEGIN
 SELECT * INTO resulting_row FROM logbook WHERE
id_number=number_to_verify::varchar;
 SELECT 'OK' INTO result;
END
$BODY$
  LANGUAGE 'plpgsql' VOLATILE SECURITY DEFINER
  COST 100;

It works fine and I can do a select like this:

> select * from verify_record(1234);
resulting_row | result
----------------------------+--------
(1,"Test","Registered",.) | "OK"

So far so good, but how do I use the resulting_row further, say if I
would like to select only a few columns or perhaps define a view that
returns the 'result' column and only column 2 "Test" from the resulting_row?
What I'd like to do is a select and sub-address the individual columns of
the resulting_row, like writing (given 'name' is the name of some column in
resulting_row):

> select resulting_row.name, result from verify_record(1234);

or perhaps

> select resulting_row['name'], result from verify_record(1234);

and have it return something like:
name   | result
--------+--------
"Test" | "OK"

Is this possible or am I on the wrong track here?

//Kenneth


Re: [GENERAL] Polymorphic "setof record" function?

2009-01-15 Thread Christian Schröder

Merlin Moncure wrote:
> On 1/13/09, Christian Schröder  wrote:
>> Hi list,
>>  I have written a function that returns a setof record. The function has a
>> table name as a parameter and the resulting records have the same structure
>> as this table. Is there any easy way to specify this when I call the
>> function? If the table has many columns then it's annoying to specify all of
>> them.
>>  I need something like:
>>    select * from myfunc('mytable') as x(like mytable)
>>  or
>>    select * from myfunc('mytable') as x(mytable%TYPE)
>>
>>  Is there any solution for PostgreSQL 8.2?
>
> Unfortunately to the best of my knowledge there is no way to do this.
>  I think what you want is to have sql functions that specialize on
> type in the way that templates do in C++.

That would certainly be the best solution, but I would also be happy 
with some syntactic sugar: The function may still be declared as 
returning a set of records, so that I would still have to declare their 
actual return type in the query. However, I would like to have an easy 
way to express: "the record will have the same structure as table x".


Regards,
   Christian

--
Deriva GmbH Tel.: +49 551 489500-42
Financial IT and Consulting Fax:  +49 551 489500-91
Hans-Böckler-Straße 2  http://www.deriva.de
D-37079 Göttingen

Deriva CA Certificate: http://www.deriva.de/deriva-ca.cer


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Autovacuum daemon terminated by signal 11

2009-01-15 Thread Richard Huxton
Justin Pasher wrote:
> Hello,
> 
> I have a server running PostgreSQL 8.1.15-0etch1 (Debian etch) that was
> recently put into production. Last week a developer started having a problem
> with his psql connection being terminated every couple of minutes when he
> was running a query. When I look through the logs, I noticed this message.
> 
> 2009-01-09 08:09:46 CST LOG:  autovacuum process (PID 15012) was terminated
> by signal 11

Segmentation fault - probably a bug or bad RAM.

> I looked through the logs some more and I noticed that this was occurring
> every minute or so. The database is a pretty heavily utilized system
> (judging by the age(datfrozenxid) from pg_database, the system had run
> approximately 500 million transactions in less than a week). I noticed that right
> before every autovacuum termination, it tried to autovacuum a database.
> 
> 2009-01-09 08:09:46 CST LOG:  transaction ID wrap limit is 4563352, limited
> by database "database_name"
> 
> It was always showing the same database, so I decided to manually vacuum the
> database. Once that was done (it was successful the first time without
> errors), the problem seemed to go away. I went ahead and manually vacuumed
> the remaining databases just to take care of the potential xid wraparound
> issue.

I'd be suspicious of possible corruption in autovacuum's internal data.
Can you trace these problems back to a power-outage or system crash? It
doesn't look like "database_name" itself since you vacuumed that
successfully. If autovacuum is running normally now, that might indicate
it was something in the way autovacuum was keeping track of "database_name".

It's also probably worth running some memory tests on the server
(memtest86 or similar) to see if that shows anything. Was it *always*
the autovacuum process getting sig11? If not then it might just be a
pattern of usage that makes it more likely to use some bad RAM.
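
For reference, the wraparound check mentioned above is roughly (a minimal
sketch):

  SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;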

-- 
  Richard Huxton
  Archonet Ltd

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] one-click installer postgresql-8.3.5-1-linux.bin failed

2009-01-15 Thread Dave Page
On Thu, Jan 15, 2009 at 9:27 AM, m zyzy  wrote:
>
> Thank you Scott and Dave.
> Dave, the URL given is not really helpful unless I go through each and every
> detail.

The text at the link I gave reads:


The installer crashes on Linux. What can I do?

BitRock InstallBuilder has been designed to create installers that
will run out of the box in virtually any Linux distribution. On rare
occasions, a certain combination of system libraries and X-Window
server setup may cause the installer to crash. If that occurs, you may
want to try running in different modes:

./installer-name.bin --mode xwindow

or

./installer-name.bin --mode text

In any case, please report the issue so it can be addressed.


> I don't know - searching the page for the word "segmentation" returns
> nothing. Another thing, just to let you know, my PostGIS installation
> through StackBuilder still failed, returning an error after clicking the Next
> button to start the installation after it downloaded successfully. It says it
> skipped 1 installation because of an error, sort of.
>
> Scott, my latest attempt to install on CentOS 5 this time worked well after
> re-downloading the .bin installer.

That implies your original download was incomplete or corrupt.

> But on FC10 the
> ./postgresql-8.3.5-1-linux.bin command is still to no avail.
> For now, the text mode,
> ./postgresql-8.3.5-1-linux.bin --mode text
> is the only way to go on FC10.

Do the files on both systems match if you compare a checksum?
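
For example (a quick sketch - run on each machine and compare the output;
md5sum is assumed to be available):

md5sum postgresql-8.3.5-1-linux.bin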


-- 
Dave Page
EnterpriseDB UK:   http://www.enterprisedb.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] one-click installer postgresql-8.3.5-1-linux.bin failed

2009-01-15 Thread Scott Marlowe
On Thu, Jan 15, 2009 at 2:27 AM, m zyzy  wrote:
>
> Scott, my latest attempt to install on CentOS 5 this time worked well after
> re-downloading the .bin installer. But on FC10 the
> ./postgresql-8.3.5-1-linux.bin command is still to no avail.
> For now, the text mode,
> ./postgresql-8.3.5-1-linux.bin --mode text
> is the only way to go on FC10.

Any time I have to install on a distro that doesn't have proper pgsql
packaging for the version I want to run, I've always built it from
source.

tar xvjf postgresql-8.3.5.tar.bz2
cd postgresql-8.3.5
./configure
make
sudo make install
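
A source build installs under /usr/local/pgsql by default, so the usual
follow-up looks roughly like this (a sketch only, assuming a postgres OS
user already exists):

# create and hand over the data directory, then initialize and start
sudo mkdir -p /usr/local/pgsql/data
sudo chown postgres /usr/local/pgsql/data
sudo -u postgres /usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data
sudo -u postgres /usr/local/pgsql/bin/pg_ctl -D /usr/local/pgsql/data -l /tmp/pg.log start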

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] one-click installer postgresql-8.3.5-1-linux.bin failed

2009-01-15 Thread m zyzy
Thank you Scott and Dave.
Dave, the URL given is not really helpful unless I go through each and every
detail. I don't know - searching the page for the word "segmentation" returns
nothing. Another thing, just to let you know, my PostGIS installation
through StackBuilder still failed, returning an error after clicking the Next
button to start the installation after it downloaded successfully. It says it
skipped 1 installation because of an error, sort of.

Scott, my latest attempt to install on CentOS 5 this time worked well after
re-downloading the .bin installer. But on FC10 the
./postgresql-8.3.5-1-linux.bin command is still to no avail.
For now, the text mode,
./postgresql-8.3.5-1-linux.bin --mode text
is the only way to go on FC10.
Thanks.


Re: [GENERAL] one-click installer postgresql-8.3.5-1-linux.bin failed

2009-01-15 Thread Dave Page
On Thu, Jan 15, 2009 at 7:47 AM, m zyzy  wrote:
> I had this weird problem on CentOS 5 and Fedora 10. The one-click binary
> installer failed.
> Executing this:
> ./postgresql-8.3.5-1-linux.bin
>
> shows
>
> Segmentation fault

Do either of the suggestions at
http://bitrock.com/support_installbuilder_faq.html#A1_6 help?


-- 
Dave Page
EnterpriseDB UK:   http://www.enterprisedb.com

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] one-click installer postgresql-8.3.5-1-linux.bin failed

2009-01-15 Thread Scott Marlowe
On Thu, Jan 15, 2009 at 12:47 AM, m zyzy  wrote:
> I had this weird problem on CentOS 5 and Fedora 10. The one-click binary
> installer failed.
> Executing this:
> ./postgresql-8.3.5-1-linux.bin
>
> shows
>
> Segmentation fault

I don't know what's causing it; I use the PGDG RHEL 5 packages for
pgsql on CentOS 5.  They seem to work very well.

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general