Re: [GENERAL] Corrupt database? 8.1/FreeBSD6.0

2007-01-13 Thread Tom Lane
I wrote:
> ... but I suddenly fear that we've missed a fundamental point about
> pg_clog truncation.  And WAL wraparound for that matter.  To wit, a
> sufficiently long-lived temp table could contain old XIDs, and there's
> no way for anyone except the owning backend to clean them out, or even
> guarantee that they're marked committed.

After further thought I believe this is OK as of 8.2, because a temp
table's relfrozenxid is tracked independently of any other's.  (This
problem puts a stake through the heart of the recently-discussed idea
that a temp table might be able to get along without a globally visible
pg_class entry, however.)

But it seems that we need a band-aid for 8.1 and earlier.  The simplest
fix I can think of is for vacuum not to attempt to advance the
datvacuumxid/datfrozenxid fields if it skipped over any temp tables of
other backends.  That's a bit nasty, since in a database making heavy
use of temp tables, you might do a whole lot of vacuums without ever
meeting that condition.  Anyone have a better idea?
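
For anyone wanting to keep an eye on how far behind a database is, the per-database wraparound state is visible in pg_database; a minimal monitoring sketch against the stock catalogs (alarm thresholds are site-specific):

```sql
-- Watch per-database XID age; datfrozenxid is the field vacuum advances.
SELECT datname, age(datfrozenxid) AS xid_age
FROM pg_database
ORDER BY xid_age DESC;
```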

regards, tom lane

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [HACKERS] [GENERAL] Autovacuum Improvements

2007-01-13 Thread elein
On Fri, Jan 12, 2007 at 07:33:05PM -0300, Alvaro Herrera wrote:
> Simon Riggs wrote:
> 
> > Some feedback from initial testing is that 2 queues probably isn't
> > enough. If you have tables with 100s of blocks and tables with millions
> > of blocks, the tables in the mid-range still lose out. So I'm thinking
> > of a design with 3 queues based upon size ranges, plus the idea that
> > when a queue is empty it will scan for tables slightly above/below its
> > normal range.
> 
> Yeah, eventually it occurred to me that as soon as you have 2
> queues, you may as well want to have 3 or in fact any number.  Which in
> my proposal is very easily achieved.
> 
> 
> > Alvaro, have you completed your design?
> 
> No, I haven't, and the part that's missing is precisely the queues
> stuff.  I think I've been delaying posting it for too long, and that is
> harmful because it makes other people waste time thinking on issues that
> I may already have resolved, and delays the bashing that yet others will
> surely inflict on my proposal, which is never a good thing ;-)  So maybe
> I'll put in a stub about the "queues" stuff and see how people like the
> whole thing.

Have you given any consideration to providing feedback on autovacuum to users?
Right now we don't even know which tables were vacuumed when, or what was
reclaimed.  This might actually be another topic.

---elein
[EMAIL PROTECTED]



Re: [GENERAL] installing 8.2 on solaris 10?

2007-01-13 Thread Devrim GUNDUZ
Hi,

On Fri, 2007-01-12 at 21:21 -0800, Joshua D. Drake wrote:
> > Is there a prebuilt package available for solaris 10 somewhere or
> > should I just follow the instructions here:
> >
> http://www.postgresql.org/docs/8.2/interactive/install-procedure.html
> > ?
> 
> I have only seen up to 8.1.4. I would follow the source procedures. 

I also checked Blastwave, and it does not have 8.2 either. Just an FYI.

Regards,

-- 
The PostgreSQL Company - Command Prompt, Inc. 1.503.667.4564
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
Managed Services, Shared and Dedicated Hosting
Co-Authors: plPHP, ODBCng - http://www.commandprompt.com/







Re: [HACKERS] [GENERAL] Checkpoint request failed on version 8.2.1.

2007-01-13 Thread Jim C. Nasby
On Thu, Jan 11, 2007 at 06:04:56PM -0500, Andrew Dunstan wrote:
> Please don't. At least not on the PostgreSQL web site nor in the docs. 
> And no, I don't run my production servers on Windows either.
> 
> For good or ill, we made a decision years ago to do a proper Windows 
> port. I think that it's actually worked out reasonably well. All 
> operating systems have warts. Not long ago I tended to advise people not 
> to run mission-critical PostgreSQL on Linux unless they were *very* 
> careful, due to the overcommit issue.
 
Yes, and IIRC we documented the overcommit stuff as well.

This isn't about OS holy wars, it's about providing information so that
people can make an informed decision about what OS to run their database
on.
-- 
Jim Nasby                [EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)



Re: [GENERAL] Corrupt database? 8.1/FreeBSD6.0

2007-01-13 Thread Jeff Amiel


Tom Lane <[EMAIL PROTECTED]> wrote:
> Really?  Wow, *that's* an interesting thought.  Is it likely that that
> temp table could contain many-hour-old data?

Certainly...our connection pool used by jboss can have connections to postgres
persisting for multiple days.  (We're still looking for a way to tell it to
recycle these occasionally.)  As each 'user' of our web-based app performs some
action, they acquire one of the connection pool connections and set their
user_id in the temporary table used by that connection (we use that for our
audit triggers).  Once they are 'done' with the connection, the connection is
just released back to the pool but not actually closed...so the temp table
still contains the data from a previous iteration.
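
Those days-old pooled connections are easy to spot from pg_stat_activity; a minimal sketch, assuming the standard statistics view (column names as in 8.1/8.2):

```sql
-- List backends oldest-first; entries whose backend_start is days old
-- are pool connections that have never been closed.
SELECT procpid, usename, backend_start,
       current_timestamp - backend_start AS connection_age
FROM pg_stat_activity
ORDER BY backend_start;
```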

 

Re: [GENERAL] Corrupt database? 8.1/FreeBSD6.0

2007-01-13 Thread Tom Lane
Jeff Amiel <[EMAIL PROTECTED]> writes:
> Jan 13 08:27:26 prod-app-1 postgres[92257]: [30259-1] jboss 92257 ERROR:  
> could not access status of transaction 2107200825
> Jan 13 08:27:26 prod-app-1 postgres[92257]: [30259-2] jboss 92257 DETAIL:  
> could not open file "pg_clog/07D9": No such file or directory
> Jan 13 08:27:26 prod-app-1 postgres[92257]: [30259-3] jboss 92257 CONTEXT:  
> SQL statement "DELETE FROM audit_metadata WHERE user_id <> -1"

> pg_clog dir looks like this:
> -rw---  1 pgsql  wheel  262144 Jan 13 05:41 07DA
> -rw---  1 pgsql  wheel  262144 Jan 13 08:06 07DB
> -rw---  1 pgsql  wheel   90112 Jan 13 08:51 07DC

> Now that table, audit_metadata, is a temporary table (when accessed by jboss 
> as it is here).  There is a 'real' table with the same name, but only used by 
> batch processes that connect to postgres.

Really?  Wow, *that's* an interesting thought.  Is it likely that that
temp table could contain many-hour-old data?

This seems unrelated to your issue with autovacuum (which should never
touch a temp table, and certainly isn't going to find one in template0)
... but I suddenly fear that we've missed a fundamental point about
pg_clog truncation.  And WAL wraparound for that matter.  To wit, a
sufficiently long-lived temp table could contain old XIDs, and there's
no way for anyone except the owning backend to clean them out, or even
guarantee that they're marked committed.

Thoughts?

regards, tom lane



Re: [GENERAL] index type for indexing long texts

2007-01-13 Thread Richard Troy


> Aleksander Kmetec <[EMAIL PROTECTED]> writes:
> > I'm looking for a solution for indexing long TEXT columns. We're currently 
> > using a HASH index, which can handle most
> > situations, but every now and then we need support for even longer texts.
>
> > One solution would be to create a functional index which would only use the 
> > first N chars of mycol, but then we'd have
> > to change several hundred occurrences of "mycol = someval" with "(mycol = 
> > someval AND firstN(mycol) = firstN(someval))",
> > as well as update some SQL generators...
>
> > That's why I'd be interested to know if there are any index types available 
> > which store only the first N chars or use
> > some highly compressed form for storing index data, and then recheck any 
> > potential hits against the main table. And if
> > something like that does not exist yet, how difficult would it be to 
> > construct such a solution out of many "spare parts"
> > that come with PG?
>

Try moving where the hash takes place - i.e., use your own hash function to
create the key.

RT



-- 
Richard Troy, Chief Scientist
Science Tools Corporation
510-924-1363 or 202-747-1263
[EMAIL PROTECTED], http://ScienceTools.com/




Re: [GENERAL] index type for indexing long texts

2007-01-13 Thread Tom Lane
Aleksander Kmetec <[EMAIL PROTECTED]> writes:
> I'm looking for a solution for indexing long TEXT columns. We're currently 
> using a HASH index, which can handle most 
> situations, but every now and then we need support for even longer texts.

> One solution would be to create a functional index which would only use the 
> first N chars of mycol, but then we'd have 
> to change several hundred occurrences of "mycol = someval" with "(mycol = 
> someval AND firstN(mycol) = firstN(someval))", 
> as well as update some SQL generators...

> That's why I'd be interested to know if there are any index types available 
> which store only the first N chars or use 
> some highly compressed form for storing index data, and then recheck any 
> potential hits against the main table. And if 
> something like that does not exist yet, how difficult would it be to 
> construct such a solution out of many "spare parts" 
> that come with PG?

I think you could do it with GiST.  Look at contrib/btree_gist, and
make a variant version of its text support in which only the first N
characters are stored/compared, then declare all the operators as
RECHECK.  (Actually, I'm not sure that "<" etc would work correctly in
such a scenario, especially in non-C locales; it might be best to
declare the operator class as containing only "=".)

I don't think you can do it in btree or hash because they assume that
the operators are strictly consistent with the support functions, and
in fact apply the operators directly in some code paths.  So you
couldn't use true text equality as the "=" operator, which ruins your
chances of not changing your queries.  But GiST never touches the
operators at all, just the support procs, and you can make the support
procs do anything.  So it should be possible to make the index work with
just the first N characters, and then the RECHECK with true equality
would filter out the wrong matches.

BTW, if you get something that works well, stick it up on pgfoundry;
you're not the first to have asked this ...

regards, tom lane



Re: [GENERAL] FK Constraint on index not PK

2007-01-13 Thread Tom Lane
Alvaro Herrera <[EMAIL PROTECTED]> writes:
> The problem is that the constraint was defined with a dependence on the
> second index.  I guess what you could do is drop the constraint, drop
> the second index, and then recreate the constraint.  Try it within a
> transaction block, just in case it doesn't work (but I don't see why it
> wouldn't)

That's certainly the safest way, but if this is a live production
database then the time spent holding exclusive locks while
re-establishing the FK constraint may be a problem.  If that's not
tolerable then the solution is manually mucking with the system
catalogs as I suggested in an earlier reply.

regards, tom lane



Re: [GENERAL] Problems with unique restrictions

2007-01-13 Thread Tom Lane
"Marcel Gsteiger" <[EMAIL PROTECTED]> writes:
> Now since I upgraded to 8.2 I have problems inserting data into tables that 
> have unique indexes. Ugly enough, I get the message 'duplicate key violates 
> unique constraint' when inserting the very first record into a table. This 
> happens every time the new tuple references another tuple that has been 
> inserted just before this one in the same transaction.

> Putting a "SET CONSTRAINTS ALL DEFERRED" in my procedure does not help.

> To me it looks that something with referential integrity checking goes wrong, 
> but in this case the error message would be misleading.

RI would not have anything to do with a duplicate-key error.

Do you have any SERIAL-type columns in these tables?  My first thought
is of a sequence that hasn't been updated to be above the existing ID
values.  It's fairly easy to get into such a state if you do anything
but a plain vanilla dump-all-and-reload-all update process ...
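
If a lagging sequence is indeed the cause, the usual repair is a one-liner; a sketch with placeholder table and column names:

```sql
-- Bump the sequence behind a SERIAL column past the highest existing id.
SELECT setval(pg_get_serial_sequence('mytable', 'id'),
              (SELECT max(id) FROM mytable));
```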

regards, tom lane



Re: [GENERAL] FK Constraint on index not PK

2007-01-13 Thread Tom Lane
Stéphane Schildknecht <[EMAIL PROTECTED]> writes:
> My goal is to migrate to 8.2.1. definitely. But as you said it, I do not
> want to recreate unwanted index when migrating. I want to drop them BEFORE.
> But, I can't just do a "drop index" command. It fails.

Right, because the FK constraints by chance seized on those indexes as
being the matching ones for them to depend on.

What you want to do is (1) update the relevant pg_depend entries to
reference the desired PK indexes instead of the undesired ones; then
(2) drop the undesired indexes.

I don't have a script to do (1) but it should be relatively
straightforward: in the rows with objid = OID of FK constraint
and refobjid = OID of unwanted index, update refobjid to be the
OID of the wanted index.  (To be truly correct, make sure that
classid and refclassid are the right values; but the odds of a
false match are probably pretty low.)
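
In SQL, with placeholder object names to substitute after checking your own catalogs (a sketch of the step described above, not a tested script):

```sql
-- Repoint the FK constraint's dependency from the unwanted index to
-- the wanted PK index.  classid/refclassid are checked so only the
-- pg_constraint -> pg_class dependency row is touched.
UPDATE pg_depend
   SET refobjid = 'public.wanted_pkey'::regclass
 WHERE objid = (SELECT oid FROM pg_constraint
                WHERE conname = 'my_fk_constraint')
   AND refobjid = 'public.unwanted_idx'::regclass
   AND classid = 'pg_constraint'::regclass
   AND refclassid = 'pg_class'::regclass;
```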

Needless to say, test and debug your process for this in a scratch
database ... and when you do it on the production DB, start with
BEGIN so you can roll back if you realize you blew it.

regards, tom lane



Re: [GENERAL] Corrupt database? 8.1/FreeBSD6.0

2007-01-13 Thread Jeff Amiel


Tom Lane <[EMAIL PROTECTED]> wrote:
> BTW, please don't do anything to try to correct the problem until we're
> pretty sure we understand how this happened --- we might ask you for
> more info.  AFAICS this isn't having any bad effects except for bleats
> in your log file, so you can wait.

Happened again...however this time it does not appear to be related to an
autovacuum.  I have a past example of an almost identical event from a few
weeks ago.  Here's what the logs look like with debug2 as the log_min level:

Jan 13 08:26:53 prod-app-1 postgres[41795]: [40171-1]  41795 LOG:  autovacuum: processing database "template1"
Jan 13 08:26:53 prod-app-1 postgres[560]: [40171-1]  560 DEBUG:  server process (PID 41794) exited with exit code 0
Jan 13 08:26:57 prod-app-1 postgres[563]: [915-1]  563 DEBUG:  checkpoint starting
Jan 13 08:26:57 prod-app-1 postgres[563]: [916-1]  563 DEBUG:  recycled transaction log file "0001005D0069"
Jan 13 08:26:57 prod-app-1 postgres[563]: [917-1]  563 DEBUG:  checkpoint complete; 0 transaction log file(s) added, 0 removed, 1 recycled
Jan 13 08:27:02 prod-app-1 postgres[560]: [40172-1]  560 DEBUG:  forked new backend, pid=42368 socket=8
Jan 13 08:27:02 prod-app-1 postgres[560]: [40173-1]  560 DEBUG:  forked new backend, pid=42369 socket=8
Jan 13 08:27:02 prod-app-1 postgres[560]: [40174-1]  560 DEBUG:  forked new backend, pid=42370 socket=8
Jan 13 08:27:02 prod-app-1 postgres[560]: [40175-1]  560 DEBUG:  server process (PID 42370) exited with exit code 0
Jan 13 08:27:02 prod-app-1 postgres[560]: [40176-1]  560 DEBUG:  forked new backend, pid=42371 socket=8
Jan 13 08:27:02 prod-app-1 postgres[560]: [40177-1]  560 DEBUG:  server process (PID 42369) exited with exit code 0
Jan 13 08:27:02 prod-app-1 postgres[560]: [40178-1]  560 DEBUG:  server process (PID 42371) exited with exit code 0
Jan 13 08:27:02 prod-app-1 postgres[560]: [40179-1]  560 DEBUG:  forked new backend, pid=42372 socket=8
Jan 13 08:27:02 prod-app-1 postgres[560]: [40180-1]  560 DEBUG:  server process (PID 42372) exited with exit code 0
Jan 13 08:27:02 prod-app-1 postgres[560]: [40181-1]  560 DEBUG:  forked new backend, pid=42373 socket=8
Jan 13 08:27:02 prod-app-1 postgres[560]: [40182-1]  560 DEBUG:  server process (PID 42373) exited with exit code 0
Jan 13 08:27:02 prod-app-1 postgres[560]: [40183-1]  560 DEBUG:  forked new backend, pid=42374 socket=8
Jan 13 08:27:02 prod-app-1 postgres[560]: [40184-1]  560 DEBUG:  server process (PID 42374) exited with exit code 0
Jan 13 08:27:02 prod-app-1 postgres[560]: [40185-1]  560 DEBUG:  server process (PID 42368) exited with exit code 0
Jan 13 08:27:08 prod-app-1 postgres[560]: [40186-1]  560 DEBUG:  forked new backend, pid=42375 socket=8
Jan 13 08:27:08 prod-app-1 postgres[560]: [40187-1]  560 DEBUG:  server process (PID 42375) exited with exit code 0
Jan 13 08:27:23 prod-app-1 postgres[560]: [40188-1]  560 DEBUG:  forked new backend, pid=42376 socket=8
Jan 13 08:27:23 prod-app-1 postgres[560]: [40189-1]  560 DEBUG:  server process (PID 42376) exited with exit code 0
Jan 13 08:27:26 prod-app-1 postgres[92257]: [30259-1] jboss 92257 ERROR:  could not access status of transaction 2107200825
Jan 13 08:27:26 prod-app-1 postgres[92257]: [30259-2] jboss 92257 DETAIL:  could not open file "pg_clog/07D9": No such file or directory
Jan 13 08:27:26 prod-app-1 postgres[92257]: [30259-3] jboss 92257 CONTEXT:  SQL statement "DELETE FROM audit_metadata WHERE user_id <> -1"
Jan 13 08:27:26 prod-app-1 postgres[92257]: [30259-4] PL/pgSQL function "disable_auditing" line 2 at SQL statement
Jan 13 08:27:26 prod-app-1 postgres[92257]: [30259-5] jboss 92257 STATEMENT:  select disable_auditing()
Jan 13 08:27:38 prod-app-1 postgres[560]: [40190-1]  560 DEBUG:  forked new backend, pid=42377 socket=8
Jan 13 08:27:38 prod-app-1 postgres[560]: [40191-1]  560 DEBUG:  server process (PID 42377) exited with exit code 0
Jan 13 08:27:49 prod-app-1 postgres[560]: [40192-1]  560 DEBUG:  forked new backend, pid=42378 socket=8
Jan 13 08:27:50 prod-app-1 postgres[560]: [40193-1]  560 DEBUG:  forked new backend, pid=42379 socket=8
Jan 13 08:27:50 prod-app-1 postgres[560]: [40194-1]  560 DEBUG:  forked new backend, pid=42380 socket=8
Jan 13 08:27:53 prod-app-1 postgres[560]: [40195-1]  560 DEBUG:  forked new backend, pid=42381 socket=8
Jan 13 08:27:53 prod-app-1 postgres[42382]: [40196-1]  42382 LOG:  autovacuum: processing database "postgres"
Jan 13 08:27:53 prod-app-1 postgres[560]: [40196-1]  560 DEBUG:  server process (PID 42381) exited with exit code 0
Jan 13 08:28:02 prod-app-1 postgres[560]: [40197-1]  560 DEBUG:  forked new backend, pid=42951 socket=8
Jan 13 08:28:02 prod-app-1 postgres[560]: [40198-1]  560 DEBUG:  forked new backend, pid=42952 socket=8

pg_clog dir looks like this:
-rw---  1 pgsql  wheel  262144 Jan 13 05:41 07DA
-rw---  1 pgsql  wheel  262144 Jan 13 08:06 07DB
-rw---  1 pgsql  wheel   90112 Jan 13 08:51 07DC

Now that table, audit_metadata, is a temporary table (when accessed by jboss
as it is here).  There is a 'real' table with the same name, but only used by
batch processes that connect to postgres.

[GENERAL] index type for indexing long texts

2007-01-13 Thread Aleksander Kmetec

(I'm reposting this because the original message didn't make it through in the 
last ~20 hours)

Hi,

I'm looking for a solution for indexing long TEXT columns. We're currently using a HASH index, which can handle most 
situations, but every now and then we need support for even longer texts.


One solution would be to create a functional index which would only use the first N chars of mycol, but then we'd have 
to change several hundred occurrences of "mycol = someval" with "(mycol = someval AND firstN(mycol) = firstN(someval))", 
as well as update some SQL generators...
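
For reference, the firstN() approach described above would look roughly like this (names and the choice of N = 100 are arbitrary placeholders):

```sql
-- The function must be IMMUTABLE to be usable in an index definition.
CREATE FUNCTION firstN(text) RETURNS text AS
  'SELECT substr($1, 1, 100)'
  LANGUAGE sql IMMUTABLE;

CREATE INDEX mytable_mycol_firstn ON mytable (firstN(mycol));

-- ...after which every "mycol = someval" would need to become:
--   firstN(mycol) = firstN(someval) AND mycol = someval
```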


That's why I'd be interested to know if there are any index types available which store only the first N chars or use 
some highly compressed form for storing index data, and then recheck any potential hits against the main table. And if 
something like that does not exist yet, how difficult would it be to construct such a solution out of many "spare parts" 
that come with PG?


All we really need is index support for the "=" operator; everything else is 
optional.
We're currently using PG 8.1.

I'd be grateful for any suggestions.

Regards,
Aleksander



Re: [GENERAL] FK Constraint on index not PK

2007-01-13 Thread Alvaro Herrera
Stéphane Schildknecht wrote:
> Joshua D. Drake a écrit :
> > On Fri, 2007-01-12 at 17:50 +0100, Stéphane Schildknecht wrote:
> >   
> >> Dear community members,
> >>
> >> I'm having a quite strange behaviour while trying to drop some index.
> >>
> >> We have some tables with two indexes on a primary key. The first one was
> >> automatically created by the primary constraint. The second one was
> >> manually created on the same column. Don't know why, but I would now
> >> want to suppress it.
> >> 
> >
> > Drop the second index. It is redundant.
> 
> I know it. But I can't.

The problem is that the constraint was defined with a dependence on the
second index.  I guess what you could do is drop the constraint, drop
the second index, and then recreate the constraint.  Try it within a
transaction block, just in case it doesn't work (but I don't see why it
wouldn't).
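
Spelled out, with placeholder table/constraint/index names, that sequence would look something like:

```sql
-- All in one transaction so it can be rolled back if anything fails.
BEGIN;
ALTER TABLE referencing_table DROP CONSTRAINT my_fkey;
DROP INDEX redundant_idx;           -- the manually created duplicate
ALTER TABLE referencing_table
  ADD CONSTRAINT my_fkey FOREIGN KEY (ref_id) REFERENCES pk_table (id);
COMMIT;
```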

-- 
Alvaro Herrera                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [GENERAL] FK Constraint on index not PK

2007-01-13 Thread Richard Huxton

Stéphane Schildknecht wrote:
> My goal is to migrate to 8.2.1, definitely. But as you said, I do not
> want to recreate the unwanted indexes when migrating. I want to drop them
> BEFORE.
>
> But I can't just do a "drop index" command. It fails.
>
> That's why I asked for advice on how to drop them, or not recreate them. I
> would really prefer not to parse the whole dump (some GB).


Try the --list and --use-list options on pg_restore. IIRC indexes are 
mentioned as items in that, so you should be able to comment out the one 
you don't want.
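
For concreteness, the edit-the-list workflow looks roughly like this (file and database names are placeholders; --list/--use-list are pg_restore's long option names, -l/-L for short):

```
# Dump the archive's table of contents to an editable list file
pg_restore --list mydb.dump > mydb.list

# In mydb.list, comment out the entry for the unwanted index by
# putting a ';' at the start of its line, then restore using the list:
pg_restore --use-list=mydb.list -d newdb mydb.dump
```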


HTH
--
  Richard Huxton
  Archonet Ltd



Re: [GENERAL] XEON familiy 5000, 5100 or 5300?

2007-01-13 Thread Shane Ambler

Philippe Lang wrote:
> If I'm not wrong, a single postgresql sql query cannot be spread over
> two processors, but can it be spread over multiple cores? If that's
> true, does that mean the best CPU would be the last one, although the
> clock is lower than that of the other processors?


No - a *core* is another CPU; basically you will have 2 or 4 CPUs in the
one physical package.

HT creates 2 virtual CPUs sharing the same CPU resources, but the cores are
separate CPUs in themselves.

The quad-core will only benefit you more if you have more users running
queries at the same time. Each core can run a query at the same time
without slowing the others down (allowing for disk access/FSB limits).




--

Shane Ambler
[EMAIL PROTECTED]

Get Sheeky @ http://Sheeky.Biz



[GENERAL] automatic build of return type

2007-01-13 Thread dfx
Hi all

I am converting a database with several stored procedures from MS SQL Server
to PostgreSQL 8.2, and I have the following doubt:
With MS SQL, a stored procedure containing the statement "SELECT * FROM
TABLE_A INNER JOIN TABLE_B" automatically creates and returns a recordset
with all the fields of TABLE_A followed by all the fields of TABLE_B.
Instead, as far as I can tell, in a PostgreSQL function (language plpgsql) I
MUST create by hand a type containing all the fields (or create by hand a
list in the declare section). The question is: does a method exist to create
the return type automatically, to avoid writing it by hand?
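
For reference, the manual approach described above looks like this in 8.2 plpgsql (table, column, and type names are invented for illustration); the row type has to be spelled out by hand:

```sql
-- Composite type listing every column of the joined result.
CREATE TYPE ab_row AS (a_id integer, a_name text, b_id integer, b_val text);

CREATE FUNCTION get_ab() RETURNS SETOF ab_row AS $$
DECLARE
  r ab_row;
BEGIN
  FOR r IN SELECT a.id, a.name, b.id, b.val
             FROM table_a a INNER JOIN table_b b ON b.a_id = a.id LOOP
    RETURN NEXT r;   -- emit one row of the result set
  END LOOP;
  RETURN;
END;
$$ LANGUAGE plpgsql;
```

One partial alternative is declaring the function RETURNS SETOF record and supplying a column definition list at call time, e.g. SELECT * FROM get_ab() AS (a_id integer, a_name text, b_id integer, b_val text) - which moves the typing from the function to the caller rather than eliminating it.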

Thank you.

Domenico




Re: [GENERAL] PostgreSQL Connections?

2007-01-13 Thread Shane Ambler

Ashish Karalkar wrote:

> Thanks Shane for your reply,
> It was by mistake; I have multiple clients. My server IP is 155, and it
> hosts a web server. What we are doing is using a Java pool, and yes, we
> are following the method of closing the connection immediately after its
> work and, for the next piece of work, picking up a new connection from
> the pool; we are using JBoss as the web application server. The point of
> worry is that my server has many idle processes on its own IP.


That is the whole point of the connection pool.
When a client connection is created, things like allocating memory and
authentication take a certain amount of time. The connection pool opens a
number of connections (or opens them as needed until a limit is reached) and
then gives the client one of these connections when it asks for a new
connection. The connection to PostgreSQL is already established and its
resources already allocated, so it is quicker to start using the existing
connection than one established on demand.
The connection pool keeps the connection active (in an idle state) when the
client disconnects and gives it to the next connection request.


When generating web pages the time overhead of creating new connections 
can more than double the time to generate the page making for a very 
slow site.


Unless you are hitting a performance limit (needing more concurrent 
connections than your pool settings allow) then you will have some 
client connections from the pool that are idle.


If the number of idle connections is excessive and you don't expect 
traffic to increase/peak then feel free to reduce the number of 
connections in your pool settings to reduce the number of idle 
connections kept open.


The connections from the server IP would indicate that JBoss is running
on the same machine as PostgreSQL - that is, the machine connecting to
PostgreSQL - or there is some kind of tunnel (such as through ssh) that
means the connections are established locally.




With regards
Ashish Karalkar


--- Shane Ambler <[EMAIL PROTECTED]> wrote:


Ashish Karalkar wrote:

Hi Shoaib

Following is the output for ps auxwww | grep ^postgres

IP address of my server is 172.18.5.155
postgres 12846  0.0  0.8 45328 4164 ?Ss   Jan12   0:00 postgres: qsweb qsweb06jan07 172.18.4.61(4272) idle
postgres 23335  0.0  0.9 45336 4800 ?Ss   11:38   0:00 postgres: qsweb postgres 172.18.4.16(1313) idle
postgres 23665  0.0  0.8 45204 4260 ?Ss   12:13   0:00 postgres: qsweb qsweb12jan2007 172.18.5.197(4799) idle
postgres 23753  0.0  1.0 45336 5216 ?Ss   12:18   0:00 postgres: qsweb postgres 172.18.4.58(1140) idle
postgres 23761  0.0  1.0 45336 5216 ?Ss   12:20   0:00 postgres: qsweb postgres 172.18.4.135(1214) idle
postgres 23868  0.0  0.8 45204 4260 ?Ss   12:30   0:00 postgres: qsweb qsweb12jan2007 172.18.5.155(37415) idle

Will probably need a bit more information here but what I can figure is-

You say you have a server and a single client - there are at least 6
different ip addresses in the process list you sent.

With the multiple client connections from other machines - one is obviously
the ip address that you know you are using; can you account for the others?
If not then you should be fixing your security settings - either in your
pg_hba.conf or a firewall on the server.

From the multiple connections from your server IP I would guess that you are
running a web server and using persistent connections of some sort. This
will keep each connection open so the next page request doesn't have the
overhead of establishing a new connection. The ones that say idle in
transaction would be linked to a page being constructed. The idle ones will
be waiting for the next page request. This is not a bad thing, although some
methods of achieving this are better than others.

What client software are you using on your client machine?

Does this program exit normally or is it crashing before it closes the
connection it has established?

Does the one program open more than one connection when it is running?

Was this list taken while the client program was running?



--- Shoaib Mir <[EMAIL PROTECTED]> wrote:


Can you show us the output for:

ps auxwww | grep ^postgres

that should explain a little more...

-
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 1/10/07, Ashish Karalkar <[EMAIL PROTECTED]> wrote:

Hello All,
I am running PostgreSQL 8.2 on Red Hat Linux 4.
When I look for the processes that are using postgres, using ps aux|more,
I get lots of idle processes with the server's own IP address. Can anybody
please tell me why this is happening?

Also, there are multiple processes for my single client (same IP), of which
most are idle. Do these processes mean connections, or is there a single
connection starting many processes?


--

Shane Ambler
[EMAIL PROTECTED]

Get Sheeky @ http://Sheeky.Biz


[GENERAL] XEON familiy 5000, 5100 or 5300?

2007-01-13 Thread Philippe Lang
Hi,

I'm about to buy a few new servers, and I'm trying to determine if I
should buy XEON family 5000, 5100 or 5300 processors.

For about the same price, I can have:

2 Dual-Core Intel Xeon 5060, 3.2 GHz, 4MB
2 Dual-Core Intel Xeon 5130, 2.0 GHz, 4MB
2 Quad-Core Intel Xeon 5310, 1.6 GHz, 4MB

I have a few queries that take about 4 minutes each to complete on a
single Pentium 4, and all of them are CPU-bound, with the whole database
in RAM. With the new system, I expect a performance boost, of course!

If I'm not wrong, a single postgresql sql query cannot be spread over
two processors, but can it be spread over multiple cores? If that's
true, does that mean the best CPU would be the last one, although the
clock is lower than that of the other processors?

Thanks for the infos,

Cheers,

---
Philippe Lang
Attik System

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] PostgreSQL Connections?

2007-01-13 Thread Ashish Karalkar
Thanks Shane for your reply,
It was by mistake; I have multiple clients. My server IP is 155, and it
hosts a web server. What we are doing is using a Java pool, and yes, we
are following the method of closing the connection immediately after its
work and, for the next piece of work, picking up a new connection from
the pool; we are using JBoss as the web application server. The point of
worry is that my server has many idle processes on its own IP.

With regards
Ashish Karalkar


--- Shane Ambler <[EMAIL PROTECTED]> wrote:

> Ashish Karalkar wrote:
> > Hi Shoaib
> > 
> > Following is the output for ps auxwww | grep ^postgres
> > 
> > IP address of my server is 172.18.5.155
> 
> > postgres 12846  0.0  0.8 45328 4164 ?Ss   Jan12   0:00 postgres: qsweb qsweb06jan07 172.18.4.61(4272) idle
> >
> > postgres 23335  0.0  0.9 45336 4800 ?Ss   11:38   0:00 postgres: qsweb postgres 172.18.4.16(1313) idle
> >
> > postgres 23665  0.0  0.8 45204 4260 ?Ss   12:13   0:00 postgres: qsweb qsweb12jan2007 172.18.5.197(4799) idle
> >
> > postgres 23753  0.0  1.0 45336 5216 ?Ss   12:18   0:00 postgres: qsweb postgres 172.18.4.58(1140) idle
> >
> > postgres 23761  0.0  1.0 45336 5216 ?Ss   12:20   0:00 postgres: qsweb postgres 172.18.4.135(1214) idle
> >
> > postgres 23868  0.0  0.8 45204 4260 ?Ss   12:30   0:00 postgres: qsweb qsweb12jan2007 172.18.5.155(37415) idle
> 
> Will probably need a bit more information here but
> what I can figure is-
> 
> You say you have a server and a single client -
> there are at least 6 
> different ip addresses in the process list you sent.
> 
> With the multiple client connections from other
> machines - one is 
> obviously the ip address that you know you are
> using, can you account 
> for the others?
> If not then you should be fixing your security
> settings - either in your 
> pg_hba.conf or a firewall on the server.
> 
>  From the multiple connections from your server IP I
> would guess that 
> you are running a web server and using persistent
> connections of some 
> sort. This will keep each connection open so the
> next page request 
> doesn't have the overhead of establishing a new
> connection. The ones 
> that say idle in transaction would be linked to a
> page being 
> constructed. The idle ones will be waiting for the
> next page request.
> This is not a bad thing, although some methods of
> achieving this are 
> better than others.
> 
> What client software are you using on your client
> machine?
> 
> Does this program exit normally or is it crashing
> before it closes the 
> connection it has established?
> 
> Does the one program open more than one connection
> when it is running?
> 
> Was this list taken while the client program was
> running?
> 
> 
> > 
> > --- Shoaib Mir <[EMAIL PROTECTED]> wrote:
> > 
> >> Can you show us the output for:
> >>
> >> ps auxwww | grep ^postgres
> >>
> >> that should explain a little more...
> >>
> >> -
> >> Shoaib Mir
> >> EnterpriseDB (www.enterprisedb.com)
> >>
> >> On 1/10/07, Ashish Karalkar
> >> <[EMAIL PROTECTED]> wrote:
> >>> Hello All,
> >>> I am running PostgresSQL 8.2 on Redhat Linux 4.
> >>> When I look for the processes that are using
> >> postgres
> >>> using ps aux|more
> >>> I got lots of Idle processes with servers own IP
> >>> address. Can anybody please tell me why this is
> >>> happening.
> >>>
> >>> Also there are multiple processes for my single
> >>> client(same IP) of which maximum are idle. are
> >> these
> >>> processes means connection or there is a single
> >>> connection starting  many processes.
> >>>
> 
> -- 
> 
> Shane Ambler
> [EMAIL PROTECTED]
> 
> Get Sheeky @ http://Sheeky.Biz
> 
> ---(end of
> broadcast)---
> TIP 4: Have you searched our list archives?
> 
>http://archives.postgresql.org/
> 



 


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


[GENERAL] Problems with unique restrictions

2007-01-13 Thread Marcel Gsteiger
Hi all

I have a PL/PGSQL conversion procedure that reads a "source table" and then 
inserts tuples into several related tables. Recently I upgraded from 8.1 to 
8.2.0, then to 8.2.1.

With 8.1 everything worked fine.

Now, since I upgraded to 8.2, I have problems inserting data into tables that 
have unique indexes. Ugly enough, I get the message 'duplicate key violates 
unique constraint' when inserting the very first record into a table. This 
happens every time the new tuple references another tuple that was inserted 
just before it in the same transaction.

Putting a "SET CONSTRAINTS ALL DEFERRED" in my procedure does not help.
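For reference, here is a minimal sketch of the pattern that fails for me 
(the table and column names are made up for illustration; the real procedure 
is more involved):

```sql
BEGIN;
SET CONSTRAINTS ALL DEFERRED;  -- does not help in my case

-- parent row inserted first, in the same transaction
INSERT INTO parent (code, name) VALUES ('A1', 'first parent');

-- child references the parent just inserted; child.code has a
-- unique index, and the table is empty at this point, yet 8.2
-- reports: duplicate key violates unique constraint
INSERT INTO child (code, parent_code) VALUES ('A1', 'A1');
COMMIT;
```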

To me it looks as though something in the referential integrity checking is 
going wrong, but in that case the error message would be misleading.

Is there any known change/problem in this area? If not, I will try to strip 
my procedure down to something simple that reproduces the problem.

Regards
--Marcel




Re: [GENERAL] PostgreSQL Connections?

2007-01-13 Thread Shane Ambler

Ashish Karalkar wrote:

Hi Shoaib

Following is the output for ps auxwww | grep ^postgres

IP address of my server is 172.18.5.155


postgres 12846  0.0  0.8 45328 4164 ?  Ss  Jan12  0:00 postgres: qsweb qsweb06jan07 172.18.4.61(4272) idle
postgres 23335  0.0  0.9 45336 4800 ?  Ss  11:38  0:00 postgres: qsweb postgres 172.18.4.16(1313) idle
postgres 23665  0.0  0.8 45204 4260 ?  Ss  12:13  0:00 postgres: qsweb qsweb12jan2007 172.18.5.197(4799) idle
postgres 23753  0.0  1.0 45336 5216 ?  Ss  12:18  0:00 postgres: qsweb postgres 172.18.4.58(1140) idle
postgres 23761  0.0  1.0 45336 5216 ?  Ss  12:20  0:00 postgres: qsweb postgres 172.18.4.135(1214) idle
postgres 23868  0.0  0.8 45204 4260 ?  Ss  12:30  0:00 postgres: qsweb qsweb12jan2007 172.18.5.155(37415) idle


We will probably need a bit more information here, but here is what I can figure:

You say you have a server and a single client, yet there are at least 6 
different IP addresses in the process list you sent.


With the multiple client connections from other machines - one is 
obviously the IP address that you know you are using; can you account 
for the others?
If not, then you should tighten your security settings - either in your 
pg_hba.conf or a firewall on the server.
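As an illustration only (the subnets and auth methods below are assumptions, 
not taken from your setup), a pg_hba.conf that limits access to the client 
networks visible in your process list might look like:

```
# TYPE  DATABASE  USER   CIDR-ADDRESS     METHOD
host    qsweb     qsweb  172.18.4.0/24    md5
host    qsweb     qsweb  172.18.5.0/24    md5
# reject everything else
host    all       all    0.0.0.0/0        reject
```

Remember to reload the server after editing the file.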


From the multiple connections from your server IP I would guess that 
you are running a web server and using persistent connections of some 
sort. This keeps each connection open so the next page request 
doesn't have the overhead of establishing a new connection. The ones 
that say "idle in transaction" would be linked to a page being 
constructed; the "idle" ones are waiting for the next page request.
This is not a bad thing, although some methods of achieving it are 
better than others.


What client software are you using on your client machine?

Does this program exit normally or is it crashing before it closes the 
connection it has established?


Does the one program open more than one connection when it is running?

Was this list taken while the client program was running?
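One way to answer several of these questions from the server side is to 
query pg_stat_activity while the client is running (column names as in 
8.1/8.2 - procpid and current_query; later releases renamed them):

```sql
-- Who is connected, from where, and how long have they been there?
SELECT procpid, datname, usename, client_addr, client_port,
       backend_start, current_query
FROM pg_stat_activity
ORDER BY client_addr, backend_start;
```

Idle backends show <IDLE> in current_query, provided stats_command_string 
is enabled (it is on by default in 8.2).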




--- Shoaib Mir <[EMAIL PROTECTED]> wrote:


Can you show us the output for:

ps auxwww | grep ^postgres

that should explain a little more...

-
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 1/10/07, Ashish Karalkar
<[EMAIL PROTECTED]> wrote:

Hello All,
I am running PostgreSQL 8.2 on Red Hat Linux 4.
When I look at the processes that are using postgres
(using ps aux | more) I get lots of idle processes
with the server's own IP address. Can anybody please
tell me why this is happening?

Also, there are multiple processes for my single
client (same IP), of which most are idle. Do these
processes each represent a connection, or does a
single connection start many processes?



--

Shane Ambler
[EMAIL PROTECTED]

Get Sheeky @ http://Sheeky.Biz

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/