Re: [GENERAL] opened connection

2012-10-01 Thread Levente Kovacs
On Sun, 30 Sep 2012 20:24:47 -0700
Darren Duncan dar...@darrenduncan.net wrote:


 Unless you have very unique needs, keeping an open connection for
 days is just wrong anyway; if its for the sake of some user GUI or
 shell, there probably should be safeguards there to encourage users
 to not keep long-running transactions or connections.

Okay. Thanks for the answers. I implemented an alarm for the timeout, and I
close the connection when it has not been needed for a certain amount of time.

However, I have another question.
Calling PQfinish() on an already closed connection makes my program
segfault. Is this normal?

Thank you,
Levente

-- 
Levente Kovacs
CTO, CSO
http://levente.logonex.eu


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Planner choice on NULLs (9.1.5) with 2 conditional indexes

2012-10-01 Thread Marc Mamin
Hello,

here is a small test case to reproduce an issue from our production
system:


- one table with 2 columns (sid, ua) containing a lot of nulls
- two indexes with the same condition:
i_sid : on (sid)WHERE sid IS NOT NULL
i_ua  : on (ua,sid) WHERE sid IS NOT NULL


and a query with the clause  WHERE sid IS NOT NULL and ua IS NULL

It seems quite evident that the second index is better, as it allows both
conditions of the query to be resolved, but this appears to be an issue for
the planner, which prefers the first index.


best regards,

Marc Mamin




create table ptest (sid int, ua int);
  
insert into ptest 
  select null,null from generate_series (1,100000);

insert into ptest 
   select s%100,s%50 from generate_series (1,10000) s;

insert into ptest 
   select s%100,null from generate_series (1,1000) s;

create index i_sid on ptest(sid) WHERE sid IS NOT NULL;

create index i_ua on ptest(ua,sid) WHERE sid IS NOT NULL;

-- BEFORE ANALYZE:
--
explain analyze
select count(*) from ptest WHERE sid IS NOT NULL and ua IS NULL

http://explain.depesz.com/s/1n3 

Aggregate  (cost=402.71..402.72 rows=1 width=0) (actual time=0.297..0.297 rows=1 loops=1)
  ->  Bitmap Heap Scan on ptest  (cost=11.91..401.33 rows=552 width=0) (actual time=0.146..0.235 rows=1000 loops=1)
        Recheck Cond: ((ua IS NULL) AND (sid IS NOT NULL))
        ->  Bitmap Index Scan on i_ua  (cost=0.00..11.77 rows=552 width=0) (actual time=0.140..0.140 rows=1000 loops=1)
              Index Cond: ((ua IS NULL) AND (sid IS NOT NULL))
Total runtime: 0.331 ms


-- ANALYZED:

analyze ptest

explain analyze
select count(*) from ptest WHERE sid IS NOT NULL and ua IS NULL

http://explain.depesz.com/s/s6c 

Aggregate  (cost=711.59..711.60 rows=1 width=0) (actual time=1.842..1.842 rows=1 loops=1)
  ->  Bitmap Heap Scan on ptest  (cost=184.00..686.67 rows=9970 width=0) (actual time=1.677..1.780 rows=1000 loops=1)
        Recheck Cond: (sid IS NOT NULL)
        Filter: (ua IS NULL)
        ->  Bitmap Index Scan on i_sid  (cost=0.00..181.50 rows=10967 width=0) (actual time=0.826..0.826 rows=11000 loops=1)
              Index Cond: (sid IS NOT NULL)
Total runtime: 1.873 ms


I have also tried it with the best possible statistics, but the planner
still chooses the single-column index:

ALTER TABLE ptest ALTER sid SET STATISTICS 10000;
ALTER TABLE ptest ALTER ua SET STATISTICS 10000;
analyze ptest;

explain analyze
select count(*) from ptest WHERE sid IS NOT NULL and ua IS NULL

http://explain.depesz.com/s/Vjy9 
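
As a side note, one workaround sketch that might help on 9.1 (untested
against this exact case; the index name is only illustrative) is a partial
index whose predicate matches the query's full clause:

create index i_sid_ua_null on ptest(sid)
  WHERE sid IS NOT NULL AND ua IS NULL;

The query's WHERE clause is then implied by the index predicate, so only
the matching rows are in the index at all.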


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Planner choice on NULLs (9.1.5) with 2 conditional indexes

2012-10-01 Thread Marc Mamin
I've rechecked it on Postgres 9.2 and the issue seems to be fixed by now
:-)
sorry for the spam.

Marc

 -Original Message-
 From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
 ow...@postgresql.org] On Behalf Of Marc Mamin
 Sent: Montag, 1. Oktober 2012 09:34
 To: pgsql-general@postgresql.org
 Subject: [GENERAL] Planner choice on NULLs (9.1.5) with 2 conditional
 indexes
 
 Hello,
 
 here is a small test case to reproduce an issue from our production
 system:
 
 
 - one table with 2 columns (sid, ua) containing a lot of nulls
 - two indexes with the same condition:
 i_sid : on (sid)WHERE sid IS NOT NULL
 i_ua  : on (ua,sid) WHERE sid IS NOT NULL
 
 
 and a query with the clause  WHERE sid IS NOT NULL and ua IS NULL
 
 It is quite evident that the second index is better  as it allows to
 resolve the 2 conditions of the queries,
 but this seems to be an issue for the planner that prefers the first
 index
 
 
 beat regards,
 
 Marc Mamin
 
 
 
 
 create table ptest (sid int, ua int);
 
 insert into ptest
   select null,null from generate_series (1,100000);
 
 insert into ptest
    select s%100,s%50 from generate_series (1,10000) s;
 
 insert into ptest
select s%100,null from generate_series (1,1000) s;
 
 create index i_sid on ptest(sid) WHERE sid IS NOT NULL;
 
 create index i_ua on ptest(ua,sid) WHERE sid IS NOT NULL;
 
 -- BEFORE ANALYZE:
 --
 explain analyze
 select count(*) from ptest WHERE sid IS NOT NULL and ua IS NULL
 
 http://explain.depesz.com/s/1n3
 
 Aggregate  (cost=402.71..402.72 rows=1 width=0) (actual
 time=0.297..0.297 rows=1 loops=1)
   -  Bitmap Heap Scan on ptest  (cost=11.91..401.33 rows=552 width=0)
 (actual time=0.146..0.235 rows=1000 loops=1)
 Recheck Cond: ((ua IS NULL) AND (sid IS NOT NULL))
 -  Bitmap Index Scan on i_ua  (cost=0.00..11.77 rows=552
 width=0) (actual time=0.140..0.140 rows=1000 loops=1)
   Index Cond: ((ua IS NULL) AND (sid IS NOT NULL))
 Total runtime: 0.331 ms
 
 
 -- ANALYZED:
 
 analyze ptest
 
 explain analyze
 select count(*) from ptest WHERE sid IS NOT NULL and ua IS NULL
 
 http://explain.depesz.com/s/s6c
 
 Aggregate  (cost=711.59..711.60 rows=1 width=0) (actual
 time=1.842..1.842 rows=1 loops=1)
   -  Bitmap Heap Scan on ptest  (cost=184.00..686.67 rows=9970
 width=0)
 (actual time=1.677..1.780 rows=1000 loops=1)
 Recheck Cond: (sid IS NOT NULL)
 Filter: (ua IS NULL)
 -  Bitmap Index Scan on i_sid  (cost=0.00..181.50 rows=10967
 width=0) (actual time=0.826..0.826 rows=11000 loops=1)
   Index Cond: (sid IS NOT NULL)
 Total runtime: 1.873 ms
 
 
 I have also tried it with the best possible statistics, but the
planner
 still choose the single column index:
 
 ALTER TABLE ptest ALTER sid SET STATISTICS 10000;
 ALTER TABLE ptest ALTER ua SET STATISTICS 10000;
 analyze ptest;
 
 explain analyze
 select count(*) from ptest WHERE sid IS NOT NULL and ua IS NULL
 
 http://explain.depesz.com/s/Vjy9
 
 
 --
 Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
 To make changes to your subscription:
 http://www.postgresql.org/mailpref/pgsql-general


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] strange permission error

2012-10-01 Thread Mr Dash Four



You caused it yourself, then.  Don't do that.  (Or if you must,
it's your own responsibility to fix things when they break.  But
preventing read access to pg_catalog seems pretty crippling.)
  
I don't want arbitrary programs to have access to the system catalogue 
and read it willy-nilly, thanks.



FWIW, it's probably the 'user: ' || u_name expressions that result
in this specific failure.
  
I found the cause of this - I had to add an explicit cast on all 
text expressions, like 'user: '::text as well as u_name::text. That way 
the problem goes away, as it should.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade: out of memory

2012-10-01 Thread Carrington, Matthew (Produban)
Tom,

I traced through the problem with 'no symbol table' being created and managed 
to create a version of Postgres 9.2.1 with a symbol table. Here's the trace ...

Rebuild postgres with debugging ...

export OBJECT_MODE=64
export CFLAGS="-maix64 -g"
export LDFLAGS="-maix64 -Wl,-bbigtoc"
export AR="ar -X64"
export CC="/opt/freeware/bin/gcc -maix64"

./configure --enable-debug --prefix=/opt/serviceMonitoring/postgres_9.2.1 
--disable-thread-safety --enable-cassert

make
cd contrib
make
cd ..
make install
cd contrib
make install

... get the backtrace ...

gdb /opt/serviceMonitoring/postgres_9.2.1/bin/pg_dump
GNU gdb (GDB) 7.5
Copyright (C) 2012 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type show copying
and show warranty for details.
This GDB was configured as powerpc-ibm-aix5.1.0.0.
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/...
Reading symbols from 
/ukmetmon/data/dataCollection/postgres_9.2.1/bin/pg_dump...done.
(gdb) b exit_horribly
Breakpoint 1 at 0x1002e3b4: file dumputils.c, line 1314.
(gdb) run --port 65432 --username postgres --verbose --schema-only 
--binary-upgrade -f dump.out template1
Starting program: /ukmetmon/data/dataCollection/postgres_9.2.1/bin/pg_dump 
--port 65432 --username postgres --verbose --schema-only --binary-upgrade -f 
dump.out template1
pg_dump: reading schemas
pg_dump: reading user-defined tables
pg_dump: reading extensions
pg_dump: reading user-defined functions
pg_dump: reading user-defined types
pg_dump: reading procedural languages
pg_dump: reading user-defined aggregate functions

Breakpoint 1, exit_horribly (modulename=0x0, fmt=0x10006a590 out of memory\n)
at dumputils.c:1314
1314dumputils.c: A file or directory in the path name does not exist..
(gdb) bt
#0  exit_horribly (modulename=0x0, fmt=0x10006a590 out of memory\n) at 
dumputils.c:1314
#1  0x00010003247c in pg_malloc (size=0) at dumpmem.c:47
#2  0x00018f54 in getAggregates (fout=0x11000bad0, 
numAggs=0x73c)
at pg_dump.c:3614
#3  0x00010002fcec in getSchemaData (fout=0x11000bad0, 
numTablesPtr=0x8a4)
at common.c:145
#4  0x00011370 in main (argc=11, argv=0x950) at 
pg_dump.c:683
(gdb) quit
A debugging session is active.

Inferior 1 [process 483438] will be killed.

Quit anyway? (y or n) y

Hope that helps.

Matthew

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: 28 September 2012 16:27
To: Carrington, Matthew (Produban)
Subject: Re: [GENERAL] pg_upgrade: out of memory

Carrington, Matthew (Produban) matthew.carring...@produban.co.uk writes:
 ... presumably pg_extension is a 9.2 thing and I only have 9.0.1 installed.

I thought you were doing this test with 9.2?  Oh wait, this is 9.2
pg_dump against 9.0 server.  In that case leave off the EXISTS check:

SELECT tableoid, oid, proname AS aggname, 
pronamespace AS aggnamespace, 
pronargs, proargtypes, 
(SELECT rolname FROM pg_catalog.pg_roles WHERE oid = proowner) AS rolname, 
proacl AS aggacl 
FROM pg_proc p 
WHERE proisagg AND (
pronamespace != 
(SELECT oid FROM pg_namespace WHERE nspname = 'pg_catalog')
);

regards, tom lane



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] strange permission error

2012-10-01 Thread John R Pierce

On 10/01/12 1:20 AM, Mr Dash Four wrote:

FWIW, it's probably the 'user: ' || u_name expressions that result
in this specific failure.
I found the cause of this - I had to add an explicit cast on 
all text expressions, like 'user: '::text as well as u_name::text. 
That way the problem goes away, as it should.


that doesn't make any sense at all.

'user: ' *is* text by default. I didn't notice you displaying your 
table definitions, but assuming u_name is TEXT or VARCHAR(...), it 
should have worked without any explicit casts.


if you broke the permissions on the pg_catalog so badly that the SQL 
planner can't look up the data types of the fields of your own tables, 
well, that's just wrong.




--
john r pierceN 37, W 122
santa cruz ca mid-left coast



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] opened connection

2012-10-01 Thread Jasen Betts
On 2012-10-01, Levente Kovacs leventel...@gmail.com wrote:
 On Sun, 30 Sep 2012 20:24:47 -0700
 Darren Duncan dar...@darrenduncan.net wrote:


 Unless you have very unique needs, keeping an open connection for
 days is just wrong anyway; if its for the sake of some user GUI or
 shell, there probably should be safeguards there to encourage users
 to not keep long-running transactions or connections.

 Okay. Thanks for the answers. I implemented an alarm for the timeout, and I
 close the connection when it is not needed in certain amount of time.

 However, I have an other question.
 Calling PQfinish() on an already closed connection makes my program
 segfault. Is this normal?

Absolutely. Calling PQfinish() on any other pointer to unallocated
heap is likely to cause a segfault too, the same as calling fclose()
on a closed (or unopened) FILE*.




-- 
⚂⚃ 100% natural



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Again, problem with pgbouncer

2012-10-01 Thread Phoenix Kiula
Hi,

  - PG 9.0.10
  - Pgbouncer version 1.4.2

Not long ago, during the last server reboot for us, we had fixed the
really painful (and largely mysterious) process of setting up
pgbouncer.

File permissions and other mysteries were solved with help from Raghavendra:
http://permalink.gmane.org/gmane.comp.db.postgresql.pgbouncer.general/854

After a long while we rebooted our server today and, as if on cue,
pgbouncer has problems yet again :(

PG itself is running without problems.

The Pgbouncer process starts properly too. All the auth file, log file
etc are setup as mentioned in that URL above. We haven't changed
anything at all!

At first, just connecting via the pgbouncer port was giving the "no user"
error. Which is funny, because the authfile has been working without
problems forever. The .pgpass file had the same problems, and is still
the same all this time.

So, upon reading that old thread again, I guessed that the postgres
user permissions were needed, so I did this:

  chown -R postgres:postgres /etc/pgbouncer
  chown -R postgres:postgres /var/run/pgbouncer/
  chown postgres:postgres /var/log/pgbouncer.log
  chown postgres:postgres /var/lib/pgsql/pgbouncer.txt

Then restarted both PG and Pgbouncer.

Now pgbouncer won't do anything at all. Trying to connect to psql via
the pgbouncer port gives this error:

 psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket /tmp/.s.PGSQL.6789?


And in the log is this line:


 2012-10-01 06:12:00.703  3754 FATAL @src/main.c:553 in function
write_pidfile(): /var/run/pgbouncer/pgbouncer.pid: Permission denied
[13]


What now? Would appreciate some pointers.

Thanks.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Again, problem with pgbouncer

2012-10-01 Thread raghu ram
On Mon, Oct 1, 2012 at 3:56 PM, Phoenix Kiula phoenix.ki...@gmail.comwrote:

 Hi,

   - PG 9.0.10
   - Pgbouncer version 1.4.2

 Not long ago, during the last server reboot for us, we had fixed the
 really painful (and largely mysterious) process of setting up
 pgbouncer.

 File permissions and other mysteries were solved with help from
 Raghavendra:
 http://permalink.gmane.org/gmane.comp.db.postgresql.pgbouncer.general/854

 After a long we rebooted our server today and again, as if on cue,
 pgbouncer has problems yet again :(

 PG itself is running without problems.

 The Pgbouncer process starts properly too. All the auth file, log file
 etc are setup as mentioned in that URL above. We haven't changed
 anything at all!

 At first, just connecting via pgbouncer port was giving the no user
 error. Which is funny, because the authfile has been working without
 problems forever. The .pgpass file had the same problems, and is still
 the same all this time.

 So, upon reading that old thread again, I guessed that the postgres
 user permissions were needed, so I did this:

   chown -R postgres:postgres /etc/pgbouncer
   chown -R postgres:postgres /var/run/pgbouncer/
   chown postgres:postgres /var/log/pgbouncer.log
   chown postgres:postgres /var/lib/pgsql/pgbouncer.txt

 Then restarted both PG and Pgbouncer.

 Now pgbouncer won't do anything at all. Trying to connect to psql via
 the pgbouncer port gives this error:

  psql: could not connect to server: No such file or directory
 Is the server running locally and accepting
 connections on Unix domain socket /tmp/.s.PGSQL.6789?


 And in the log is this line:


  2012-10-01 06:12:00.703  3754 FATAL @src/main.c:553 in function
 write_pidfile(): /var/run/pgbouncer/pgbouncer.pid: Permission denied
 [13]


 What now? Would appreciate some pointers.

 Thanks.


Could you please check the permissions of the /var/run/pgbouncer/ directory.
If the pgbouncer directory does not have postgres user permissions, please
assign them and then start pgbouncer.

Looking at the error thrown by pgbouncer, the port shows up as 6789, whereas
the pgbouncer.ini file in the linked thread has 6389. Please use the
appropriate port while connecting via pgbouncer, i.e. the port number that is
in the pgbouncer.ini file.

-- 

Thanks & Regards,

Raghu Ram

EnterpriseDB Corporation

skypeid: raghu.ramedb

Blog:http://raghurc.blogspot.in/


Re: [GENERAL] pg_upgrade: out of memory

2012-10-01 Thread Andrew Hastie

Hi Tom/Matthew,

Just to chime in on this thread - I'm currently validating Postgres on 
AIXv7.1 and confirm that I also see the same error. I can reproduce the 
error with 9.2.1 and 9.2.0, but unlike Matthew I'm using a from-source 
build with the IBM xlc compiler rather than gcc.


I don't believe this is data limit related as I see the error when 
dumping database template1 under user postgres. Here's the output:-


[eg17ph01:ahastie] /ahastie $ pg_dump -v -U postgres template1 > test.psql
Password:
pg_dump: reading schemas
pg_dump: reading user-defined tables
pg_dump: reading extensions
pg_dump: reading user-defined functions
pg_dump: reading user-defined types
pg_dump: reading procedural languages
pg_dump: reading user-defined aggregate functions
pg_dump: out of memory
[eg17ph01:ahastie] /ahastie $ xlc -qversion
IBM XL C/C++ for AIX, V12.1 (5765-J02, 5725-C72)
Version: 12.01..0001

I've tried the requested SQL query which returns zero rows. Is this as 
expected ?
I will try the same with release 9.1.6 to see if we can pinpoint this as 
a potential AIX only issue or a 9.2.n issue. Also to confirm what 
Matthew has observed.


Regards,
Andrew



On 28/09/12 16:12, Tom Lane wrote:

Carrington, Matthew (Produban) matthew.carring...@produban.co.uk writes:

Reading symbols from 
/ukmetmon/data/dataCollection/postgres_9.2.1/bin/pg_dump...(no debugging 
symbols found)...done.

... hm, not sure why that didn't work, but anyway:


(gdb) bt
#0  0x00010002e354 in exit_horribly ()
#1  0x00010003243c in pg_malloc ()
#2  0x00018f14 in getAggregates ()
#3  0x00010002fcac in getSchemaData ()
#4  0x00011330 in main ()

getAggregates() doesn't do that much.  Can we see the results of the
query it would have been executing, namely

SELECT tableoid, oid, proname AS aggname,
pronamespace AS aggnamespace,
pronargs, proargtypes,
(SELECT rolname FROM pg_catalog.pg_roles WHERE oid = proowner) AS rolname,
proacl AS aggacl
FROM pg_proc p
WHERE proisagg AND (
pronamespace !=
(SELECT oid FROM pg_namespace WHERE nspname = 'pg_catalog')
  OR EXISTS(SELECT 1 FROM pg_depend WHERE
classid = 'pg_proc'::regclass AND
objid = p.oid AND
refclassid = 'pg_extension'::regclass AND
deptype = 'e'));


regards, tom lane





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] What's faster? BEGIN ... EXCEPTION or CREATE TEMP TABLE IF NOT EXISTS?

2012-10-01 Thread Moshe Jacobson
I am working on an audit logging trigger that gets called for every row
inserted, updated or deleted on any table.
For this, I need to store a couple of temporary session variables such as
the ID of the user performing the change, which can be set at the start of
the session.
Until now I have been using a permanent table to store the session
variables, but it has been difficult to wipe the data properly at the end
of the session.
So I have decided to try to implement them using temporary tables.

The problem is that for every row, I need to check for the
existence of the temporary table before I access it, in order to avoid
exceptions.
Either I can do all such accesses within a BEGIN...EXCEPTION block, or I
can precede any such accesses with CREATE TEMP TABLE IF NOT EXISTS.
Is one of these much faster than the other? Will I be slowing things down
inordinately by doing this for every row?
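
For concreteness, here is a minimal sketch of the second variant (table,
column and function names are only illustrative, not our real schema):

CREATE OR REPLACE FUNCTION audit_log_row() RETURNS trigger AS $$
BEGIN
    -- Guard every access: make sure the per-session variable table
    -- exists before touching it.
    CREATE TEMP TABLE IF NOT EXISTS session_vars (
        var_name  text PRIMARY KEY,
        var_value text
    );

    -- ... look up the acting user id in session_vars and write the
    -- audit record here ...

    RETURN NEW;   -- a real trigger would return OLD for DELETEs
END;
$$ LANGUAGE plpgsql;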

Thanks.

-- 
Moshe Jacobson
Nead Werx, Inc. | Senior Systems Engineer
2323 Cumberland Parkway, Suite 201 | Atlanta, GA 30339
mo...@neadwerx.com | www.neadwerx.com


Re: [GENERAL] pg_upgrade: out of memory

2012-10-01 Thread Tom Lane
Carrington, Matthew (Produban) matthew.carring...@produban.co.uk writes:
 pg_dump: reading user-defined aggregate functions

 Breakpoint 1, exit_horribly (modulename=0x0, fmt=0x10006a590 out of 
 memory\n)
 at dumputils.c:1314
 1314dumputils.c: A file or directory in the path name does not exist..
 (gdb) bt
 #0  exit_horribly (modulename=0x0, fmt=0x10006a590 out of memory\n) at 
 dumputils.c:1314
 #1  0x00010003247c in pg_malloc (size=0) at dumpmem.c:47
 #2  0x00018f54 in getAggregates (fout=0x11000bad0, 
 numAggs=0x73c)
 at pg_dump.c:3614

Oh!  Given your previous comment about there not being any user-defined
aggregates, I see what the problem is.  AIX must be one of the platforms
where malloc(0) is defined to return NULL rather than a pointer to a
zero-size block.  pg_malloc is not coping with that.

A quick fix would be

pg_malloc(size_t size)
{
    void       *tmp;

    tmp = malloc(size);
-   if (!tmp)
+   if (!tmp && size)
    {
        psql_error("out of memory\n");
        exit(EXIT_FAILURE);
    }

but I'm not sure if that's the best answer overall.  Will take it up in
-hackers.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade: out of memory

2012-10-01 Thread Carrington, Matthew (Produban)
Tom,

Yes, that looks right for AIX. Here's the relevant section from the malloc man 
page ...

Return Values

   Upon successful completion, the malloc subroutine returns a pointer to space
   suitably aligned for the storage of any type of object. If the size requested
   is 0, malloc returns NULL in normal circumstances. However, if the program was
   compiled with the defined _LINUX_SOURCE_COMPAT macro, malloc returns a valid
   pointer to a space of size 0.

   If the request cannot be satisfied for any reason, the malloc subroutine
   returns NULL.

Hope that helps.

Matthew

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: 01 October 2012 14:39
To: Carrington, Matthew (Produban)
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_upgrade: out of memory

Carrington, Matthew (Produban) matthew.carring...@produban.co.uk writes:
 pg_dump: reading user-defined aggregate functions

 Breakpoint 1, exit_horribly (modulename=0x0, fmt=0x10006a590 out of 
 memory\n)
 at dumputils.c:1314
 1314dumputils.c: A file or directory in the path name does not exist..
 (gdb) bt
 #0  exit_horribly (modulename=0x0, fmt=0x10006a590 out of memory\n) at 
 dumputils.c:1314
 #1  0x00010003247c in pg_malloc (size=0) at dumpmem.c:47
 #2  0x00018f54 in getAggregates (fout=0x11000bad0, 
 numAggs=0x73c)
 at pg_dump.c:3614

Oh!  Given your previous comment about there not being any user-defined
aggregates, I see what the problem is.  AIX must be one of the platforms
where malloc(0) is defined to return NULL rather than a pointer to a
zero-size block.  pg_malloc is not coping with that.

A quick fix would be

pg_malloc(size_t size)
{
    void       *tmp;

    tmp = malloc(size);
-   if (!tmp)
+   if (!tmp && size)
    {
        psql_error("out of memory\n");
        exit(EXIT_FAILURE);
    }

but I'm not sure if that's the best answer overall.  Will take it up in
-hackers.

regards, tom lane



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres error when adding new page

2012-10-01 Thread Peter Geoghegan
On 1 October 2012 14:47, Marco Craveiro marco.crave...@gmail.com wrote:
 Hello Postgres general

 We're experiencing a lot of errors when using CDash on PostgreSQL 9.1,
 hosted on Mac OSX 10.6.8. The actual error message is as follows:

 SQL error in Cannot insert test:
 utility/asserter/assert_file_returns_true_for_empty_files into the
 database():ERROR:  failed to add old item to the right sibling while
 splitting block 191 of index crc323

A call to PageAddItem(), made within _bt_pgaddtup(), is where this
failure seems to ultimately originate from. What we're missing here is
the reason for PageAddItem() returning InvalidOffsetNumber. That is
usually, though not necessarily, separately available within a WARNING
log message, which you haven't included here. Could you please let us
know if there is a WARNING that you didn't include just prior to the
ERROR?

-- 
Peter Geoghegan   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] opened connection

2012-10-01 Thread Tom Lane
Scott Marlowe scott.marl...@gmail.com writes:
 I've had problems with ssl connections dying on me.  For slony
 replication I had to make sure the connections were NOT ssl or they'd
 die and subscriptions would just keep repeating after getting 80%
 through and getting a connect error. This was with 8.4 on debian
 lenny.

That sounds like an artifact of the kluge solution some vendors used for
the SSL renegotiation security bug a couple years back: their patched
openssl libraries would simply kill the connection when a key
renegotiation was requested, which PG would do after transferring a
couple hundred megabytes.

We put in a workaround whereby you could prevent that by setting a GUC
variable to disable the renegotiation requests ... but if you're still
seeing such a problem today, you really need to complain to your distro
vendor.  Nobody should still be shipping such lobotomized libraries.
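
(The GUC in question is ssl_renegotiation_limit, IIRC; setting it to zero
turns renegotiation off entirely, either per session or in postgresql.conf:

-- or put "ssl_renegotiation_limit = 0" in postgresql.conf
SET ssl_renegotiation_limit = 0;
)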

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres error when adding new page

2012-10-01 Thread Marco Craveiro
Peter,

Thanks for your prompt reply.

A call to PageAddItem(), made within _bt_pgaddtup(), is where this
 failure seems to ultimately originate from. What we're missing here is
 the reason for PageAddItem() returning InvalidOffsetNumber. That is
 usually, though not necessarily, separately available within a WARNING
 log message, which you haven't included here. Could you please let us
 know if there is a WARNING that you didn't include just prior to the
 ERROR?


No warning I'm afraid. These are the statements I see in the Postgres log
file:

2012-10-01 13:09:12 WEST ERROR:  failed to add old item to the right
sibling while splitting block 191 of index crc323
2012-10-01 13:09:12 WEST STATEMENT:  INSERT INTO test
(projectid,crc32,name,path,command,details,output)
  VALUES
('2','2548249718','utility/xml/closing_an_open_text_reader_does_not_throw','./projects/utility/spec','e:\cmake\bin\cmake.exe
-E chdir
E:/mingw/msys/1.0/home/ctest/build/Continuous/dogen/mingw-1.0.17-i686-gcc-4.7/build/stage/bin
E:/mingw/msys/1.0/home/ctest/build/Continuous/dogen/mingw-1.0.17-i686-gcc-4.7/build/stage/bin/dogen_utility_spec
--run_test=xml/closing_an_open_text_reader_does_not_throw','Completed','UnVubmluZyAxIHRlc3QgY2FzZS4uLgoKKioqIE5vIGVycm9ycyBkZXRlY3RlZAo=')

These are repeated several times as CDash keeps on retrying. After a few
retries we succeed (the actual number of retries is variable - 8, 10, etc).

Cheers

Marco
-- 
So young, and already so unknown -- Pauli

blog: http://mcraveiro.blogspot.com


Re: [GENERAL] What's faster? BEGIN ... EXCEPTION or CREATE TEMP TABLE IF NOT EXISTS?

2012-10-01 Thread Merlin Moncure
On Mon, Oct 1, 2012 at 8:36 AM, Moshe Jacobson mo...@neadwerx.com wrote:
 I am working on an audit logging trigger that gets called for every row
 inserted, updated or deleted on any table.
 For this, I need to store a couple of temporary session variables such as
 the ID of the user performing the change, which can be set at the start of
 the session.
 Until now I have been using a permanent table to store the session
 variables, but it has been difficult to wipe the data properly at the end of
 the session.
 So I have decided to try to implement them using temporary tables.

 The problem now is that for every row now, I need to check for the existence
 of the temporary table before I access it, in order to avoid exceptions.
 Either I can do all such accesses within a BEGIN...EXCEPTION block, or I can
 precede any such accesses with CREATE TEMP TABLE IF NOT EXISTS.
 Is one of these much faster than the other? Will I be slowing things down
 inordinately by doing this for every row?

Couple points:
*) Functions without exception blocks are faster than those with.
*) Therefore, CREATE/IF NOT EXISTS is probably faster (test to be sure)
*) Carefully consider if you will ever in the future introduce
connection pooling.  If you do, relying on session scoped objects like
temp tables is probably not a good idea.
*) You can rig permanent tables around pg_backend_pid().  On session
login, clear session private records that have your pid (if any).
Transaction temporary data can be similarly rigged around
txid_current() with an even simpler maintenance process.  (A rough
sketch of the pg_backend_pid() variant follows below.)
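
A rough sketch of the pg_backend_pid() variant (untested, names are only
illustrative):

-- permanent table keyed by backend pid
CREATE TABLE session_vars (
    pid        int  NOT NULL DEFAULT pg_backend_pid(),
    var_name   text NOT NULL,
    var_value  text,
    PRIMARY KEY (pid, var_name)
);

-- on session login: clear leftovers from an earlier session that
-- happened to get the same backend pid, then set this session's values
DELETE FROM session_vars WHERE pid = pg_backend_pid();
INSERT INTO session_vars (var_name, var_value) VALUES ('acting_user', '42');

-- the audit trigger then reads:
--   SELECT var_value FROM session_vars
--   WHERE pid = pg_backend_pid() AND var_name = 'acting_user';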

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] strange hot_standby behaviour

2012-10-01 Thread pfote

Hi,

I had a very strange effect on the weekend that smells like a bug, so 
I'd like to share it.


Setup:
machine A: 16 CPU Cores (modern), 128GB RAM, nice 6-drive SAS Raid-10
machines B, C: 8 Cores (substantially older than A), 48GB Ram, some scsi 
Raid, substantially slower than A


The workload is about 80% - 90% SELECTs with heavy sorting and grouping, 
the remaining are INSERTs/UPDATEs/DELETEs.
So in the original setup A is the master, B and C are hot standbys 
that process some of the SELECTs, but by far the most processing is done 
on the master (A). The pg version is 9.0.6. CPU utilization is about 80% on 
the master and between 90-100% on the standbys, so it was decided to 
upgrade to the latest 9.2 to profit from the latest performance 
enhancements.


So B gets upgraded to 9.2.1-1.pgdg60+1 (from pgapt.debian.org) and 
becomes master, then A becomes a hot_standby slave that takes all the 
SELECTs (and C becomes another hot standby). In the beginning everything 
works as expected, CPU utilization drops from 80% to about 50-60%, 
selects run faster, everything looks smoother (some queries drop from 
5s to 1s due to 9.2's index-only-scan feature). It's Friday, everyone 
is happy.


About 16 hours later, Saturday morning around 6:00, A suddenly goes wild 
and has a CPU utilization of 100% without a change in the workload, out 
of the blue. Queries that used to take 1s suddenly take 5-10s, though the 
explain analyze plans of these queries haven't changed a bit. Switching 
the workload off causes the server to become idle. (While I'm writing 
this I realize we haven't tried to restart A.) Instead, $boss decides to 
switch back to the original setup, so B gets dropped, A becomes master 
and gets 100% of the workload (all SELECTs/INSERTs/UPDATEs/DELETEs), and 
everything becomes just like Friday: CPU usage drops to 50-60%, 
everything runs smoothly.


I'm not sure yet if this is replication related or a 9.2.1 problem. Any 
Ideas?


regards
Andreas Pfotenhauer
Ypsilon.NET AG


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres error when adding new page

2012-10-01 Thread Tom Lane
Marco Craveiro marco.crave...@gmail.com writes:
 We're experiencing a lot of errors when using CDash on PostgreSQL 9.1,
 hosted on Mac OSX 10.6.8. The actual error message is as follows:

 SQL error in Cannot insert test:
 utility/asserter/assert_file_returns_true_for_empty_files into the
 database():ERROR:  failed to add old item to the right sibling while
 splitting block 191 of index crc323

 I'm a bit stuck from here on. Is the likely reason for this problem
 filesystem corruption or am I barking at the wrong tree?

This definitely looks like index corruption, but blaming it on the
filesystem might be premature.  I'm wondering if this could be an
artifact of the WAL-replay bug fixed in 9.1.6.  I'd suggest updating
and then reindexing the index ...

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] strange hot_standby behaviour

2012-10-01 Thread Lonni J Friedman
On Mon, Oct 1, 2012 at 7:28 AM, pfote pf...@ypsilon.net wrote:
 Hi,

 I had a very strange effect on the weekend that smells like a bug, so i'd
 like so share it.

 Setup:
 machine A: 16 CPU Cores (modern), 128GB RAM, nice 6-drive SAS Raid-10
 machines B, C: 8 Cores (substantially older than A), 48GB Ram, some scsi
 Raid, substantially slower than A

 The workload is about 80% - 90% SELECTs with heavy sorting and grouping, the
 remaining are INSERTs/UPDATEs/DELETEs.
 So In the original setup A  is the master, B and C are hot standby's that
 process some of the SELECTs, but by far the most processing is done on the
 master (A). pg version is 9.0.6. CPU utilization is about 80% on the master
 and between 90-100% in the standby's, so it's decided to upgrade to the
 latest 9.2 to profit from the latest performance enhancements.

 So B gets upgraded to 9.2.1-1.pgdg60+1 (from pgapt.debian.org) and becomes
 master, then A becomes a hot_standby slave that takes all the SELECTs (and C
 becomes another hot_standby). In the beginning everything works as expected,
 CPU utilization drops from 80% to about 50-60%, selects run faster,
 everything looks smoother (some queries drop from 5s to 1s due to 9.2s
 index-only-scan feature). Its friday, everyone is happy.

 About 16 hours later, saturday morning around 6:00, A suddenly goes wild and
 has a CPU utilization of 100% without a change in the workload, out of the
 blue. Queries that used to take 1s suddenly take 5-10s, explain analyze
 plans of these queries havn't change a bit though. Switching the workload
 off causes the server to become idle. (while I'm writing this I realize we
 haven't tried to restart A). Instead, $boss decides to twitch back to the
 original setup, so B gets dropped, A becomes master and gets 100% of the
 workload (all SELECTs/INSERTs/UPDATEs/DELETEs), and everything becomes just
 like friday, CPU usage drops to 50-60%, everything runs smothly.

 I'm not sure yet if this is replication related or a 9.2.1 problem. Any
 Ideas?

This could be just about anything.  Which OS are you running?  Did you
check any logs when everything went crazy?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade: out of memory

2012-10-01 Thread Carrington, Matthew (Produban)
Tom,

Following on from that man page extract I tried a build using the suggested 
compiler macro (_LINUX_SOURCE_COMPAT) ...

export CC="/opt/freeware/bin/gcc -maix64"
export OBJECT_MODE=64
export CFLAGS="-D_LINUX_SOURCE_COMPAT -maix64 -g"
export LDFLAGS="-maix64 -Wl,-bbigtoc"
export AR="ar -X64"
export CC="/opt/freeware/bin/gcc -maix64"

./configure --enable-debug --prefix=/opt/serviceMonitoring/postgres_9.2.1 
--disable-thread-safety --enable-cassert

make
cd contrib
make
cd ..
make install
cd contrib
make install

... and tried it out ...

/opt/serviceMonitoring/postgres_9.2.1/bin/pg_dump --port 65432 --username 
postgres --verbose --schema-only --binary-upgrade -f dump.out template1

.. and the full dump as per pg_upgrade ...

/opt/serviceMonitoring/postgres_9.2.1/bin/pg_dumpall --port 65432 --username 
postgres --schema-only --binary-upgrade -f pg_upgrade_dump_all.sql

.. both of which worked without any problems.

Hope that helps.

Matthew

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us] 
Sent: 01 October 2012 14:39
To: Carrington, Matthew (Produban)
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_upgrade: out of memory

Carrington, Matthew (Produban) matthew.carring...@produban.co.uk writes:
 pg_dump: reading user-defined aggregate functions

 Breakpoint 1, exit_horribly (modulename=0x0, fmt=0x10006a590 out of 
 memory\n)
 at dumputils.c:1314
 1314dumputils.c: A file or directory in the path name does not exist..
 (gdb) bt
 #0  exit_horribly (modulename=0x0, fmt=0x10006a590 out of memory\n) at 
 dumputils.c:1314
 #1  0x00010003247c in pg_malloc (size=0) at dumpmem.c:47
 #2  0x00018f54 in getAggregates (fout=0x11000bad0, 
 numAggs=0x73c)
 at pg_dump.c:3614

Oh!  Given your previous comment about there not being any user-defined
aggregates, I see what the problem is.  AIX must be one of the platforms
where malloc(0) is defined to return NULL rather than a pointer to a
zero-size block.  pg_malloc is not coping with that.

A quick fix would be

pg_malloc(size_t size)
{
    void       *tmp;

    tmp = malloc(size);
-   if (!tmp)
+   if (!tmp && size)
    {
        psql_error("out of memory\n");
        exit(EXIT_FAILURE);
    }

but I'm not sure if that's the best answer overall.  Will take it up in
-hackers.

regards, tom lane



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] shared memory settings

2012-10-01 Thread Vick Khera
On Wed, Sep 26, 2012 at 5:39 AM, Alexander Shutyaev shuty...@gmail.comwrote:

 SHMALL = 2097152


SHMALL * PAGE_SIZE = 2097152 * 4096 = 8589934592

which is smaller than your requested allocation.

SHMALL and SHMMAX need to be sized together.  You likely want 4314090 as
your SHMALL.

But as Devrim says, are you sure you can handle that?  Perhaps you want a
connection pooler instead.


Re: [GENERAL] What's faster? BEGIN ... EXCEPTION or CREATE TEMP TABLE IF NOT EXISTS?

2012-10-01 Thread Moshe Jacobson
Merlin,

On Mon, Oct 1, 2012 at 10:28 AM, Merlin Moncure mmonc...@gmail.com wrote:


 Couple points:
 *) Functions without exception blocks are faster than those with.


Clearly.


 *) Therefore, CREATE/IF NOT EXISTS is probably faster (test to be sure)


I don't think that can be assumed by your premise above. Essentially we are
comparing the price of starting an exception block against checking the
catalog for a table.


 *) Carefully consider if you you will ever in the future introduce
 connection pooling.  If you do, relying on session scoped objects like
 temp tables is probably not a good idea.


We already use connection pooling with pgbouncer, but upon disconnect, it
issues a DISCARD ALL statement, which should take care of this.


 *) You can rig permanent tables around pg_backend_pid().  On session
 login, clear session private records that have your pid (if any).
 Transaction temporary data can be similarly rigged around
 txid_current() with an even simpler maintenance process.


We currently do use permanent tables using pg_backend_pid(). It's because
of the connection pooling specifically that we are having problems with
stale data. I have been unable to find a way to automatically clear that
data upon start or end of a session, or at least check if it's been set in
this session or not.

-- 
Moshe Jacobson
Nead Werx, Inc. | Senior Systems Engineer
2323 Cumberland Parkway, Suite 201 | Atlanta, GA 30339
mo...@neadwerx.com | www.neadwerx.com


[GENERAL] Securing .pgpass File?

2012-10-01 Thread Shaun Thomas

Hey,

So, I've searched around through the archives, and it seems this has 
come up a couple times in the past. But one scenario that was never 
explored was when using one .pgpass file in a cluster of servers, in 
which case it makes sense to save it in source control, or something 
like puppet/bcfg. So my question is this:


Has anyone come up with a good solution for distributing a .pgpass file 
that doesn't expose it to anyone who has access to the distribution 
mechanism?


I ask because several people can access and make pull requests to our 
configuration management system, but except for .pgpass, none of these 
files contain plain-text passwords. We have dozens of systems running 
PostgreSQL, and manually setting up each one is a waste of time; we have 
configuration management for a reason.


Am I just missing something, here?

Thanks, everyone!

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-444-8534
stho...@optionshouse.com

__

See http://www.peak6.com/email_disclaimer/ for terms and conditions related to 
this email


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres error when adding new page

2012-10-01 Thread Marco Craveiro
Hello Tom,

This definitely looks like index corruption, but blaming it on the
 filesystem might be premature.  I'm wondering if this could be an
 artifact of the WAL-replay bug fixed in 9.1.6.  I'd suggest updating
 and then reindexing the index ...


We are running 9.1.2 it seems:

select version();
version


 PostgreSQL 9.1.2 on x86_64-apple-darwin, compiled by
i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5666) (dot 3),
64-bit

We'll look into upgrading to 9.1.6.

Cheers

Marco
-- 
So young, and already so unknown -- Pauli

blog: http://mcraveiro.blogspot.com


Re: [GENERAL] What's faster? BEGIN ... EXCEPTION or CREATE TEMP TABLE IF NOT EXISTS?

2012-10-01 Thread Merlin Moncure
On Mon, Oct 1, 2012 at 10:21 AM, Moshe Jacobson mo...@neadwerx.com wrote:
 Merlin,

 On Mon, Oct 1, 2012 at 10:28 AM, Merlin Moncure mmonc...@gmail.com wrote:


 Couple points:
 *) Functions without exception blocks are faster than those with.


 Clearly.


 *) Therefore, CREATE/IF NOT EXISTS is probably faster (test to be sure)


 I don't think that can be assumed by your premise above. Essentially we are
 comparing the price of starting an exception block against checking the
 catalog for a table.

A vanilla create table has to scan the catalogs also.

 *) Carefully consider if you you will ever in the future introduce
 connection pooling.  If you do, relying on session scoped objects like
 temp tables is probably not a good idea.

 We already use connection pooling with pgbouncer, but upon disconnect, it
 issues a DISCARD ALL statement, which should take care of this.

Especially if you're using pgbouncer transaction mode, using temporary
tables is probably not a good idea.  When you DISCARD them, it
invalidates all your function plans which is going to be painful if
you have a lot of pl/pgsql (test any non-trivial pl/pgsql routine and
you'll see it's much slower on the first invocation).  Also, if you
need to share data between transactions, it's not clear how you're
organizing things so that different application sessions aren't going
to tromp over each other's data.

 *) You can rig permanent tables around pg_backend_pid().  On session
 login, clear session private records that have your pid (if any).
 Transaction temporary data can be similarly rigged around
 txid_current() with an even simpler maintenance process.

 We currently do use permanent tables using pg_backend_pid(). It's because of
 the connection pooling specifically that we are having problems with stale
 data. I have been unable to find a way to automatically clear that data upon
 start or end of a session, or at least check if it's been set in this
 session or not.

IMO the right way to do it is to generate a unique application token
(sequence is ok if you're not worried about it being guessed) when
your application session logs in.  That token should be passed into
*all* your session specific backend functions and can be used to
organize session specific temporary data in your permanent tables.

To deal with ungraceful application client exit, you can consider
implementing an on_proc_exit handler to close the session down so that
it can be appropriately cleaned up (there are severe limits to the SQL
you can execute in the handler but you can make dblink calls).  If
some of them still sneak through, a periodic sweep on stale pids
against pg_stat_activity should take care of them.

Note, if your users have some type of unique identifier (like a login
or an email) and if they are only allowed to have one active session
at a time, you can organize your session data around that instead of
generating a token.
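
A hedged sketch of the token idea plus the stale-pid sweep (again, all
names here are illustrative and untested):

-- hand out a token when the application session starts:
--   SELECT nextval('app_session_token_seq');
CREATE SEQUENCE app_session_token_seq;

CREATE TABLE app_session_vars (
    session_token bigint NOT NULL,
    backend_pid   int    NOT NULL DEFAULT pg_backend_pid(),
    var_name      text   NOT NULL,
    var_value     text,
    PRIMARY KEY (session_token, var_name)
);

-- periodic sweep against pg_stat_activity (the column is procpid
-- before 9.2 and pid from 9.2 on); caveat: this only works when a
-- backend pid really maps to one application session, i.e. not
-- under transaction-mode pooling
DELETE FROM app_session_vars v
WHERE NOT EXISTS (
    SELECT 1 FROM pg_stat_activity a WHERE a.pid = v.backend_pid
);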

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What's faster? BEGIN ... EXCEPTION or CREATE TEMP TABLE IF NOT EXISTS?

2012-10-01 Thread Merlin Moncure
On Mon, Oct 1, 2012 at 11:22 AM, Merlin Moncure mmonc...@gmail.com wrote:
 We currently do use permanent tables using pg_backend_pid(). It's because of
 the connection pooling specifically that we are having problems with stale
 data. I have been unable to find a way to automatically clear that data upon
 start or end of a session, or at least check if it's been set in this
 session or not.

 IMO the right way to do it is to generate a unique application token
 (sequence is ok if you're not worried about it being guessed) when
 your application session logs in.  That token should be passed into
 *all* your session specific backend functions and can be used to
 organize session specific temporary data in your permanent tables.

 To deal with ungraceful application client exit, you can consider
 implementing an on_proc_exit handler to close the session down so that

actually, you can't do that (on_proc_exit or scan for pids) if you're
using transaction mode connection pooling.   In our case, we modified
pgbouncer to pass async notifications and would have used that to
periodically scan connected clients if we didn't have the luxury of
one client/session only.

merlin


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Again, problem with pgbouncer

2012-10-01 Thread Phoenix Kiula
 Could you please check permission of /var/run/pgbouncer/ directory. If
 pgbouncer directory does not have postgres user permissions,please assign
 it and then start the pgbouncer.


The /var/run/pgbouncer/ directory has

   chown -R postgres:postgres ..

The port number everywhere is already 6789.

What else?

Thanks.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Securing .pgpass File?

2012-10-01 Thread Darren Duncan
You should never put your passwords (or private keys) in source control; it 
would be better to use the puppet/bcfg option.


Generally you want to keep your sensitive and less sensitive info separate.  If 
you have passwords in version control and later want to delete them, you have to 
mess with your version control history, and backups, and so on.  You really 
don't want to compromise the quality of your version control.


And if you ever put passwords in version control, you should change those 
passwords, so the copies in version control can be left alone, but no attacker 
can use them (assuming those aren't human-chosen and don't provide clues for 
discerning how other, newer passwords are chosen).


-- Darren Duncan

Shaun Thomas wrote:

Hey,

So, I've searched around through the archives, and it seems this has 
come up a couple times in the past. But one scenario that was never 
explored was when using one .pgpass file in a cluster of servers, in 
which case it makes sense to save it in source control, or something 
like puppet/bcfg. So my question is this:


Has anyone come up with a good solution for distributing a .pgpass file 
that doesn't expose it to anyone who has access to the distribution 
mechanism?


I ask because several people can access and make pull requests to our 
configuration management system, but except for .pgpass, none of these 
files contain plain-text passwords. We have dozens of systems running 
PostgreSQL, and manually setting up each one is a waste of time; we have 
configuration management for a reason.


Am I just missing something, here?

Thanks, everyone!





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Securing .pgpass File?

2012-10-01 Thread Shaun Thomas

On 10/01/2012 12:19 PM, Darren Duncan wrote:


You should never put your passwords (or private keys) in source control;
it would be better to use the puppet/bcfg option.


That was kind of my point. Puppet / Bcfg2 have the same problem. About a 
dozen more people have access to our bcfg2 repo than I would want knowing 
the contents of .pgpass.


We have twenty machines. If I ever change that file, I have to change it 
in 20 places. I'd love to put it in bcfg2, but that necessitates 
allowing anyone with access to bcfg2 the ability to read it. No go.


You basically just reiterated my question back to me. ;) I'd like to 
*stop* manually copying the files around, but can't because they're 
completely plain text. It doesn't matter if it's source control, puppet, 
bcfg2, cfengine, or anything else; unauthorized people can read them, 
and I rather they didn't.


Encrypted passwords would be nice, but apparently this isn't an option.

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-444-8534
stho...@optionshouse.com

__

See http://www.peak6.com/email_disclaimer/ for terms and conditions related to 
this email


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] strange hot_standby behaviour

2012-10-01 Thread Andreas Pfotenhauer

This could be just about anything.  Which OS are you running?  Did you
check any logs when everything went crazy?

Sorry, should have been more verbose.

The OS is Debian squeeze; the pg installed is the latest 9.2.1 from 
pgapt.debian.net. Logs were checked, no errors/warnings at all. Query 
plans have been checked, no changes before/after. The changed behavior 
only manifested in a drop in the number of transactions/second 
executed and the massively increased load on the machine. No increased 
I/O or anything.


Since A has been promoted to master, everything behaves as expected; 
the 9.2 outperforms the 9.0 easily.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade: out of memory

2012-10-01 Thread Andrew Hastie

Tom/Matthew,

I also tried the same macro with the xlc compiler with similar results 
in that pg_dump now works as expected :-)


For info here's my build setup:-
./configure CC=xlc LIBS="-lssl -lcrypto -lz -lreadline -lcurses -lld -lmass -lm" \
    CFLAGS="-qlanglvl=extc89 -D_LINUX_SOURCE_COMPAT" \
    --with-template=aix --prefix=/home/ahastie/pgbuild \
    --with-includes=/opt/freeware/include \
    --with-libraries=/opt/freeware/lib
gmake
gmake check
gmake install

Tom: Is this something we should get added into the AIX Platform 
specific notes?


Regards,
Andrew



On 01/10/12 15:50, Carrington, Matthew (Produban) wrote:

Tom,

Following on from that man page extract I tried a build using the suggested 
compiler macro (_LINUX_SOURCE_COMPAT) ...

export CC="/opt/freeware/bin/gcc -maix64"
export OBJECT_MODE=64
export CFLAGS="-D_LINUX_SOURCE_COMPAT -maix64 -g"
export LDFLAGS="-maix64 -Wl,-bbigtoc"
export AR="ar -X64"
export CC="/opt/freeware/bin/gcc -maix64"

./configure --enable-debug --prefix=/opt/serviceMonitoring/postgres_9.2.1 
--disable-thread-safety --enable-cassert

make
cd contrib
make
cd ..
make install
cd contrib
make install

... and tried it out ...

/opt/serviceMonitoring/postgres_9.2.1/bin/pg_dump --port 65432 --username 
postgres --verbose --schema-only --binary-upgrade -f dump.out template1

.. and the full dump as per pg_upgrade ...

/opt/serviceMonitoring/postgres_9.2.1/bin/pg_dumpall --port 65432 --username 
postgres --schema-only --binary-upgrade -f pg_upgrade_dump_all.sql

.. both of which worked without any problems.

Hope that helps.

Matthew

-Original Message-
From: Tom Lane [mailto:t...@sss.pgh.pa.us]
Sent: 01 October 2012 14:39
To: Carrington, Matthew (Produban)
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] pg_upgrade: out of memory

Carrington, Matthew (Produban) matthew.carring...@produban.co.uk writes:

pg_dump: reading user-defined aggregate functions
Breakpoint 1, exit_horribly (modulename=0x0, fmt=0x10006a590 "out of memory\n")
    at dumputils.c:1314
1314    dumputils.c: A file or directory in the path name does not exist..
(gdb) bt
#0  exit_horribly (modulename=0x0, fmt=0x10006a590 "out of memory\n")
    at dumputils.c:1314
#1  0x00010003247c in pg_malloc (size=0) at dumpmem.c:47
#2  0x00018f54 in getAggregates (fout=0x11000bad0, numAggs=0x73c)
    at pg_dump.c:3614

Oh!  Given your previous comment about there not being any user-defined
aggregates, I see what the problem is.  AIX must be one of the platforms
where malloc(0) is defined to return NULL rather than a pointer to a
zero-size block.  pg_malloc is not coping with that.

A quick fix would be

pg_malloc(size_t size)
{
    void       *tmp;

    tmp = malloc(size);
-   if (!tmp)
+   if (!tmp && size)
    {
        psql_error("out of memory\n");
        exit(EXIT_FAILURE);
    }

but I'm not sure if that's the best answer overall.  Will take it up in
-hackers.

regards, tom lane






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Re: Need help in reclaiming disk space by deleting the selected records

2012-10-01 Thread Yelai, Ramkumar IN BLR STS


-Original Message-
From: Albe Laurenz [mailto:laurenz.a...@wien.gv.at] 
Sent: Friday, September 28, 2012 1:07 PM
To: Yelai, Ramkumar IN BLR STS; pgsql-general@postgresql.org
Cc: scrawf...@pinpointresearch.com; and...@2ndquadrant.com
Subject: RE: [GENERAL] Re: Need help in reclaiming disk space by deleting the 
selected records

Yelai, Ramkumar wrote:
 Anything exceeding a few hundred partitions is not considered a good
idea.
 The system needs to keep track of all the tables, and query planning
for such a partitioned table
 might be expensive.
 
 1440 is probably pushing the limits, but maybe somebody with more
experience can say more.

 By mistake I added 1440 tables, but it is incorrect, below is the
total number of tables
 
 7 base tables X 120 months = 840 child tables.
 
 As per your statement, If I create these many table then it will
affect the performance. But as per
 the document
(http://www.postgresql.org/docs/9.1/static/ddl-partitioning.html)
 constraint_exclusion will improve query performance. Please clarify me
here how query planning will be
 expensive?

The planner will have to decide which of the 840 tables to access.

 I have one more strategy that instead of creating 10 years, I'd like
to use batch processing like
 create 2 years of tables ( 240 tables ) and when we are going above 2
years we will create next 2 year
 table and update the trigger or use 5 years (480 tables ) instead of 2
years.
 
 The above approach will not create a more partitioned table and if
user wants space they can truncate
 the old tables. Please let me know is this good approach?

I don't understand that in detail.

I would recommend that you prototype some of these variants and run some 
performance tests.  That's the only good way to know what will perform well in 
your environment.

Yours,
Laurenz Albe

Thanks Laurenz Albe.

After going through the articles below, I understand how query planning 
works for partitioned tables and its limits.

http://stackoverflow.com/questions/6104774/how-many-table-partitions-is-too-many-in-postgres
http://postgresql.1045698.n5.nabble.com/Table-partitioning-td3410542.html


As per our functionality (the 7 tables represent 7 unique archiving logs), 
we will not be querying all 840 tables or all 7 base tables at the same time; 
each unique archiving log table will have only 120 child tables, so the 
planner only has to decide which of the 120 tables to access.

In addition to this, at any time we will be reading only one partition table 
among the 120 tables as per our computations. These computations do not use 
any joins or combine the partition tables.

As Laurenz said, I will build a prototype and check the query plans 
based on our queries.
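A minimal sketch of that kind of prototype (table and column names here are 
hypothetical) makes it easy to confirm with EXPLAIN that constraint exclusion 
prunes the child tables a query does not touch:

-- Hypothetical sketch: one archive-log parent with monthly children.
CREATE TABLE archive_log (log_time timestamptz NOT NULL, payload text);

CREATE TABLE archive_log_2012_10 (
    CHECK (log_time >= '2012-10-01' AND log_time < '2012-11-01')
) INHERITS (archive_log);

CREATE TABLE archive_log_2012_11 (
    CHECK (log_time >= '2012-11-01' AND log_time < '2012-12-01')
) INHERITS (archive_log);

SET constraint_exclusion = partition;  -- the default setting since 8.4

-- The plan should scan only archive_log and archive_log_2012_10.
EXPLAIN SELECT * FROM archive_log
WHERE log_time >= '2012-10-05' AND log_time < '2012-10-06';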

Please let me know if you have any points or suggestions.

Thanks & regards,
Ramkumar






-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Securing .pgpass File?

2012-10-01 Thread Steve Atkins

On Oct 1, 2012, at 10:26 AM, Shaun Thomas stho...@optionshouse.com wrote:

 On 10/01/2012 12:19 PM, Darren Duncan wrote:
 
 You should never put your passwords (or private keys) in source control;
 it would be better to use the puppet/bcfg option.
 
 That was kind of my point. Puppet / Bcfg2 have the same problem. About a 
 dozen people have access to our bcfg2 repo than I would want to know the 
 contents of .pgpass.
 
 We have twenty machines. If I ever change that file, I have to change it in 
 20 places. I'd love to put it in bcfg2, but that necessitates allowing anyone 
 with access to bcfg2 the ability to read it. No go.
 
 You basically just reiterated my question back to me. ;) I'd like to *stop* 
 manually copying the files around, but can't because they're completely plain 
 text. It doesn't matter if it's source control, puppet, bcfg2, cfengine, or 
 anything else; unauthorized people can read them, and I rather they didn't.
 
 Encrypted passwords would be nice, but apparently this isn't an option.

If the passwords were encrypted, you'd also need to distribute the password to 
decrypt the password.

You could obfuscate the passwords (with something that's somewhat equivalent to 
rot13) which would help with shoulder surfing, but you'd still be distributing 
a secret that's equivalent to a password. That's something you could do without 
any support from postgresql though - just deobfuscate as part of the 
distribution process.

Authentication that isn't based on a secret token would be one way to sidestep 
the issue - source IP based, for instance.

Cheers,
  Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] pg_upgrade: out of memory

2012-10-01 Thread Tom Lane
Andrew Hastie and...@ahastie.net writes:
 Tom: Is this something we should get added into the AIX Platform 
 specific notes?

No, it's something we need to fix.  See hackers thread:
http://archives.postgresql.org/pgsql-hackers/2012-10/msg00029.php

That #define isn't a bad quick-workaround if you don't want to touch
the code, but it's not a useful solution from our standpoint because
it only fixes this on AIX.

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Securing .pgpass File?

2012-10-01 Thread Jeff Janes
On Mon, Oct 1, 2012 at 10:26 AM, Shaun Thomas stho...@optionshouse.com wrote:
 On 10/01/2012 12:19 PM, Darren Duncan wrote:

 You should never put your passwords (or private keys) in source control;
 it would be better to use the puppet/bcfg option.


 That was kind of my point. Puppet / Bcfg2 have the same problem. About a
 dozen people have access to our bcfg2 repo than I would want to know the
 contents of .pgpass.

 We have twenty machines. If I ever change that file, I have to change it in
 20 places. I'd love to put it in bcfg2, but that necessitates allowing
 anyone with access to bcfg2 the ability to read it. No go.


Who are those people?  Do they have administrative access to the 20
machines?  If so, it seems to me that the game is already over.  If
not, what mechanism do you use to keep them out?  Perhaps that
mechanism could be extended to cover this case as well; or use
host-based authentication on the PG server.

 You basically just reiterated my question back to me. ;) I'd like to *stop*
 manually copying the files around, but can't because they're completely
 plain text. It doesn't matter if it's source control, puppet, bcfg2,
 cfengine, or anything else; unauthorized people can read them, and I rather
 they didn't.

I'm not familiar with those tools, at least not at an administrative
level.  Don't they allow tiered access so that some things can have
stricter access controls?

 Encrypted passwords would be nice, but apparently this isn't an option.

I don't see how that can work.  It sounds like an infinite regress.
How do you distribute the key without exposing it?

Cheers,

Jeff


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Pg, Netezza, and... Sybase?

2012-10-01 Thread yary
Is there a link between Sybase and Postgres?

I ask because I came across a bug in Netezza, and Netezza has a
well-known Postgres lineage, but when web-searching the bug, the first
thing I found was a Sybase reference-

http://geekswithblogs.net/marocanu2001/archive/2011/08/11/nasty-bug-in-sybase-iq-analytical-function-last_value-over-partition.aspx

and indeed that is the exact bug I found in Netezza! first_value
works great, but last_value has window size 1 unless you also say "rows
between unbounded preceding and unbounded following".

Two unrelated commercial products with the same bug make me wonder...
does/did Postgres also have this issue, with both NZ and Sybase
importing the same buggy code?


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Thousands of schemas and ANALYZE goes out of memory

2012-10-01 Thread Hugo Nabble
Hi everyone,

We have two postgresql 9.0 databases (32-bit) with more than 10,000
schemas. When we try to run ANALYZE in those databases we get errors like
this (after a few hours):

2012-09-14 01:46:24 PDT ERROR:  out of memory 
2012-09-14 01:46:24 PDT DETAIL:  Failed on request of size 421. 
2012-09-14 01:46:24 PDT STATEMENT:  analyze; 

(Note that we do have plenty of memory available for postgresql:
shared_buffers=2048MB, work_mem=128MB, maintenance_work_mem=384MB,
effective_cache_size = 3072MB, etc.)

We have other similar databases with fewer than 10,000 schemas and ANALYZE
works fine with them (they run on similar machines and configs). For now, we
had to create shell scripts to run ANALYZE per schema, table by table. It
works that way, so at least we have an alternative solution. But what
exactly causes the out of memory? Is postgresql trying to run everything in
a single transaction? Maybe this should be improved in future releases.
Please let me know what you guys think.
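For reference, the commands themselves can be generated from the catalogs and
fed back into psql; a sketch (the schema filter is an assumption):

-- Sketch: emit one ANALYZE statement per user table; run the output back
-- through psql, e.g.  psql -At -f gen_analyze.sql mydb | psql mydb
SELECT 'ANALYZE ' || quote_ident(n.nspname) || '.' || quote_ident(c.relname) || ';'
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');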

Thanks in advance,
Hugo



--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Thousands-of-schemas-and-ANALYZE-goes-out-of-memory-tp5726198.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Securing .pgpass File?

2012-10-01 Thread Shaun Thomas

On 10/01/2012 02:05 PM, Jeff Janes wrote:


Who are those people?  Do they have administrative access to the 20
machines?  If so, it seems to me that the game is already over.  If
not, what mechanism do you use to keep them out?  Perhaps that
mechanism could be extended to cover this case as well; or use
host-based authentication on the PG server.


Anyone from a NOC guy to a DBA or systems admin. In the case of the systems 
admins, yeah, they have root and could see everything anyway. But 
someone just administering Nagios settings would get access to .pgpass 
if we put it into bcfg2. Yuck. :)


Host-based auth works if you want everything to run under the super-user 
admin account. That's not necessarily true for some one-off utility that 
should have a better sandbox around it. Creating a whole user on the box to 
run one or two tools seems a bit silly as a work-around. It also doesn't 
solve connections that *must* be remote, such as replication, or 
third-party tools which connect over TCP, like Bucardo or Slony. 
Each of these has a specialized user on our systems, and uses .pgpass to 
avoid unnecessary password proliferation.



I'm not familiar with those tools, at least not at an administrative
level.  Don't they allow tiered access so that some things can have
stricter access controls?


No. :(

I can't remember about Puppet since I haven't used it in so long, but 
bcfg2 is basically just a giant directory structure, and we put ours in 
GIT for safekeeping and to track changes. Implementing ACLs in GIT is a 
bit of a PITA, so we're avoiding that as a last resort.



I don't see how that can work.  It sounds like an infinite regress.
How do you distribute the key without exposing it?


No idea. That's why I asked. ;)

I figured we can't be the only company out there with a bunch of servers 
who is tired of manually copying a .pgpass file everywhere, and someone 
has since devised something workable. It doesn't hurt to ask before I 
spend a bunch of time building something. I try to avoid NIH syndrome 
when possible.


I'm fine with that answer, but I had to at least broach the subject.

Thanks, Jeff. :)

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-444-8534
stho...@optionshouse.com

__

See http://www.peak6.com/email_disclaimer/ for terms and conditions related to 
this email


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Pg, Netezza, and... Sybase?

2012-10-01 Thread Bruce Momjian
On Mon, Oct  1, 2012 at 03:49:14PM -0400, yary wrote:
 Is there a link between Sybase and Postgres?
 
 I ask because I came across a bug in Netezza, and Netezza has a
 well-known Postgres lineage, but when web-searching the bug, the first
 thing I found was a Sybase reference-
 
 http://geekswithblogs.net/marocanu2001/archive/2011/08/11/nasty-bug-in-sybase-iq-analytical-function-last_value-over-partition.aspx
 
 and indeed that is the exact bug I found in Netezza! first_value
 works great, last_value has window size 1 unless you also say rows
 between unbounded preceding and unbounded following.
 
 Two unrelated commercial products with the same bug makes me wonder...
 does/did Postgres also have this issue, with both NZ and Sybase
 importing the same buggy code?

There is no code connection between Sybase and Postgres.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Thousands of schemas and ANALYZE goes out of memory

2012-10-01 Thread Tom Lane
Hugo Nabble hugo.t...@gmail.com writes:
 We have two postgresql 9.0 databases (32-bits) with more than 10,000
 schemas. When we try to run ANALYZE in those databases we get errors like
 this (after a few hours):

 2012-09-14 01:46:24 PDT ERROR:  out of memory 
 2012-09-14 01:46:24 PDT DETAIL:  Failed on request of size 421. 
 2012-09-14 01:46:24 PDT STATEMENT:  analyze; 

I doubt that the number of *schemas* is a big deal here, but the number
of *tables* might well be.  How many?  Also, 9.0.what?

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Pg, Netezza, and... Sybase?

2012-10-01 Thread Tom Lane
yary not@gmail.com writes:
 Is there a link between Sybase and Postgres?

Not that I've ever heard of.

 I ask because I came across a bug in Netezza, and Netezza has a
 well-known Postgres lineage, but when web-searching the bug, the first
 thing I found was a Sybase reference-

 http://geekswithblogs.net/marocanu2001/archive/2011/08/11/nasty-bug-in-sybase-iq-analytical-function-last_value-over-partition.aspx

 and indeed that is the exact bug I found in Netezza! first_value
 works great, last_value has window size 1 unless you also say rows
 between unbounded preceding and unbounded following.

That isn't a bug, it's the behavior required by the SQL standard.  The
blogger you cite has apparently not bothered to read same (or much of
any documentation), or he would know that the default window frame is
NOT unbounded preceding to unbounded following.  Our own docs point out
specifically that you probably want a nondefault frame for last_value
--- see http://www.postgresql.org/docs/9.2/static/functions-window.html
towards the bottom of the page.
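A small sketch of the behaviour being described:

-- With the default frame (RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW),
-- last_value() returns the current row's value; an explicit frame changes that.
SELECT x,
       last_value(x) OVER (ORDER BY x) AS default_frame,
       last_value(x) OVER (ORDER BY x
                           ROWS BETWEEN UNBOUNDED PRECEDING
                                    AND UNBOUNDED FOLLOWING) AS full_frame
FROM generate_series(1, 3) AS t(x);
-- x = 1,2,3 gives default_frame = 1,2,3 and full_frame = 3,3,3.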

regards, tom lane


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Pg, Netezza, and... Sybase?

2012-10-01 Thread David Johnston
 
 and indeed that is the exact bug I found in Netezza! first_value
 works great, last_value has window size 1 unless you also say rows
 between unbounded preceding and unbounded following.
 

What Tom said but:

"...has window size 1" is not correct.  The window size is larger, but the
returned value is always just going to be the current (last) row, so it only
appears to be size 1 superficially.

The query's frame/range/window specification is independent of the functions
that use the window.

If you are confused as to what is currently in the frame specification, you
should use something like

ARRAY_AGG(...) OVER (...)

to capture the relevant frame data into a single value.
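For example, a small sketch of that idea:

-- Sketch: show the frame contents next to last_value() so it is obvious
-- which rows each window function is actually looking at.
SELECT x,
       last_value(x) OVER w AS last_in_frame,
       array_agg(x)  OVER w AS frame_contents
FROM generate_series(1, 3) AS t(x)
WINDOW w AS (ORDER BY x);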

David J.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] What's faster? BEGIN ... EXCEPTION or CREATE TEMP TABLE IF NOT EXISTS?

2012-10-01 Thread Moshe Jacobson
On Mon, Oct 1, 2012 at 12:22 PM, Merlin Moncure mmonc...@gmail.com wrote:

  *) Functions without exception blocks are faster than those with.
  *) Therefore, CREATE/IF NOT EXISTS is probably faster (test to be sure)
 
  I don't think that can be assumed by your premise above. Essentially we
 are
  comparing the price of starting an exception block against checking the
  catalog for a table.

 A vanilla create table has to scan the catalogs also.


Yes but that is irrelevant to the discussion. I am comparing the speed of
repeated table existence checks with the speed of repeated exception blocks
that access said table.

 We already use connection pooling with pgbouncer, but upon disconnect, it
  issues a DISCARD ALL statement  [...]

 Especially if you're using pgbouncer transaction mode, using temporary
 tables is probably not a good idea.


We are using it in session mode, so none of that is relevant to my
situation.

  *) You can rig permanent tables around pg_backend_pid(). [...]
 
  We currently do use permanent tables using pg_backend_pid(). It's
 because of
  the connection pooling specifically that we are having problems with
 stale
  data. I have been unable to find a way to automatically clear that data
 upon
  start or end of a session, or at least check if it's been set in this
  session or not.

 IMO the right way to do it is to generate a unique application token
 [...] when your application session logs in.  That token should be passed
 into
 *all* your session specific backend functions [...]


No, this will not work because the backend functions are trigger functions,
so they cannot be passed this data.
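One workaround sometimes suggested for exactly this limitation is to stash the
token in a session-local setting at login and read it from the trigger with
current_setting(); a sketch only, with hypothetical names (and pre-9.2 servers
would need custom_variable_classes to allow the prefix):

-- Sketch: a per-session token visible inside trigger functions.
SELECT set_config('myapp.session_token', 'generated-at-login', false);

CREATE OR REPLACE FUNCTION stamp_session_token() RETURNS trigger AS $$
BEGIN
    -- current_setting() works inside triggers, so nothing has to be
    -- passed to the trigger function itself.
    NEW.session_token := current_setting('myapp.session_token');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;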

Thanks.

-- 
Moshe Jacobson
Nead Werx, Inc. | Senior Systems Engineer
2323 Cumberland Parkway, Suite 201 | Atlanta, GA 30339
mo...@neadwerx.com | www.neadwerx.com


Re: [GENERAL] Thousands of schemas and ANALYZE goes out of memory

2012-10-01 Thread Hugo Nabble
 I doubt that the number of *schemas* is a big deal here, but the number
 of *tables* might well be.  How many?  Also, 9.0.what?

Each schema has 22 tables, so we can count at least 22 x 10,000 = 220,000
tables.
The postgresql version is 9.0.7-1.

Regards,
Hugo




--
View this message in context: 
http://postgresql.1045698.n5.nabble.com/Thousands-of-schemas-and-ANALYZE-goes-out-of-memory-tp5726198p5726212.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Securing .pgpass File?

2012-10-01 Thread Alan Hodgson
On Monday, October 01, 2012 03:10:43 PM Shaun Thomas wrote:
 I can't remember about Puppet since I haven't used it in so long, but
 bcfg2 is basically just a giant directory structure, and we put ours in
 GIT for safekeeping and to track changes. Implementing ACLs in GIT is a
 bit of a PITA, so we're avoiding that as a last resort.

You could maybe put the pgpass stuff in a separate repo with different 
permissions and set it up as a submodule. Haven't tried it.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to search for composite type array

2012-10-01 Thread ChoonSoo Park
Hello postgresql gurus,

I want to have an array of composite type in a table. Retrieving/saving
value work fine.

I just wonder how I can search against composite type array.

CREATE TYPE CompXYZ AS (
 attr1 integer,
 attr2 text,
 attr3 inet
);

CREATE TABLE sample (
 id integer not null primary key,
 list   CompXYZ[]
);

insert into sample values (1, '{"(1,abc,127.0.0.1)","(5,def,10.0.1.2)"}');
insert into sample values (2, '{"(10,hello,127.0.0.1)","(20,def,10.0.1.2)"}');
insert into sample values (3, '{"(20,hello,10.1.1.1)","(30,there,10.1.1.2)"}');

How can I search for a row containing 'hello' in attr2?

I know if I have a separate table for saving 3 attributes along with
foreign key to sample table, then I can achieve my goal.
I just want to know if there is a way to do the same thing using composite
array.

Thank you,
Choon Park


Re: [GENERAL] Pg, Netezza, and... Sybase?

2012-10-01 Thread yary
Thanks to all for the education! The bug was in my understanding (and
that blogger's)... and the diverse SQL implementations are doing what
they're meant to. I'll read up more.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Thousands of schemas and ANALYZE goes out of memory

2012-10-01 Thread Jeff Janes
On Mon, Oct 1, 2012 at 12:52 PM, Hugo Nabble hugo.t...@gmail.com wrote:
 Hi everyone,

 We have two postgresql 9.0 databases (32-bits)

Why 32 bits?  Is that what your hardware is?

 with more than 10,000
 schemas. When we try to run ANALYZE in those databases we get errors like
 this (after a few hours):

 2012-09-14 01:46:24 PDT ERROR:  out of memory
 2012-09-14 01:46:24 PDT DETAIL:  Failed on request of size 421.
 2012-09-14 01:46:24 PDT STATEMENT:  analyze;

 (Note that we do have plenty of memory available for postgresql:
 shared_buffers=2048MB, work_mem=128MB, maintenance_work_mem=384MB,
 effective_cache_size = 3072MB, etc.)

That might be the problem.  I think with 32 bits, you only have 2GB of
address space available to any given process, and you just allowed
shared_buffers to grab all of it.

Cheers,

Jeff


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to search for composite type array

2012-10-01 Thread David Johnston
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of ChoonSoo Park
Sent: Monday, October 01, 2012 5:50 PM
To: pgsql-general@postgresql.org
Subject: [GENERAL] How to search for composite type array

Hello postgresql gurus,

I want to have an array of composite type in a table. Retrieving/saving
value work fine.

I just wonder how I can search against composite type array.

CREATE TYPE CompXYZ AS (
 attr1 integer,
 attr2 text,
 attr3 inet
);

CREATE TABLE sample (
 id integer not null primary key,
 list   CompXYZ[]
);

insert into sample values (1, '{"(1,abc,127.0.0.1)","(5,def,10.0.1.2)"}');
insert into sample values (2, '{"(10,hello,127.0.0.1)","(20,def,10.0.1.2)"}');
insert into sample values (3, '{"(20,hello,10.1.1.1)","(30,there,10.1.1.2)"}');

How can I search for a row containing 'hello' in attr2?

I know if I have a separate table for saving 3 attributes along with foreign
key to sample table, then I can achieve my goal.
I just want to know if there is a way to do the same thing using composite
array.

Thank you,
Choon Park

=============================================

SELECT * FROM sample WHERE id IN (
    SELECT id FROM (
        SELECT id, unnest(list) AS list_item FROM sample
    ) explode  -- need to unnest the array so you can address individual
               -- parts of the composite type in the WHERE clause
    WHERE (explode.list_item).attr2 = 'hello'  -- note the () are required
               -- around (table.column), even if table is omitted;
               -- i.e., (list_item).attr2
);  -- /IN

It may be worth it to define a

    text = CompXYZ

custom equality function + operator; then you could (in theory) do this:

    ... WHERE 'hello' = ANY(list)

Your main issue is that the ANY/ALL array operators operate on whole
elements.

Tweak the above to output whatever specific data you need; as written it
outputs a single record from sample if any of the contained array elements
matches.

David J.
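If the operator route is appealing, a sketch of what it might look like
(untested, and assuming the CompXYZ definition above):

-- Sketch: equality between a text value and CompXYZ's attr2 field, so that
-- ANY() can be applied directly to the composite array.
CREATE FUNCTION text_eq_compxyz(text, CompXYZ) RETURNS boolean AS $$
    SELECT $1 = ($2).attr2;
$$ LANGUAGE sql IMMUTABLE;

CREATE OPERATOR = (
    LEFTARG   = text,
    RIGHTARG  = CompXYZ,
    PROCEDURE = text_eq_compxyz
);

SELECT * FROM sample WHERE 'hello'::text = ANY (list);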

 



Re: [GENERAL] Securing .pgpass File?

2012-10-01 Thread Greg Sabino Mullane

-BEGIN PGP SIGNED MESSAGE-
Hash: RIPEMD160


 Has anyone come up with a good solution for distributing a .pgpass file 
 that doesn't expose it to anyone who has access to the distribution 
 mechanism?

No, you cannot easily keep it in version control/puppet securely.

One way is to have an external script that does the job of 
puppet, e.g. for $server in @list do cp pgpass $server/...

Alternatively, use gpg to encrypt the pgpass file, then put *that* 
into version control and distribute it. Then have a script on the 
server that decrypts it into place. Yes, you have to manually 
distribute the encryption key to the servers, but it is a one-time 
event, and you can push out changes to the pgpass file easily, and 
automate the decrypt-on-the-server bit, including by puppet itself.

It's not clear what the exact threat model is here, but you could 
also simply not use pgpass, and find some other means to authenticate.

- -- 
Greg Sabino Mullane g...@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201210011859
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
-BEGIN PGP SIGNATURE-

iEYEAREDAAYFAlBqIOsACgkQvJuQZxSWSshUhgCgtRGVCRLs9F+KPu2RR+rmOVeq
7T8An1ZPdvlEkciRuLiioi2LbSJUTl2f
=GEi7
-END PGP SIGNATURE-




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] pg_typeof equivalent for numeric scale, numeric/timestamp precision?

2012-10-01 Thread Craig Ringer

Hi all

While examining a reported issue with the JDBC driver I'm finding myself 
wanting SQL-level functions to get the scale and precision of a numeric 
result from an operation like:


  select NUMERIC(8,4) '1.'
  union
  select INTEGER 4;

I can write:

  SELECT pg_typeof(a), a FROM (
select NUMERIC(8,4) '1.'
union
select 4::integer
  ) x(a);

but I didn't see any SQL-level way to get the scale and precision. The 
output of `pg_typeof` is a `regtype` so it doesn't have any given scale 
and precision, it's just the raw type. I didn't find any functions with 
scale or precision in their name, nor any functions matching 
*numeric* that looked promising. *typmod* only found in- and out- 
functions. Nothing matching *type* looked good.


There's `format_type`, but it requires you to supply the typmod; it 
can't get it from a result for you. Worse, it doesn't seem to offer a 
way to set scale, only precision, so it's of limited utility for numeric 
anyway, since every numeric it produces is invalid (numeric precision 
must be between 1 and 1000).
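For table columns, at least, the typmod is available from the catalogs and can
be fed straight to format_type; a sketch with a placeholder table name:

-- Sketch: recover the declared type, including precision/scale, of table columns.
SELECT a.attname,
       format_type(a.atttypid, a.atttypmod) AS declared_type
FROM pg_attribute a
WHERE a.attrelid = 'some_table'::regclass
  AND a.attnum > 0
  AND NOT a.attisdropped;

That doesn't help for the result of an arbitrary query, though, which is the
case here.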


Will I need to do this from C with a custom function, or via libpq's 
metadata APIs? And re format_type, am I misunderstanding it or is it 
just busted for numeric?


--
Craig Ringer


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Can I force a query plan to materialise part?

2012-10-01 Thread Toby Corkindale

Hi,
Is there any way to force the query planner to do a materialisation stage?

I have a query that joins two views, and takes 28 seconds to run.
However if I create temporary tables that contain the contents of each 
view, and then join them, the total time is 1.3 seconds.


Is there a way to hint to the query planner that it should be taking 
that approach?


Thanks,
Toby


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can I force a query plan to materialise part?

2012-10-01 Thread David Johnston
On Oct 1, 2012, at 22:33, Toby Corkindale 
toby.corkind...@strategicdata.com.au wrote:

 Hi,
 Is there any way to force the query planner to do a materialisation stage?
 
 I have a query that joins two views, and takes 28 seconds to run.
 However if I create temporary tables that contain the contents of each view, 
 and then join them, the total time is 1.3 seconds.
 
 Is there a way to hint to the query planner that it should be taking that 
 approach?
 
 Thanks,
 Toby
 
 

Have you tried?

With v1 as (), v2 as () select v1 join v2

David J.
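Spelled out slightly, with made-up view and join-column names (the CTEs act as
optimization fences, so each view is evaluated once, much like the temporary
tables):

-- Sketch: materialise each view once via a CTE, then join the results.
WITH v1 AS (SELECT * FROM first_view),
     v2 AS (SELECT * FROM second_view)
SELECT *
FROM v1
JOIN v2 USING (id);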



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can I force a query plan to materialise part?

2012-10-01 Thread Ondrej Ivanič
Hi,

On 2 October 2012 12:33, Toby Corkindale
toby.corkind...@strategicdata.com.au wrote:
 I have a query that joins two views, and takes 28 seconds to run.
 However if I create temporary tables that contain the contents of each view,
 and then join them, the total time is 1.3 seconds.

try "offset 0" (or you can tweak the statistics collector to get better estimates):
select ... from (select * from view offset 0) as v

http://blog.endpoint.com/2009/04/offset-0-ftw.html
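Applied to the original two-view join, that would look something like this
(view names are made up):

-- Sketch: OFFSET 0 keeps each subquery from being pulled up into the outer
-- query, so each view is planned and evaluated on its own.
SELECT *
FROM (SELECT * FROM first_view  OFFSET 0) AS v1
JOIN (SELECT * FROM second_view OFFSET 0) AS v2 USING (id);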

-- 
Ondrej Ivanic
(ondrej.iva...@gmail.com)


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can I force a query plan to materialise part?

2012-10-01 Thread Toby Corkindale

On 02/10/12 12:44, David Johnston wrote:

On Oct 1, 2012, at 22:33, Toby Corkindale 
toby.corkind...@strategicdata.com.au wrote:


Hi,
Is there any way to force the query planner to do a materialisation stage?

I have a query that joins two views, and takes 28 seconds to run.
However if I create temporary tables that contain the contents of each view, 
and then join them, the total time is 1.3 seconds.

Is there a way to hint to the query planner that it should be taking that 
approach?

Thanks,
Toby


Have you tried?

With v1 as (), v2 as () select v1 join v2


Hi David,
I just tried that now, and it ran in 350ms; much faster than even the 
temporary-table-creating method.


thanks!
Toby


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can I force a query plan to materialise part?

2012-10-01 Thread Toby Corkindale

On 02/10/12 13:02, Ondrej Ivanič wrote:

Hi,

On 2 October 2012 12:33, Toby Corkindale
toby.corkind...@strategicdata.com.au wrote:

I have a query that joins two views, and takes 28 seconds to run.
However if I create temporary tables that contain the contents of each view,
and then join them, the total time is 1.3 seconds.


try offset 0 (or you can tweak statistics collector to get better estimates):
select ... from (select * from view offset 0) as v 

http://blog.endpoint.com/2009/04/offset-0-ftw.html


Thanks, that ran in 820ms. (Same query with subselects but without the 
OFFSET 0 ran in 28370ms)


David's method is also running in 820ms. I erroneously stated it was 
349ms before, but that was actually how long it took to display the 
results of EXPLAIN.



I wish I could work out what's wrong with the statistics that cause the 
query plan to go awry.. the tables aren't actually very large and I've 
played with the statistics setup previously and it seemed right..



many thanks,
Toby


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Again, problem with pgbouncer

2012-10-01 Thread Phoenix Kiula
On Tue, Oct 2, 2012 at 12:59 AM, Phoenix Kiula phoenix.ki...@gmail.com wrote:
 Could you please check permission of /var/run/pgbouncer/ directory. If
 pgbouncer directory does not have postgres user permissions,please assign
 it and then start the pgbouncer.


 The /var/run/pgbouncer/ directory has

chown -R postgres:postgres ..

 The port number everywhere is already 6789.

 What else?



And just to be safe, I also added the pgbouncer user to the postgres group:


usermod -a -G postgres pgbouncer


Now when I restart the pgbouncer service, it fails. The log has this message:


2012-10-01 23:25:24.004 21037 FATAL
Cannot open logfile: '/var/log/pgbouncer.log':
Permission denied


That file is owned by postgres:postgres as indicated in a gazillion
threads and documentation online (none of which is comprehensive) but
just to be sure I also did this:


chown :postgres /var/log/pgbouncer.log


Still the same permission error. Seriously, why can't the log message
be a little more useful? Why can't it say clearly WHICH USER is
looking for permission to the log file? Both pgbouncer and
postgres have permissions (through the group postgres) on that
file. So which is it?

Much appreciate any pointers.

Thanks.


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Can I force a query plan to materialise part?

2012-10-01 Thread Ondrej Ivanič
Hi,

On 2 October 2012 13:28, Toby Corkindale
toby.corkind...@strategicdata.com.au wrote:
 I have a query that joins two views, and takes 28 seconds to run.
 However if I create temporary tables that contain the contents of each
 view,
 and then join them, the total time is 1.3 seconds.


 try offset 0 (or you can tweak statistics collector to get better
 estimates):
 select ... from (select * from view offset 0) as v 

 I wish I could work out what's wrong with the statistics that cause the
 query plan to go awry.. the tables aren't actually very large and I've
 played with the statistics setup previously and it seemed right..

Try this (in a single session):
explain analyze your query
set default_statistics_target = 1000 (or 500 or 250; 1000 might take ages)
analyze table_1; analyze table_2; ...; analyze table_N; (all involved
tables in your query)
explain analyze your query

and compare the EXPLAIN outputs. If the estimates are very different (an
order of magnitude or two), then you should tweak the autovacuum frequency
and set per-column statistics (i.e. keep default_statistics_target = 100,
the default, and change it on a per-column basis), but this could be
tedious:

Although per-column tweaking of ANALYZE frequency might not be very
productive, you might find it worthwhile to do per-column adjustment
of the level of detail of the statistics collected by ANALYZE. Columns
that are heavily used in WHERE clauses and have highly irregular data
distributions might require a finer-grain data histogram than other
columns. See ALTER TABLE SET STATISTICS, or change the database-wide
default using the default_statistics_target configuration parameter.
Also, by default there is limited information available about the
selectivity of functions. However, if you create an expression index
that uses a function call, useful statistics will be gathered about
the function, which can greatly improve query plans that use the
expression index.
http://www.postgresql.org/docs/9.1/static/routine-vacuuming.html#VACUUM-FOR-STATISTICS
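A sketch of the per-column variant (table and column names are placeholders):

-- Sketch: raise the statistics target only for a heavily-filtered column,
-- then re-ANALYZE and re-run EXPLAIN ANALYZE to compare the estimates.
ALTER TABLE my_table ALTER COLUMN my_column SET STATISTICS 500;
ANALYZE my_table;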

-- 
Ondrej Ivanic
(ondrej.iva...@gmail.com)


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general