Re: [GENERAL] timestamp with timezone and time zone name

2012-08-07 Thread Shridhar Daithankar
On Tuesday 07 Aug 2012 12:21:04 AM Tom Lane wrote:
> Shridhar Daithankar  writes:
> > I am wondering, why following two values result in a shift by 3.5 hours. I
> > would expect them to be identical.
> > 
> > I understand that canonical time zone names could be ambiguous at times
> > but I think IST is not one of them.
> 
> I don't know why you'd think that ...
> 
> src/timezone/tznames/Asia.txt:IST  19800    # Indian Standard Time
> src/timezone/tznames/Asia.txt:IST   7200    # Israel Standard Time

My bad; I should have searched a bit more.
> 
> ... and there's some references to "Irish Summer Time" in the Olson
> database, as well.  IIRC, IST was one of the primary problems that
> forced us to invent the "timezone_abbreviations" configuration
> mechanism.  Try setting that to "India" if you want the 05:30 meaning.

Thanks. I will stick to the numerical offsets for uniformity.
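
For the archives, the ambiguity is easy to demonstrate in psql. This is a
sketch; the exact offsets depend on your timezone_abbreviations setting and
server version:

```sql
-- With the default abbreviation set, IST resolves to Israel Standard
-- Time (+02:00), not Indian Standard Time (+05:30):
SHOW timezone_abbreviations;
SELECT '2012-08-07 12:00:00 IST'::timestamptz;

-- Switch to the India abbreviation file to get the +05:30 meaning:
SET timezone_abbreviations = 'India';
SELECT '2012-08-07 12:00:00 IST'::timestamptz;

-- A numeric offset is always unambiguous:
SELECT '2012-08-07 12:00:00+05:30'::timestamptz;
```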
-- 
Regards
 Shridhar

Re: [GENERAL] PostgreSQL 9.1 product code

2012-08-07 Thread Craig Ringer

On 08/08/2012 08:17 AM, Haiming Zhang wrote:

Hi all,

I am Haiming, a software engineer.

One of our products depends on PostgreSQL. We know the product code
for 8.2 and 8.3 is {B823632F-3B72-4514-8861-B961CE263224}.  Could anyone
who knows the product code for PostgreSQL 9.1 please provide it, and how
can we find the product code for future versions?


Would I be correct in guessing that you were using PgInstaller (the 
MSI-based installer) for 8.2 and 8.3?


If so, the EnterpriseDB installer is a completely different system. 
You'll need to find the appropriate registry entries it creates. I don't 
work with the EDB installer and can't really help with that. The 
installer documentation is here:


  http://www.enterprisedb.com/resources-community/pginst-guide

though it's more of a tutorial/howto than reference documentation.


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] JSON in 9.2: limitations

2012-08-07 Thread Craig Ringer

On 08/08/2012 03:45 AM, Merlin Moncure wrote:


Given that you can do that, if you had the ability to emit json from
an hstore the OP's problem would be trivially handled.


That's where my thinking went at first too, but there's a wrinkle with 
that: json represents the number 1 and the string "1" differently. 
hstore doesn't. The input data would need to be JSON-escaped before 
being added to hstore to preserve that difference - so at minimum some 
kind of scalar json_escape(...) function is still needed.
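
A quick illustration of the wrinkle (a sketch, using the contrib hstore
operators):

```sql
-- json keeps the number 1 and the string "1" distinct:
SELECT '1'::json, '"1"'::json;

-- hstore stores every value as text, so the distinction is lost:
SELECT 'k=>1'::hstore = 'k=>"1"'::hstore;  -- both store the text value '1'
```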


I was also trying to avoid the need to use *two* extensions for the job.

--
Craig Ringer



Re: [GENERAL] Problem running "ALTER TABLE...", ALTER TABLE waiting

2012-08-07 Thread Sergey Konoplev
On Wed, Aug 8, 2012 at 3:03 AM, Brian McNally  wrote:
> [root@gvsdb-dev tmp]# gdb /usr/pgsql-9.0/bin/postmaster 1160
> GNU gdb (GDB) Red Hat Enterprise Linux (7.0.1-32.el5_6.2)
> (gdb) bt
> #0  0x00378f8d5497 in semop () from /lib64/libc.so.6
> #1  0x005bc1c3 in PGSemaphoreLock (sema=0x2b1e695789e8,
> interruptOK=1 '\001') at pg_sema.c:420
> #2  0x005ec8a1 in ProcSleep (locallock=0x95e00d0,
> lockMethodTable=) at proc.c:973
> #3  0x005eb45c in WaitOnLock (locallock=0x95e00d0, owner=0x95573b0)
> at lock.c:1223
> #4  0x005ebb8c in LockAcquireExtended (locktag=0x7fffd9671d40,
> lockmode=8, sessionLock=, dontWait=0 '\000',
> reportMemoryError=1 '\001') at lock.c:848
> #5  0x005e988b in LockRelationOid (relid=17211, lockmode=8) at
> lmgr.c:79
> #6  0x00467ee5 in relation_open (relationId=17211,
> lockmode=-647554384) at heapam.c:906

What kernel version is on this machine, and which one is on the server
where everything works fine? As I understand it, Red Hat is installed on
both of them; am I correct?

I have found several mentions of similar situations related to a
possible kernel bug.

> #7  0x004f057c in transformAlterTableStmt (stmt=0x9558c70,
> queryString=0x95e3310 "alter table samples add column esp_race text;")
> at parse_utilcmd.c:1948
> #8  0x005fae4c in standard_ProcessUtility (parsetree=0x95e3f48,
> queryString=0x95e3310 "alter table samples add column esp_race text;",
> params=0x0, isTopLevel=1 '\001', dest=0x95e4288,
> completionTag=0x7fffd9672110 "") at utility.c:706
> #9  0x005f81e9 in PortalRunUtility (portal=0x9636c20,
> utilityStmt=0x95e3f48, isTopLevel=1 '\001', dest=0x95e4288,
> completionTag=0x7fffd9672110 "") at pquery.c:1191
> #10 0x005f9228 in PortalRunMulti (portal=0x9636c20, isTopLevel=1
> '\001', dest=0x95e4288, altdest=0x95e4288,
> completionTag=0x7fffd9672110 "") at pquery.c:1296
> #11 0x005f9c45 in PortalRun (portal=0x9636c20,
> count=9223372036854775807, isTopLevel=1 '\001', dest=0x95e4288,
> altdest=0x95e4288,
> completionTag=0x7fffd9672110 "") at pquery.c:822
> #12 0x005f6745 in exec_simple_query (query_string=0x95e3310 "alter
> table samples add column esp_race text;") at postgres.c:1060
> #13 0x005f6ff4 in PostgresMain (argc=,
> argv=, username=)
> at postgres.c:3978
> #14 0x005c6e35 in ServerLoop () at postmaster.c:3565
> #15 0x005c7b3c in PostmasterMain (argc=5, argv=0x951dbb0) at
> postmaster.c:1097
> #16 0x005714be in main (argc=5, argv=) at
> main.c:188
> ===
>
> --
> Brian McNally
>
>
> On 08/02/2012 05:57 AM, Sergey Konoplev wrote:
>>
>> Hi Brian,
>>
>> On Wed, Aug 1, 2012 at 10:21 PM, Brian McNally  wrote:
>>>
>>> I was able to upgrade Postgres to 9.0.8 today and the ALTER TABLE...
>>> command
>>> still hangs. Stracing the hung PID doesn't reveal much:
>>>
>>> [root@gvsdb-dev ~]# strace -fp 13107
>>> Process 13107 attached - interrupt to quit
>>> semop(843382828, 0x7fffd9671ab0, 1
>>>
>>> ps still shows the same waiting process:
>>>
>>> postgres 13107 13067  0 11:14 ?00:00:00 postgres: postgres
>>> exomeSNP
>>> [local] ALTER TABLE waiting
>>
>>
>> Can you attach to the hanging process with gdb and show a backtrace?
>>
>>
>> http://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD
>>
>>>
>>> --
>>> Brian McNally
>>>
>>>
>>> On 07/20/2012 12:06 AM, Sergey Konoplev wrote:


 On Fri, Jul 20, 2012 at 10:42 AM, Brian McNally  wrote:
>
>
> Thanks for the help. I don't get any results from that query either
> though.



 Okay, it looks like a bug to me.

 What I would do is upgrade Pg to the latest minor release, 9.0.8.
 Probably this issue has already been fixed.

 If that does not help, run strace -p <pid> and show its output.


>>>
>>
>>
>>
>



-- 
Sergey Konoplev

a database architect, software developer at PostgreSQL-Consulting.com
http://www.postgresql-consulting.com

Jabber: gray...@gmail.com Skype: gray-hemp Phone: +79160686204



[GENERAL] PostgreSQL 9.1 product code

2012-08-07 Thread Haiming Zhang
Hi all,

I am Haiming, a software engineer.

One of our products depends on PostgreSQL. We know the product code for 8.2 
and 8.3 is {B823632F-3B72-4514-8861-B961CE263224}.  Could anyone who knows the 
product code for PostgreSQL 9.1 please provide it, and how can we find 
the product code for future versions?

I tried to set postgresRegSubkey = 
'SOFTWARE\PostgreSQL\Installations\{B823632F-3B72-4514-8861-B961CE263224}' for 
version 9.1.4, but our product failed to run psql.exe. However, if I set 
postgresRegSubkey = 'SOFTWARE\PostgreSQL\Installations\postgresql-x64-9.1', our 
product runs without any error. I do not want to change the registry manually 
in the future. Would anyone please help with how to manage this?

Thanks a lot.

Haiming Zhang
Software Engineer

Redflex Traffic Systems
Tel: +61 3 9674 1868
Mob:
Email: haiming.zh...@redflex.com.au
Web:   www.redflex.com
Addr:  31 Market Street South Melbourne VIC 3205
Twitter:   http://twitter.com/redflexsafety




Re: [GENERAL] Using Insert with case

2012-08-07 Thread David Johnston
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Bob Pawley
Sent: Tuesday, August 07, 2012 6:26 PM
To: Postgresql
Subject: [GENERAL] Using Insert with case

Hi
 
select
case when somevariable = 2
    then (insert into pipe (line)
    select bob.edge_data.edge_id
    from bob.edge_data, bob.node, pipe
    where st_intersects(st_startpoint(bob.edge_data.geom),
bob.node.geom)
    and bob.node.node_id = 415
    and pipe.id = 1)
 
I am attempting to use the above. However, with or without the enclosing
brackets I get a syntax error on the word into.
 
Help will be appreciated.
 
Bob

==

Your statement is syntactically wrong.

If you provide your version and a better idea of what you are trying to
accomplish, someone may be able to provide meaningful advice.

You should also provide the entire query.  I assume you have not, given the
missing "END" after your CASE construct.  There is also nowhere for
"somevariable" to actually come from.

As thought starters, you can place the INSERT statement into a function, or
you can try using WITH ( INSERT ... RETURNING ); which one, if either, is
workable depends on more information than you have provided.

You can also use plpgsql to encapsulate the procedural logic.  You can
either write a named function or you can use "DO".
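
Both suggestions can be sketched as follows; table and column names are taken
from the original post, and the literal 2 = 2 stands in for the unspecified
"somevariable" test:

```sql
-- 1. Pure SQL: fold the condition into the WHERE clause of the INSERT.
INSERT INTO pipe (line)
SELECT bob.edge_data.edge_id
FROM bob.edge_data, bob.node, pipe
WHERE st_intersects(st_startpoint(bob.edge_data.geom), bob.node.geom)
  AND bob.node.node_id = 415
  AND pipe.id = 1
  AND 2 = 2;                       -- your "somevariable = 2" test

-- 2. Procedural: a DO block (9.0+) with an IF around the INSERT.
DO $$
BEGIN
  IF 2 = 2 THEN                    -- your "somevariable = 2" test
    INSERT INTO pipe (line)
    SELECT bob.edge_data.edge_id
    FROM bob.edge_data, bob.node, pipe
    WHERE st_intersects(st_startpoint(bob.edge_data.geom), bob.node.geom)
      AND bob.node.node_id = 415
      AND pipe.id = 1;
  END IF;
END
$$;
```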

David J.
 






Re: [GENERAL] Using Insert with case

2012-08-07 Thread Bob Pawley

Hi Alban

Probably no difference, except that I have four cases and I was trying to 
compact the commands a little in an attempt to save processing time.


Bob

-Original Message- 
From: Alban Hertroys

Sent: Tuesday, August 07, 2012 3:40 PM
To: Bob Pawley
Cc: Postgresql
Subject: Re: [GENERAL] Using Insert with case

On 8 Aug 2012, at 24:26, Bob Pawley wrote:


Hi

select
case when somevariable = 2
then (insert into pipe (line)
select bob.edge_data.edge_id
from bob.edge_data, bob.node, pipe
where st_intersects(st_startpoint(bob.edge_data.geom), 
bob.node.geom)

and bob.node.node_id = 415
and pipe.id = 1)

I am attempting to use the above. However, with or without the enclosing 
brackets I get a syntax error on the word into.


Is that somehow different from this?

insert into pipe (line)
   select bob.edge_data.edge_id
   from bob.edge_data, bob.node, pipe
   where st_intersects(st_startpoint(bob.edge_data.geom), 
bob.node.geom)

   and bob.node.node_id = 415
   and pipe.id = 1
and somevariable = 2


Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest. 





Re: [GENERAL] Using Insert with case

2012-08-07 Thread Alban Hertroys
On 8 Aug 2012, at 24:26, Bob Pawley wrote:

> Hi
>  
> select
> case when somevariable = 2
> then (insert into pipe (line)
> select bob.edge_data.edge_id
> from bob.edge_data, bob.node, pipe
> where st_intersects(st_startpoint(bob.edge_data.geom), bob.node.geom)
> and bob.node.node_id = 415
> and pipe.id = 1)
>  
> I am attempting to use the above. However, with or without the enclosing 
> brackets I get a syntax error on the word into.

Is that somehow different from this?

insert into pipe (line)
select bob.edge_data.edge_id
from bob.edge_data, bob.node, pipe
where st_intersects(st_startpoint(bob.edge_data.geom), bob.node.geom)
and bob.node.node_id = 415
and pipe.id = 1
and somevariable = 2


Alban Hertroys

--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.




Re: [GENERAL] Using Insert with case

2012-08-07 Thread Chris Angelico
On Wed, Aug 8, 2012 at 8:26 AM, Bob Pawley  wrote:
> Hi
>
> select
> case when somevariable = 2
> then (insert into pipe (line) ...
>
> I am attempting to use the above. However, with or without the enclosing
> brackets I get a syntax error on the word into.

Utterly untested, but does it work if you put a RETURNING clause onto
the INSERT? That would make it functionally similar to a SELECT, a
technique that works for me in other situations.
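
A sketch of that idea: in 9.1+ an INSERT ... RETURNING can be wrapped in a
data-modifying CTE, which makes it usable inside a larger SELECT (though still
not inside a CASE expression). Table names are from the original post:

```sql
WITH ins AS (
    INSERT INTO pipe (line)
    SELECT bob.edge_data.edge_id
    FROM bob.edge_data, bob.node, pipe
    WHERE st_intersects(st_startpoint(bob.edge_data.geom), bob.node.geom)
      AND bob.node.node_id = 415
      AND pipe.id = 1
    RETURNING line
)
SELECT count(*) AS rows_inserted FROM ins;
```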

ChrisA



[GENERAL] Using Insert with case

2012-08-07 Thread Bob Pawley
Hi

select
case when somevariable = 2
then (insert into pipe (line)
select bob.edge_data.edge_id
from bob.edge_data, bob.node, pipe
where st_intersects(st_startpoint(bob.edge_data.geom), bob.node.geom)
and bob.node.node_id = 415
and pipe.id = 1)

I am attempting to use the above. However, with or without the enclosing 
brackets I get a syntax error on the word into.

Help will be appreciated.

Bob

Re: [GENERAL] JSON in 9.2: limitations

2012-08-07 Thread Merlin Moncure
On Tue, Aug 7, 2012 at 11:31 AM, Merlin Moncure  wrote:
> On Tue, Aug 7, 2012 at 3:20 AM, Craig Ringer  wrote:
>> (Reposted as the list manager appears to have eaten the first copy):
>>
>> Hey all
>>
>> It seems to be surprisingly hard to build JSON structures with PostgreSQL
>> 9.2's json features, because:
>>
>> - There's no aggregate, function or operator that merges two or more
>> objects; and
>> - there's no single-value "json_escape" or equivalent.
>>
>> Take this example from the SO question
>> http://stackoverflow.com/questions/11813976/in-postgres-is-there-an-easy-way-to-select-several-attr-val-rows-into-one-recor/11814255#11814255
>>
>> Given:
>>
>> create table t1( attr text primary key, val text );
>> insert into t1 values( 'attr1', 'val1' );
>> insert into t1 values( 'attr2', 'val3' );
>> insert into t1 values( 'attr3', 'val3' );
>>
>> Produce:
>>
>> { "attr1": "val1", "attr2": "val2", "attr3": "val3" }
>>
>>
>> It's very basic, but I couldn't work out a way of doing it that was safe if
>> you also:
>>
>> insert into t1 (attr,val) values ('at"tr', 'v"a"l');
>>
>> which I found quite interesting.
>>
>> With hstore there are several approaches that work:
>>
>> select hstore( array_agg(attr), array_agg(val) ) from t1;
>>
>> or
>>
>> CREATE AGGREGATE hstore_agg ( basetype = hstore, sfunc = hs_concat, stype =
>> hstore );
>>
>> SELECT hstore_agg( attr => val ) FROM t1;
>> hstore_agg
>> ------------------------------------------------------------------------
>>  "at\"tr"=>"v\"a\"l", "attr1"=>"val1", "attr2"=>"val3", "attr3"=>"val3"
>> (1 row)
>>
>>
>> ... but neither of these appear to be possible with json. Seems like there's
>> a need for a:
>>
>> json( text[], json[] )
>>
>> and/or:
>>
>> json_agg( json )
>>
>> to allow the construction of json values. Both of these would also need
>> funcs to create single json literals, a:
>>
>> json_esc(anyelement) -> json
>>
>> or at least:
>>
>> json_esc(text) -> json
>>
>>
>> I'm not saying "... some coding fairy should go and magically create
>> those". I'm interested in opinions. Am I missing something obvious? Is this
>> sort of thing supposed to be done via PL/v8? Is it just that the json
>> feature needed to get finished so it was kept small for the first release?
>>
>> Do such functions exist outside the merged patch? If not, would it be
>> helpful to have them written?
>
> why not crosstab the set first then use standard row_to_json?

Well, crosstab is a headache because it requires making an explicit
description of the row fields in the query, which is a problem if
you don't know the list at the time the query is made (this is
why I usually wrap crosstab queries with a query generator).

I think this problem comes down to the general difficulty of dealing
with rowtypes in SQL.  The hstore extension is superior to rowtypes in
just about every way (except maybe performance).  We could really use an
hstore_to_json (and maybe json_to_hstore) function for the hstore type.
hstores can be concatenated:

postgres=# select hstore( array_agg(attr), array_agg(val)) ||
hstore('attr4=>val4') from t1;
  ?column?
--------------------------------------------------------------------
 "attr1"=>"val1", "attr2"=>"val3", "attr3"=>"val3", "attr4"=>"val4"

Given that you can do that, if you had the ability to emit json from
an hstore the OP's problem would be trivially handled.
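
For illustration only, since no such function shipped with 9.2: if a
hypothetical hstore_to_json() existed, the whole thing would collapse to one
expression:

```sql
-- Hypothetical: hstore_to_json() is assumed here, not a 9.2 built-in.
SELECT hstore_to_json( hstore(array_agg(attr), array_agg(val)) ) FROM t1;
```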

merlin



Re: [GENERAL] can we avoid pg_basebackup on planned switches?

2012-08-07 Thread Sergey Konoplev
On Sun, Aug 5, 2012 at 10:12 PM, Fujii Masao  wrote:
>> Have we just avoided running pg_basebackup, or have we just given ourselves
>> data corruption?
>
> If you change your operations in the above-mentioned way, I think you can
> avoid pg_basebackup on the planned switch. I've not tested your operations.
> So please test them carefully before applying them to your system.

That is encouraging.

So, are there any thoughts on how to make sure that, after performing
the changed process, there is no data corruption on the new replica
(the ex-master)?

P.S. By the way, don't we need to CHECKPOINT the old replica after
copying all the WAL files from the stopped master and before promoting
it (the old replica) to the new master?

>
>> Because we're using wal archiving, can we simplify and
>> leave out step 3?
>
> Yes.
>
> Regards,
>
> --
> Fujii Masao
>

-- 
Sergey Konoplev

a database architect, software developer at PostgreSQL-Consulting.com
http://www.postgresql-consulting.com

Jabber: gray...@gmail.com Skype: gray-hemp Phone: +79160686204



Re: [GENERAL] can we avoid pg_basebackup on planned switches?

2012-08-07 Thread Ben Chobot

On Aug 7, 2012, at 9:32 AM, Fujii Masao wrote:

> On Mon, Aug 6, 2012 at 3:29 AM, Ben Chobot  wrote:
>> 
>> Oh, I would have thought that doing a clean shutdown of the old master (step 
>> 1) would have made sure that all the unstreamed WAL records would be flushed 
>> to any connected slaves as part of the master shutting down. In retrospect, 
>> I don't remember reading that anywhere, so I must have made that up because 
>> I wanted it to be that way. Is it wishful thinking?
> 
> When a clean shutdown is requested, the master sends all WAL records to
> the standby, but it doesn't wait for the standby to receive them. So
> there is no guarantee that all WAL records have been flushed to the
> standby. The walreceiver process in the standby might detect the
> termination of the replication connection and exit before receiving all
> WAL records. Unfortunately, I've encountered that case several times.


Oh, I see. Well, that's unfortunate. Thanks for the help though! It shouldn't 
be too hard to script up what you suggest.


Re: [GENERAL] Feature Request - Postgres FDW

2012-08-07 Thread David Greco
Great, thanks. I see there is talk of 9.3 including autonomous transaction 
support as well.



-Original Message-
From: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov] 
Sent: Tuesday, August 07, 2012 1:04 PM
To: David Greco; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Feature Request - Postgres FDW

David Greco  wrote:
 
> Surprised to see this isn't offered as a Foreign Data Wrapper- one to 
> other Postgres servers.
 
People have been working on it.  It seems quite likely to be included in the 
9.3 release next year.
 
-Kevin





Re: [GENERAL] Feature Request - Postgres FDW

2012-08-07 Thread Andreas Kretschmer
Kevin Grittner  wrote:

> David Greco  wrote:
>  
> > Surprised to see this isn't offered as a Foreign Data Wrapper- one
> > to other Postgres servers.
>  
> People have been working on it.  It seems quite likely to be
> included in the 9.3 release next year.

That's cool ;-)


Andreas
-- 
Really, I'm not out to destroy Microsoft. That will just be a completely
unintentional side effect.  (Linus Torvalds)
"If I was god, I would recompile penguin with --enable-fly."   (unknown)
Kaufbach, Saxony, Germany, Europe.  N 51.05082°, E 13.56889°



Re: [GENERAL] Feature Request - Postgres FDW

2012-08-07 Thread Kevin Grittner
David Greco  wrote:
 
> Surprised to see this isn't offered as a Foreign Data Wrapper- one
> to other Postgres servers.
 
People have been working on it.  It seems quite likely to be
included in the 9.3 release next year.
 
-Kevin



Re: [GENERAL] can we avoid pg_basebackup on planned switches?

2012-08-07 Thread Fujii Masao
On Mon, Aug 6, 2012 at 3:29 AM, Ben Chobot  wrote:
>
> On Aug 5, 2012, at 11:12 AM, Fujii Masao wrote:
>
>> On Sat, Jul 28, 2012 at 2:00 AM, Ben Chobot  wrote:
>>> We make heavy use of streaming replication on PG 9.1 and it's been great for
>>> us. We do have one issue with it, though, and that's when we switch master
>>> nodes - currently, the documentation says that you must run pg_basebackup on
>>> your old master to turn it into a slave. That makes sense when the old
>>> master had crashed, but it seems that in the case of a planned switch, we
>>> could do better. Here's what we tried that seemed to work... are we shooting
>>> ourselves in the foot?
>>>
>>> 1. Cleanly shut down the current master.
>>> 2. Pick a slave, turn it into the new master.
>>
>> Before promoting the standby, you have to confirm that all WAL files
>> the old master generated have been shipped to the standby which you'll
>> promote, because the standby might terminate the replication before
>> receiving all WAL files. Note that there is no clean way to confirm
>> that. For example, you need to execute CHECKPOINT in the standby, run
>> pg_controldata in both the old master and the standby, and check
>> whether their latest checkpoint locations are the same. You may think
>> to compare the latest checkpoint location in the old master with
>> pg_last_xlog_replay_location in the standby. But the former indicates
>> the *starting* location of the last WAL record (i.e., the shutdown
>> checkpoint WAL record), while the latter indicates the *ending*
>> location of it. So you should not compare them without taking that
>> mismatch into account.
>>
>> If the standby failed to receive some WAL files, you need to manually
>> copy them in pg_xlog from the old master to the standby.
>
> Oh, I would have thought that doing a clean shutdown of the old master (step 
> 1) would have made sure that all the unstreamed WAL records would be flushed 
> to any connected slaves as part of the master shutting down. In retrospect, I 
> don't remember reading that anywhere, so I must have made that up because I 
> wanted it to be that way. Is it wishful thinking?

When a clean shutdown is requested, the master sends all WAL records to
the standby, but it doesn't wait for the standby to receive them. So
there is no guarantee that all WAL records have been flushed to the
standby. The walreceiver process in the standby might detect the
termination of the replication connection and exit before receiving all
WAL records. Unfortunately, I've encountered that case several times.
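
The verification procedure described in the quoted text above can be sketched
like this (9.1 function names; pg_controldata is run from the shell, outside
SQL):

```sql
-- On the standby, force a restartpoint so pg_controldata reflects the
-- latest replayed WAL, then look at the replication positions:
CHECKPOINT;
SELECT pg_last_xlog_receive_location(),
       pg_last_xlog_replay_location();

-- Then compare the "Latest checkpoint location" lines printed by
--   pg_controldata $PGDATA
-- on the old master and on the standby, bearing in mind the
-- starting-vs-ending location mismatch described above.
```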

Regards,

-- 
Fujii Masao



Re: [GENERAL] JSON in 9.2: limitations

2012-08-07 Thread Merlin Moncure
On Tue, Aug 7, 2012 at 3:20 AM, Craig Ringer  wrote:
> (Reposted as the list manager appears to have eaten the first copy):
>
> Hey all
>
> It seems to be surprisingly hard to build JSON structures with PostgreSQL
> 9.2's json features, because:
>
> - There's no aggregate, function or operator that merges two or more
> objects; and
> - there's no single-value "json_escape" or equivalent.
>
> Take this example from the SO question
> http://stackoverflow.com/questions/11813976/in-postgres-is-there-an-easy-way-to-select-several-attr-val-rows-into-one-recor/11814255#11814255
>
> Given:
>
> create table t1( attr text primary key, val text );
> insert into t1 values( 'attr1', 'val1' );
> insert into t1 values( 'attr2', 'val3' );
> insert into t1 values( 'attr3', 'val3' );
>
> Produce:
>
> { "attr1": "val1", "attr2": "val2", "attr3": "val3" }
>
>
> It's very basic, but I couldn't work out a way of doing it that was safe if
> you also:
>
> insert into t1 (attr,val) values ('at"tr', 'v"a"l');
>
> which I found quite interesting.
>
> With hstore there are several approaches that work:
>
> select hstore( array_agg(attr), array_agg(val) ) from t1;
>
> or
>
> CREATE AGGREGATE hstore_agg ( basetype = hstore, sfunc = hs_concat, stype =
> hstore );
>
> SELECT hstore_agg( attr => val ) FROM t1;
> hstore_agg
> ------------------------------------------------------------------------
>  "at\"tr"=>"v\"a\"l", "attr1"=>"val1", "attr2"=>"val3", "attr3"=>"val3"
> (1 row)
>
>
> ... but neither of these appear to be possible with json. Seems like there's
> a need for a:
>
> json( text[], json[] )
>
> and/or:
>
> json_agg( json )
>
> to allow the construction of json values. Both of these would also need
> funcs to create single json literals, a:
>
> json_esc(anyelement) -> json
>
> or at least:
>
> json_esc(text) -> json
>
>
> I'm not saying "... some coding fairy should go and magically create
> those". I'm interested in opinions. Am I missing something obvious? Is this
> sort of thing supposed to be done via PL/v8? Is it just that the json
> feature needed to get finished so it was kept small for the first release?
>
> Do such functions exist outside the merged patch? If not, would it be
> helpful to have them written?

why not crosstab the set first then use standard row_to_json?

merlin



[GENERAL] Feature Request - Postgres FDW

2012-08-07 Thread David Greco
Surprised to see this isn't offered as a Foreign Data Wrapper: one to other 
Postgres servers. I was attempting to replace some uses I have of dbilink, and 
found a couple of places where I am using it to connect to Postgres. One is for 
pseudo "autonomous transactions": a db link to the same Postgres server with 
autocommit turned on. Another is to a different database server, simply for 
separation of concerns, load distribution, etc., where data is rarely but 
sometimes needed from the originating server.
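
The "pseudo autonomous transaction" trick can be sketched with the contrib
dblink extension (the table name audit_log is illustrative):

```sql
-- dblink_exec opens its own session, so this INSERT commits
-- independently of the surrounding transaction: even if the outer
-- transaction later rolls back, the row stays.
SELECT dblink_exec(
    'dbname=' || current_database(),
    'INSERT INTO audit_log(msg) VALUES (''written regardless'')'
);
```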

Has there been any talk of providing a Postgres FDW? What are your thoughts?

~Dave Greco



Re: [GENERAL] Interval to months

2012-08-07 Thread Steve Atkins

On Aug 7, 2012, at 8:41 AM, Aram Fingal  wrote:

> I have a field which contains an interval value and I sometimes need to 
> represent the full interval (not a part) as a decimal number of months.  For 
> example, "5 years 6 mons 3 days" as "66.1 months".  I've been trying to 
> figure out how to do this and haven't found a definitive answer.  
> 
> The following gives an approximation:
> round(cast(extract(epoch from time_interval)/2592000 as numeric), 2) || ' 
> months'
> 
> The number 2592000 is the number of seconds in a 30-day month. Accounting for 
> leap years, etc., Google calculates it as 2629743.83. The thing is that the 
> 30-day-month number gives the right answer for short intervals while the 
> Google number gives the right answer for longer intervals (several years or 
> more). Is there a better way?


Something like this?

select 12 * extract(year from ?) + extract(month from ?) + extract(epoch from ? 
- date_trunc('month', ?)) / 2592000

Ugly, but likely closer to accurate. You can't get actually accurate, of 
course, as you don't know how long a month is.
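
Applying that decomposition to the example interval from the original post (a
sketch; date_trunc on an interval truncates the sub-month fields, leaving just
the residual days to convert at 2592000 seconds per 30-day month):

```sql
WITH v AS (SELECT interval '5 years 6 mons 3 days' AS i)
SELECT round(( 12 * extract(year from i)
             +      extract(month from i)
             +      extract(epoch from i - date_trunc('month', i)) / 2592000
             )::numeric, 1) AS months
FROM v;
-- should yield 66.1 under these assumptions: 60 + 6 + 259200/2592000
```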

Cheers,
  Steve


[GENERAL] Interval to months

2012-08-07 Thread Aram Fingal
I have a field which contains an interval value and I sometimes need to 
represent the full interval (not a part) as a decimal number of months.  For 
example, "5 years 6 mons 3 days" as "66.1 months".  I've been trying to figure 
out how to do this and haven't found a definitive answer.  

The following gives an approximation:
round(cast(extract(epoch from time_interval)/2592000 as numeric), 2) || ' 
months'

The number 2592000 is the number of seconds in a 30-day month. Accounting for 
leap years, etc., Google calculates it as 2629743.83. The thing is that the 
30-day-month number gives the right answer for short intervals, while the Google 
number gives the right answer for longer intervals (several years or more). Is 
there a better way?

--Aram


Re: [GENERAL] Memory error in user-defined aggregation function

2012-08-07 Thread Adriaan Joubert
Hi,

Finally got this running under the debugger and figured out what is
going on. I had been under the impression that

 if (PG_ARGISNULL(0))
 PG_RETURN_NULL();

 state = (quartile_state *) PG_GETARG_POINTER(0);

would ensure that state was never a null pointer. However, this is not
the case, and an additional check for state==0x0 solved the problem.
Somewhat unexpected, I have to say.

I would still be interested in any ways in which this implementation
could be improved. It would be good if there were some model
implementations for this type of thing - without orafce to guide me I
would have had a hard time figuring any of this out from the docs. I'd
gladly make the quartile implementation available for this purpose if
there is interest.

Adriaan


On 7 August 2012 15:04, Adriaan Joubert  wrote:
> Hi,
>
> I've implemented an aggregation function to compute quartiles in C,
> borrowing liberally from orafce code. I use this code in a windowing
> context and it worked fine until today - and I'm not sure what
> changed. This is on 9.1.2 and I have also tried it on 9.1.4.
>
> What I have determined so far (by sprinkling a lot of elog's
> throughout the code) is that it does not seem to be data specific,
> although it seems to depend on the number of aggregations I do (up to
> about 1250 seems to be fine, beyond that it chokes). I also
> established that there does not seem to be a problem with the transfer
> function, and the data is accumulated without any issues. The error I
> see is in the call to first_quartile_final (listed below). The pointer
> to the transfer data structure is not null, but accessing the field
> mstate->nelems causes a segflt. So the transfer data structure pointer
> is bogus.
>
> I've recompiled postgres with debugging enabled and have connected to
> the backend with gdb, but haven't had any joy in persuading gdb to
> actually stop in the correct file so that I can step through. I'll
> keep on trying to make some headway with that.
>
> In the meantime I would appreciate any comments as to whether the
> approach taken is the right one, and whether additional checks can be
> inserted to avoid these segmentation faults.
>
> Many thanks,
>
> Adriaan
>
>
> My transfer data structure is
>
> typedef struct
> {
>   int len; /* allocated length */
>   int nextlen; /* next allocated length */
>   int nelems; /* number of valid entries */
>   float8  *values;
> } quartile_state;
>
> On the first call to the aggregate function this data structure is
> allocated as follows:
>
> static quartile_state *
> quartile_accummulate(quartile_state *mstate, float8 value,
> MemoryContext aggcontext)
> {
> MemoryContext oldcontext;
>
> if (mstate == NULL)
> {
> /* First call - initialize */
> oldcontext = MemoryContextSwitchTo(aggcontext);
> mstate = palloc(sizeof(quartile_state));
> mstate->len = 512;
> mstate->nextlen = 2 * 512;
> mstate->nelems = 0;
> mstate->values = palloc(mstate->len * sizeof(float8));
> MemoryContextSwitchTo(oldcontext);
> }
> else
> {
> if (mstate->nelems >= mstate->len)
> {
> int newlen = mstate->nextlen;
>
> oldcontext = MemoryContextSwitchTo(aggcontext);
> mstate->nextlen += mstate->len;
> mstate->len = newlen;
> mstate->values = repalloc(mstate->values, mstate->len 
> * sizeof(float8));
> MemoryContextSwitchTo(oldcontext);
> }
> }
>
> mstate->values[mstate->nelems++] = value;
>
> return mstate;
> }
>
>
> And the transfer function itself is
>
> PG_FUNCTION_INFO_V1(quartile_transfer);
> Datum
> quartile_transfer(PG_FUNCTION_ARGS) {
> MemoryContext   aggcontext;
> quartile_state *state = NULL;
> float8 elem;
>
> if (!AggCheckCallContext(fcinfo, &aggcontext))
> {
> elog(ERROR, "quartile_transform called in non-aggregate 
> context");
> }
>
> state = PG_ARGISNULL(0) ? NULL : (quartile_state *) 
> PG_GETARG_POINTER(0);
> if (PG_ARGISNULL(1))
> PG_RETURN_POINTER(state);
>
> elem = PG_GETARG_FLOAT8(1);
>
> state = quartile_accummulate(state, elem, aggcontext);
>
> PG_RETURN_POINTER(state);
> }
>
> The final function for the computation of the first quartile is
>
> PG_FUNCTION_INFO_V1(first_quartile_final);
> Datum
> first_quartile_final(PG_FUNCTION_ARGS) {
> quartile_state *state = NULL;
> float8 result;
>
> if (PG_ARGISNULL(0))
> PG_RETURN_NULL();
>
> state = (quartile_state *) PG_GETARG_POINTER(0);
>
> /** HERE state->nelems causes a segflt */
> if (state->nelems<4)
> PG_RETURN_NULL();

Re: [GENERAL] Memory error in user-defined aggregation function

2012-08-07 Thread Tom Lane
Adriaan Joubert  writes:
> I've implemented an aggregation function to compute quartiles in C
> borrowing liberally from orafce code. I use this code in a windowing
> context and it worked fine until today - and I'm not sure what
> changed. This is on 9.1.2 and I have also tried it on 9.1.4.

Hm, it doesn't look very different from what's done in e.g. array_agg.
You do have the transition datatype declared as "internal", no?

I notice that your transition function is sloppy about returning a null
pointer (as opposed to a SQL null) if both input arguments are null.
If all the aggregated values were nulls then the null pointer would
reach the final function and cause a crash similar to the one described.
But that's hardly "not data specific".

> I've recompiled postgres with debugging enabled and have connected to
> the backend with gdb, but haven't had any joy in persuading gdb to
> actually stop in the correct file so that I can step through. I'll
> keep on trying to make some headway with that.

I've found that gdb takes extra persuasion to notice shared libraries
that are loaded after it attaches to the process.  Usually the path
of least resistance is to ensure that the library is loaded before
you attach.  Use LOAD, or just CREATE OR REPLACE one of the functions
in your library.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] warnings about invalid "search_path"

2012-08-07 Thread Scott Marlowe
On Tue, Aug 7, 2012 at 7:15 AM, Samba  wrote:
> Thanks Gabriele for those pointers,
>
> I could now narrow it down to two things:
>
> "system_data" user logging into other databases [one of those may be the
> default 'postgres'] which do not have the "system_data" schema
> other users [like 'postgres'] logging into their own or even other databases
> which do not have the "system_data" schema.
>
> I did notice that we have added "system_data" schema to a few other users
> [roles]  who are also supposed to login to the database containing
> "system_data" schema and that is causing this side-effect of logging these
> warning messages when users who have "system_data" in the search_path log
> into other databases that do not have "system_data" schema.
>
> So, what I understand I need is "how to add a schema (or multiple
> schemas) to the search path for a database, irrespective of which user
> logs in to the database?"
>
> Could you explain how to add "schema(s) into search_path for a database
> irrespective of the user logging-in?

You can set search_path for a particular database:

alter database xyz set search_path='abc','xyz';

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] warnings about invalid "search_path"

2012-08-07 Thread Samba
Hi all,

I now realize that the issue is indeed occurring when users who have
"system_data" in their search_path log in to other databases that do not
have that schema.


Could someone explain how to add schema(s) to the search_path for a database
[not for a user/role], irrespective of which user logs in?


Thanks and Regards,
Samba

---

On Tue, Aug 7, 2012 at 7:50 PM, Albe Laurenz wrote:

> Samba wrote:
> > I'm seeing some weird errors in the postgres logs after upgrading
> > to postgres-9.1(.3) about the schema added by default to the search path
> >
> > WARNING:  invalid value for parameter "search_path": "system_data"
> > DETAIL:  schema "system_data" does not exist
> >
> > We do have a user named "system_data" and a schema with the same
> name...
> >
> > Apart from these warning messages, there is no other problem with it
>
> Maybe the warning is from a different database that does not have such
> a schema.
>
> Try to add %d to the log_line_prefix parameter.
>
> Yours,
> Laurenz Albe
>


Re: [GENERAL] warnings about invalid "search_path"

2012-08-07 Thread Albe Laurenz
Samba wrote:
> I'm seeing some weird errors in the postgres logs after upgrading
> to postgres-9.1(.3) about the schema added by default to the search path
> 
> WARNING:  invalid value for parameter "search_path": "system_data"
> DETAIL:  schema "system_data" does not exist
> 
> We do have a user named "system_data" and a schema with the same
name...
> 
> Apart from these warning messages, there is no other problem with it

Maybe the warning is from a different database that does not have such
a schema.

Try to add %d to the log_line_prefix parameter.

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Are stored procedures always kept in memory?

2012-08-07 Thread Pavel Stehule
Hello

>
>
> Now my questions is: Are the stored functions (both plpgsql and plain sql
> functions) kept always in a memory? Or they are stored similarly like
> tables, on the disk, reading them into memory when called and possibly
> release them from memory, if memory is needed for something else?

Procedures live in the pg_proc table. Before a call, the procedure is loaded
and compiled (to an abstract syntax tree), and the compiled code (tree) is
stored in the session cache. The cache is released on logout or when the
function is updated.
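The on-disk half of this can be seen directly in the catalog; a function's
source is just a row in pg_proc like any other table data (the function name
below is illustrative):

```sql
-- The stored definition lives in the pg_proc catalog table;
-- only the compiled form is cached per-session.
SELECT proname, prosrc
FROM pg_proc
WHERE proname = 'country';  -- illustrative function name
```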

Regards

Pavel Stehule


>
>
>
> Thanks for reply.
>
>
>
> R.G.
>
>
>
> 
> Disclaimer: http://www.aps-holding.com/disclaimer.html

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] warnings about invalid "search_path"

2012-08-07 Thread Tom Lane
Samba  writes:
> I'm seeing some weird errors in the postgres logs after upgrading to
> postgres-9.1(.3) about the schema added by default to the search path
> WARNING:  invalid value for parameter "search_path": "system_data"
> DETAIL:  schema "system_data" does not exist
> ...
> Could anyone suggest what could be wrong with my setup and how to get past
> it?

From the 9.1.4 release notes:

* Ignore missing schemas during non-interactive assignments of 
search_path (Tom Lane)

This re-aligns 9.1's behavior with that of older branches. Previously
9.1 would throw an error for nonexistent schemas mentioned in
search_path settings obtained from places such as ALTER DATABASE SET.

regards, tom lane

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Are stored procedures always kept in memory?

2012-08-07 Thread Roman Golis
We run several instances of postgres in different countries, and we try to keep 
them as similar as possible in terms of table structure and function 
definitions (except the content of schema "config", which differs between dbs). 
So if we need to implement some algorithm differently per country, we define 
a plpgsql function like this in each of our dbs:
 
BEGIN
select value from config.strings into country where name = 'country';
if country = 'CZ' then
-- Some computations here
elseif country = 'PL' then
-- Different calculations here
elseif country = 'RO' then
-- Yet another algorithm here
end if;
return (result);
END;
 
In this function, we get the value from the table config.strings (which contains 
a different value in each country's database), and based on this value we go 
through a specific if-branch. Simple. But reading this configuration value may 
involve reading from disk.
 
So to avoid accessing the disk to fetch the country value, I would like to 
replace it by calling a function defined like this (in each db returning a 
different string indicating the country where the db resides, of course):
 
create or replace function config.country () returns char(3) as $$ select 
'CZ'::char(3) $$ language sql immutable;
 
And then call it like:
 
if config.country () = 'CZ' then
-- Some computations here
 
Now my question is: are the stored functions (both plpgsql and plain sql 
functions) always kept in memory? Or are they stored like tables, on disk, 
read into memory when called, and possibly released from memory if the memory 
is needed for something else?
 
Thanks for reply.
 
R.G.

Disclaimer: http://www.aps-holding.com/disclaimer.html


Re: [GENERAL] warnings about invalid "search_path"

2012-08-07 Thread Samba
Thanks Gabriele for those pointers,

I could now narrow it down to two things:


   1. "system_data" user logging into other databases [one of those may be
   the default 'postgres'] which do not have the "system_data" schema
   2. other users [like 'postgres'] logging into their own or even other
   databases which do not have the "system_data" schema.

I did notice that we have added the "system_data" schema to a few other users
[roles] who are also supposed to log in to the database containing the
"system_data" schema, and that is causing this side effect of logging these
warning messages when users who have "system_data" in their search_path log
into other databases that do not have the "system_data" schema.

So, what I understand I need is "how to add a schema (or multiple
schemas) to the search path for a database, irrespective of which user
logs in to the database?"

Could you explain how to add schema(s) to the search_path for a database,
irrespective of the user logging in?

Thanks and Regards,
Samba

===

On Tue, Aug 7, 2012 at 4:53 PM, Gabriele Bartolini <
gabriele.bartol...@2ndquadrant.it> wrote:

> Hi Samba,
>
>   first: do not worry, it is perfectly normal.
>
>
> On Tue, 7 Aug 2012 16:25:14 +0530, Samba  wrote:
>
>> Hi all,
>> I'm seeing some weird errors in the postgres logs after upgrading to
>> postgres-9.1(.3) about the schema added by default to the search path
>>
>>  WARNING:  invalid value for parameter "search_path": "system_data"
>> DETAIL:  schema "system_data" does not exist
>>
>> We do have a user named "system_data" and a schema with the same
>> name...
>>
>
> A schema is something that belongs to a database.
>
> You have just set the search_path for a specific user ("system_data") to
> include "system_data" (am I right?). However, a user can theoretically
> connect to any database on the instance.
>
> The error above is generated when you connect with that user
> ("system_data") to a database that does not have the "system_data" schema.
>
> Please let me have more information if my assumption is wrong.
>
> Cheers,
> Gabriele
>
> Tip/Note: by default, search_path is set to search in the "$user" and
> public schemas. Therefore, if you connect using the "system_data" user, you
> do not need to force searching in that schema.
>
> --
>  Gabriele Bartolini - 2ndQuadrant Italia
>  PostgreSQL Training, Services and Support
>  Gabriele.Bartolini@**2ndQuadrant.it - www.2ndQuadrant.it
>


[GENERAL] pg_xlog growth on slave with streaming replication

2012-08-07 Thread Mike Roest
Hey Everyone,
I've got a bit of an interesting issue going on with pg_xlog growing on
a streaming replication slave.

We're running postgres 9.1.1 x64 built from source on Centos 5.8 x64.

On both the master and the slave we have wal_keep_segments configured for
1000
wal_keep_segments = 1000

This config appears to be working 100% on the master as it consistently has
the 1000 files and approx 17 gigs of storage usage.

However on the slave currently we have 4113 wal segments and growing (this
is in data/pg_xlog).  The slave has been up and syncing for about 10 days
give or take.

As far as I can tell using the pg_current_xlog_location() and
pg_last_xlog_replay_location() the slave is correctly in sync.
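As an aside, one way to sanity-check that on the standby itself (a sketch
using the 9.1 function names; pg_current_xlog_location() only works on the
master):

```sql
-- On the standby: compare what has been received from the master
-- with what has been replayed. If the two track each other closely,
-- replay is keeping up.
SELECT pg_last_xlog_receive_location() AS received,
       pg_last_xlog_replay_location()  AS replayed;
```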

Looking at the recovery.conf, I see that we don't have
archive_cleanup_command defined, but looking at the example it seems to be
talking about incoming WAL locations for log shipping and doesn't appear to
cover the actual pg_xlog directory.

Is the correct solution here as easy as setting up
archive_cleanup_command='pg_archivecleanup /db/data/pg_xlog %r'?

Thanks


Re: [GENERAL] Where is diskchecker.pl ?

2012-08-07 Thread Magnus Hagander
On Mon, Aug 6, 2012 at 6:54 PM, Rodrigo Gonzalez  wrote:
> On 06/08/12 13:31, Bruce Momjian wrote:
>>> For longer terms, perhaps we should set up a URL forwarder or
>>> something that the docs can link through in the cases where we really
>>> need this, so we can more easily update the URLs?
>> Well, the problem there is that they don't get the real URL unless they
>> go through our system, which seems "closed".
>>
> What about creating a wiki page with interesting software like this one,
> putting the link there, that can be maintained by community, and linking
> to this page in docs?

That's another reasonable idea. It doesn't solve the problem with
links going stale (just like having a URL redirector doesn't), but it
does make it much easier to change later...


-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Interval "1 month" is equals to interval "30 days" - WHY?

2012-08-07 Thread Dmitry Koterov
...and even worse:

SELECT ('1 year'::interval) = ('360 days'::interval); --> TRUE :-)
SELECT ('1 year'::interval) = ('365 days'::interval); --> FALSE :-)


On Tue, Aug 7, 2012 at 4:42 PM, Dmitry Koterov  wrote:

> Hello.
>
> I've just discovered a very strange thing:
>
> SELECT '1 mon'::interval = '30 days'::interval   --> TRUE???
>
> This returns TRUE (also affected when I create a unique index using an
> interval column). Why?
>
> I know that Postgres stores months, days and seconds in interval values
> separately. So how to make "=" to compare intervals "part-by-part" and not
> treat "1 mon" as "30 days"?
>
> P.S.
> Reproduced at least in 8.4 and 9.1.
>


Re: [GENERAL] Clogging problem

2012-08-07 Thread Marek Kielar



On 6 August 2012 at 17:00 Adrian Klaver wrote:

> > The clog has somewhat re-formed - the full listing of lsof (filtered for 
> > unique files) for postmaster(s) on the database mount is here:
> > http://BillionUploads.com/ya9kjv78t9es/postmaster_files_sorted.csv.html
> 
> FYI you might want to consider using some other site for uploads. The above 
> is sort of scary and leads you down all sorts of false paths.
> 

Sorry about that, it's the first time I have used a hosting service, and they 
didn't require creating an account (as in a list on Wikipedia). I guess using 
NoScript saves one from trouble, but it also leads to wrong assumptions. I'll 
try to choose better next time.

> >
> > Consecutive commands were issued in a matter of minutes and differ slightly.
> >
> > Some totals / aggregates:
> > df – /data  83 141 382 144
> > du – /data  29 170 365 801
> > lsof – /data75 348 037 632
> > lsof – /data/base   74 975 969 280
> > lsof – /data/base (deleted) 53 769 936 896
> > lsof – /data/pg_xlog369 098 752
> > lsof – /data/pg_xlog (deleted)  201 326 592
> > lsof – /data/global 2 965 504
> >
> > It is clear that the server processes are keeping most of the files from 
> > being actually deleted.
> 
> Well the nature of database data files is they expand and/or contract as 
> needed. Unless you are getting rid of the actual object they refer to 
> they will not be deleted. The files WAL files in pg_xlog are a different 
> matter, but in the listing you sent they seem to be reasonable. There 
> are a couple of things off the top of my head that can cause data files 
> to expand unnecessarily:
> 1) Autovacuum is not aggressive enough.
> 2) There are open transactions keeping old tuples from being removed.
> 
>  From previous posts, you mentioned a 'permanent' connection to the 
> database. Are you sure it is not holding an open transaction?
> The pg_locks view would be a good place to start:
> http://www.postgresql.org/docs/9.1/interactive/view-pg-locks.html
> 


1) Running an ordinary VACUUM FULL ANALYZE and REINDEX through pgAdmin3 on all 
our databases in the cluster, plus the "postgres" database, didn't release a 
substantial amount of disk space - it might add up to maybe a few percent of 
the overall.


2) It doesn't seem there are any long-running transactions even though the PIDs 
do repeat during some time (but since the connections are kept open this seems 
reasonable):

postgres=# SELECT * FROM pg_locks ORDER BY pid;
locktype;database;relation;page;tuple;virtualxid;transactionid;classid;objid;objsubid;virtualtransaction;pid;mode;granted
relation;11874;1098585/101738;24367;AccessShareLock;t
virtualxid;85/101738;85/101738;24367;ExclusiveLock;t
virtualxid;20/788838;20/788838;24505;ExclusiveLock;t
virtualxid;14/923780;14/923780;24621;ExclusiveLock;t
virtualxid;76/139304;76/139304;24699;ExclusiveLock;t
virtualxid;55/19;55/19;24703;ExclusiveLock;t
virtualxid;59/363780;59/363780;24926;ExclusiveLock;t
(7 rows)

And after some time with a different invocation of psql (to let go of the PID):

postgres=# SELECT * FROM pg_locks ORDER BY pid;
locktype;database;relation;page;tuple;virtualxid;transactionid;classid;objid;objsubid;virtualtransaction;pid;mode;granted
virtualxid;56/410614;56/410614;25105;ExclusiveLock;t
virtualxid;3/667499;3/667499;25145;ExclusiveLock;t
relation;11874;1098585/101817;25171;AccessShareLock;t
virtualxid;85/101817;85/101817;25171;ExclusiveLock;t
(4 rows)


We are again approaching slowly the point that the server restart will be 
needed. If / when this happens, I'll provide statistics again.

Best regards,
Marek Kielar


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Interval "1 month" is equals to interval "30 days" - WHY?

2012-08-07 Thread Dmitry Koterov
Hello.

I've just discovered a very strange thing:

SELECT '1 mon'::interval = '30 days'::interval   --> TRUE???

This returns TRUE (also affected when I create a unique index using an
interval column). Why?

I know that Postgres stores months, days and seconds in interval values
separately. So how can I make "=" compare intervals "part-by-part" rather
than treat "1 mon" as "30 days"?

P.S.
Reproduced at least in 8.4 and 9.1.
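One workaround (not mentioned in the thread, so treat it as a sketch): the
text form of an interval preserves the separate months/days/seconds fields,
so comparing the textual representations distinguishes values that the
interval "=" operator normalizes together.

```sql
-- interval "=" normalizes 1 mon to 30 days before comparing:
SELECT '1 mon'::interval = '30 days'::interval;                  -- true
-- comparing the text form keeps the fields separate:
SELECT ('1 mon'::interval)::text = ('30 days'::interval)::text;  -- false
```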


[GENERAL] Memory error in user-defined aggregation function

2012-08-07 Thread Adriaan Joubert
Hi,

I've implemented an aggregation function to compute quartiles in C
borrowing liberally from orafce code. I use this code in a windowing
context and it worked fine until today - and I'm not sure what
changed. This is on 9.1.2 and I have also tried it on 9.1.4.

What I have determined so far (by sprinkling a lot of elog's
throughout the code) is that it does not seem to be data specific,
although it seems to depend on the number of aggregations I do (up to
about 1250 seems to be fine, beyond that it chokes). I also
established that there does not seem to be a problem with the transfer
function, and the data is accumulated without any issues. The error I
see is in the call to first_quartile_final (listed below). The pointer
to the transfer data structure is not null, but accessing the field
mstate->nelems causes a segflt. So the transfer data structure pointer
is bogus.

I've recompiled postgres with debugging enabled and have connected to
the backend with gdb, but haven't had any joy in persuading gdb to
actually stop in the correct file so that I can step through. I'll
keep on trying to make some headway with that.

In the meantime I would appreciate any comments as to whether the
approach taken is the right one, and whether additional checks can be
inserted to avoid these segmentation faults.

Many thanks,

Adriaan


My transfer data structure is

typedef struct
{
  int len; /* allocated length */
  int nextlen; /* next allocated length */
  int nelems; /* number of valid entries */
  float8  *values;
} quartile_state;

On the first call to the aggregate function this data structure is
allocated as follows:

static quartile_state *
quartile_accummulate(quartile_state *mstate, float8 value,
MemoryContext aggcontext)
{
MemoryContext oldcontext;

if (mstate == NULL)
{
/* First call - initialize */
oldcontext = MemoryContextSwitchTo(aggcontext);
mstate = palloc(sizeof(quartile_state));
mstate->len = 512;
mstate->nextlen = 2 * 512;
mstate->nelems = 0;
mstate->values = palloc(mstate->len * sizeof(float8));
MemoryContextSwitchTo(oldcontext);
}
else
{
if (mstate->nelems >= mstate->len)
{
int newlen = mstate->nextlen;

oldcontext = MemoryContextSwitchTo(aggcontext);
mstate->nextlen += mstate->len;
mstate->len = newlen;
mstate->values = repalloc(mstate->values, mstate->len * 
sizeof(float8));
MemoryContextSwitchTo(oldcontext);
}
}   

mstate->values[mstate->nelems++] = value;

return mstate;
}


And the transfer function itself is

PG_FUNCTION_INFO_V1(quartile_transfer);
Datum
quartile_transfer(PG_FUNCTION_ARGS) {
MemoryContext   aggcontext;
quartile_state *state = NULL;
float8 elem;

if (!AggCheckCallContext(fcinfo, &aggcontext))
{
elog(ERROR, "quartile_transform called in non-aggregate 
context");
}

state = PG_ARGISNULL(0) ? NULL : (quartile_state *) 
PG_GETARG_POINTER(0);
if (PG_ARGISNULL(1))
PG_RETURN_POINTER(state);

elem = PG_GETARG_FLOAT8(1);

state = quartile_accummulate(state, elem, aggcontext);

PG_RETURN_POINTER(state);
}

The final function for the computation of the first quartile is

PG_FUNCTION_INFO_V1(first_quartile_final);
Datum
first_quartile_final(PG_FUNCTION_ARGS) {
quartile_state *state = NULL;
float8 result;

if (PG_ARGISNULL(0))
PG_RETURN_NULL();

state = (quartile_state *) PG_GETARG_POINTER(0);

/** HERE state->nelems causes a segflt */
if (state->nelems<4)
PG_RETURN_NULL();

result = quartile_result(state, 0.25);

PG_RETURN_FLOAT8(result);
}

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] warnings about invalid "search_path"

2012-08-07 Thread Gabriele Bartolini

Hi Samba,

  first: do not worry, it is perfectly normal.

On Tue, 7 Aug 2012 16:25:14 +0530, Samba  wrote:

Hi all,
I'm seeing some weird errors in the postgres logs after upgrading to
postgres-9.1(.3) about the schema added by default to the search path

 WARNING:  invalid value for parameter "search_path": "system_data"
DETAIL:  schema "system_data" does not exist

We do have a user named "system_data" and a schema with the same
name...


A schema is something that belongs to a database.

You have just set the search_path for a specific user ("system_data") 
to include "system_data" (am I right?). However, a user can 
theoretically connect to any database on the instance.


The error above is generated when you connect with that user 
("system_data") to a database that does not have the "system_data" 
schema.


Please let me have more information if my assumption is wrong.

Cheers,
Gabriele

Tip/Note: by default, search_path is set to search in the "$user" and 
public schemas. Therefore, if you connect using the "system_data" user, 
you do not need to force searching in that schema.


--
 Gabriele Bartolini - 2ndQuadrant Italia
 PostgreSQL Training, Services and Support
 gabriele.bartol...@2ndquadrant.it - www.2ndQuadrant.it

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How does connect privilege works?

2012-08-07 Thread Albe Laurenz
Shridhar Daithankar wrote:
> I am trying to setup a cluster for trac databases and want to isolate
each db, by assigning a specific
> user to a DB.
> 
> I followed the documentation but as shown in the following example,
limiting access by connect does
> not seem to be working.
> 
> What am I missing?

The fact that by default the CONNECT privilege is granted to
PUBLIC, so everybody can connect.

> shridhar@bheem ~$ createuser testuser1

> shridhar@bheem ~$ createdb testdb2
> 
> shridhar@bheem ~$ psql testdb2
> testdb2=# revoke connect ON database testdb2 FROM testuser1;
> REVOKE
> testdb2=# \q
> 
> shridhar@bheem ~$ psql -U testuser1 testdb2
> psql (9.1.4)
> Type "help" for help.
> testdb2=> \q

PostgreSQL privileges are additive, you cannot specifically deny
a privilege to a certain user when the privilege is granted to PUBLIC.

The REVOKE statement you quote does nothing (no CONNECT privilege
was granted to "testuser1").

What you'll have to do is first REVOKE CONNECT ON DATABASE testdb2
FROM PUBLIC and then grant it to those users that you want to be able
to connect.

A good idea would be to create a role "testdb2_users", grant this
role CONNECT and then add everybody to the role who should be allowed
to connect.
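The suggestion above might look like this (a sketch using the names from this
thread):

```sql
-- Stop everybody connecting by default, then re-grant via a group role.
REVOKE CONNECT ON DATABASE testdb2 FROM PUBLIC;
CREATE ROLE testdb2_users;
GRANT CONNECT ON DATABASE testdb2 TO testdb2_users;
-- Add each user who should be able to connect:
GRANT testdb2_users TO testuser1;
```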

Yours,
Laurenz Albe

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] JSON in 9.2: limitations

2012-08-07 Thread Craig Ringer

(Reposted as the list manager appears to have eaten the first copy):

Hey all

It seems to be surprisingly hard to build JSON structures with 
PostgreSQL 9.2's json features, because:


- There's no aggregate, function or operator that merges two or more 
objects; and

- there's no single-value "json_escape" or equivalent.

Take this example from the SO question 
http://stackoverflow.com/questions/11813976/in-postgres-is-there-an-easy-way-to-select-several-attr-val-rows-into-one-recor/11814255#11814255


Given:

create table t1 ( attr text primary key, val text );
insert into t1 values ( 'attr1', 'val1' );
insert into t1 values ( 'attr2', 'val3' );
insert into t1 values ( 'attr3', 'val3' );

Produce:

{ "attr1": "val1", "attr2": "val2", "attr3": "val3" }


It's very basic, but I couldn't work out a way of doing it that was safe 
if you also:


insert into t1 (attr,val) values ('at"tr', 'v"a"l');

which I found quite interesting.

With hstore there are several approaches that work:

select hstore( array_agg(attr), array_agg(val) ) from t1;

or

CREATE AGGREGATE hstore_agg ( basetype = hstore, sfunc = hs_concat, 
stype = hstore );


SELECT hstore_agg( attr => val ) FROM t1;
                                hstore_agg
-------------------------------------------------------------------------
 "at\"tr"=>"v\"a\"l", "attr1"=>"val1", "attr2"=>"val3", "attr3"=>"val3"
(1 row)


... but neither of these appear to be possible with json. Seems like 
there's a need for a:


json( text[], json[] )

and/or:

json_agg( json )

to allow the construction of json values. Both of these would also need 
funcs to create single json literals, a:


json_esc(anyelement) -> json

or at least:

json_esc(text) -> json


I'm not saying "... some coding fairy should go and magically 
create those". I'm interested in opinions. Am I missing something 
obvious? Is this sort of thing supposed to be done via PL/v8? Is it 
just that the json feature needed to get finished so it was kept small 
for the first release?


Do such functions exist outside the merged patch? If not, would it be 
helpful to have them written?


--
Craig Ringer

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How does connect privilege works?

2012-08-07 Thread Craig Ringer

On 08/07/2012 11:51 AM, Shridhar Daithankar wrote:


testdb2=# revoke connect ON database testdb2 FROM testuser1;

REVOKE


You can't revoke a permission that isn't set. PostgreSQL doesn't have 
explicit deny rules, so you can only remove a grant.


The documentation on databases doesn't seem to cover that very well, 
with no listing of the default permissions.


You probably want to:

REVOKE CONNECT ON DATABASE testdb2 FROM public;

which will leave only superusers and the database owner able to connect 
to the DB. You can then:


GRANT CONNECT ON DATABASE testdb2 TO whoever;


--
Craig Ringer


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general