[GENERAL] Archiving and recovering pg_stat_tmp

2013-06-20 Thread Sameer Thakur
Hello,
I was trying to figure out how one recovers server statistics to the
same snapshot to which a database is restored after PITR.
The steps I had in mind were:
1. Set up WAL archiving
2. On server shutdown, back up pg_stat_tmp along with the file
system level backup of the database
3. On server crash, set up the configuration for recovery mode
4. Restart the server, which replays WAL files and then moves from recovery
to normal mode
What will the behavior be regarding pg_stat_tmp? Will it be deleted on
startup? Is it possible to recover the same statistics as at the last server
shutdown? Can the statistics be recovered to the same PITR point?
Thank you
Sameer


[GENERAL] Snapshot backups

2013-06-20 Thread James Sewell
Hey All,

This is a message to confirm my thoughts / validate a possible approach.

In a situation where PGDATA and {XLOG, ARCHIVELOG} are on different SAN/NAS
volumes and a backup is to be initiated, do pg_start_backup and
pg_stop_backup need to be used?

I am using snapshots of each volume for backup.

My thinking is that they are not needed (although I realise it is good
practice).

As far as I can tell all they are doing is something like:

pg_start_backup:
  - create backup label
  - trigger checkpoint

pg_stop_backup:
  - remove backup label file
  - create backup history file
  - trigger log switch

There is nothing in here that is *required* from a backup point of view. Am
I missing anything?

James Sewell
Solutions Architect

Level 2, 50 Queen St,
Melbourne, VIC, 3000

P: 03 8370 8000   F: 03 8370 8099  W: www.lisasoft.com


Re: [GENERAL] Archiving and recovering pg_stat_tmp

2013-06-20 Thread Amit Langote
On Thu, Jun 20, 2013 at 3:17 PM, Sameer Thakur samthaku...@gmail.com wrote:
 Hello,
 I was trying to figure out how one recovers server statistics to the
 same snapshot to which a database is restored after PITR.
 The steps I had in mind were:
 1. Set up WAL archiving
 2. On server shutdown, back up pg_stat_tmp along with the file
 system level backup of the database
 3. On server crash, set up the configuration for recovery mode
 4. Restart the server, which replays WAL files and then moves from recovery
 to normal mode
 What will the behavior be regarding pg_stat_tmp? Will it be deleted on
 startup? Is it possible to recover the same statistics as at the last server
 shutdown? Can the statistics be recovered to the same PITR point?

The documentation mentions the following:

"When the server shuts down, a permanent copy of the statistics data
is stored in the global subdirectory, so that statistics can be
retained across server restarts."

http://www.postgresql.org/docs/9.2/static/monitoring-stats.html

Though I wonder if it was recently changed to $PGDATA/pg_stat instead
of $PGDATA/global, per the patch submitted in this discussion:

http://www.postgresql.org/message-id/1718942738eb65c8407fcd864883f...@fuzzy.cz

When I checked on my 9.4dev installation, I found $PGDATA/pg_stat and
there were per database .stat files.

--
Amit Langote




Re: [GENERAL] variadic args to C functions

2013-06-20 Thread Pavel Stehule
Hello

I wrote a lot of C VARIADIC functions - some examples are in core,
e.g. the format function:

   Schema   |   Name    | Result data type | Argument data types  |  Type
------------+-----------+------------------+----------------------+--------
 pg_catalog | concat    | text             | VARIADIC "any"       | normal
 pg_catalog | concat_ws | text             | text, VARIADIC "any" | normal
 pg_catalog | format    | text             | text, VARIADIC "any" | normal
(3 rows)

Look at http://okbob.blogspot.cz/2010/11/new-version-of-pst-collection-is.html
for source code (string functions).

The code should be the same - there is no difference between external and
internal functions.
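
At the SQL level a minimal sketch looks like this (the function name and the
sum-of-integers body are illustrative, not from this thread); a C
implementation is declared the same way and receives the VARIADIC arguments
as a single int4[] array:

CREATE FUNCTION my_function(VARIADIC args int4[]) RETURNS bigint AS $$
    -- unnest the collected arguments and aggregate them
    SELECT sum(x)::bigint FROM unnest($1) AS t(x);
$$ LANGUAGE sql IMMUTABLE;

SELECT my_function(1, 2, 3);  -- returns 6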

Regards

Pavel


2013/6/20 Alan Nilsson anils...@apple.com:
 Has anyone got any pointers on implementing a C function in an extension that 
 takes variadic args?  I would like to do something like:

 select my_function(XXX,...);  where XXX will be between 1 and many integers.

 Possible?  I didn't see any examples in the contrib directory.








[GENERAL] Two kinds of different results while using create index concurrently

2013-06-20 Thread 高健
Hello:



I have a question about PG's CREATE INDEX CONCURRENTLY. I think it is
perhaps a bug.

I made two tables, tab01 and tab02; they have no relationship.

I thought CREATE INDEX CONCURRENTLY on tab02 would not be influenced by
a transaction on tab01.

But the results differ:



My first program: a transaction via ecpg (with a host variable in the where
condition); psql's create index concurrently succeeded.

My second program: a transaction via ecpg; psql's create index
concurrently was blocked until the ecpg program disconnected.

My third test: a transaction via psql; another psql's create
index concurrently succeeded.

My fourth test: a transaction via psql (with pg_sleep); another
psql's create index concurrently was blocked until the psql transaction
was done (commit).

I am using PostgreSQL 9.1.2. And on PostgreSQL 9.2.4, the result is the same.



My data:



[postgres@server bin]$ ./psql -U tester -d tester

psql (9.1.2)

Type "help" for help.

tester=> \d tab01
        Table "public.tab01"
 Column |         Type         | Modifiers
--------+----------------------+-----------
 id     | integer              |
 cd     | character varying(4) |

tester=> \d tab02
   Table "public.tab02"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer |
 value  | integer |

tester=> select * from tab01;
 id | cd
----+----
  1 | 14
  2 | 15
  3 | 14
(3 rows)

tester=> select * from tab02;
 id | value
----+-------
  1 |   100
  2 |   200
  3 |   300
(3 rows)

tester=>

-



My testing method for the first and second programs:

While my ecpg program is sleeping,
I open a terminal, connect to PG with psql,
then send: create index concurrently idx_tab02_id_new on tab02(id)

For my first program, I can build the index successfully.

For my second program, I cannot build the index; the sql statement is blocked
until the ecpg program disconnects from PG.



My tables tab01 and tab02 have no relationship,
and I don't think that my ecpg program will potentially use the index of
tab02.



In fact, if I look into the C program created by ecpg (test02.c),
I can find this:

--------------------------------------------------------------------------

   { ECPGdo(__LINE__, 0, 1, db_conn, 0, ECPGst_normal, "select count ( * ) from tab01 where cd = $1 ",
ECPGt_char,&(vcd),(long)4 + 1,(long)1,(4 + 1)*sizeof(char),
ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT,
ECPGt_int,&(vCount),(long)1,(long)1,sizeof(int),
ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);}

--------------------------------------------------------------------------

If I quote the $1 manually and then compile it, then I can create the index
concurrently while my ecpg program is running:



--------------------------------------------------------------------------

   { ECPGdo(__LINE__, 0, 1, db_conn, 0, ECPGst_normal, "select count ( * ) from tab01 where cd = '$1' ",
ECPGt_char,&(vcd),(long)4 + 1,(long)1,(4 + 1)*sizeof(char),
ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EOIT,
ECPGt_int,&(vCount),(long)1,(long)1,sizeof(int),
ECPGt_NO_INDICATOR, NULL , 0L, 0L, 0L, ECPGt_EORT);}

--------------------------------------------------------------------------





Here is my program's source:

***My first program:

-----

[root@server soft]# cat ./test01/test01.pc

#include <stdio.h>    /* for fprintf */
#include <unistd.h>   /* for sleep */

int main()
{
   EXEC SQL BEGIN DECLARE SECTION;
     int vCount;
     char vcd[4+1];
   EXEC SQL END DECLARE SECTION;

   EXEC SQL CONNECT TO 'tester@127.0.0.1:5432' AS db_conn
     USER tester IDENTIFIED BY tester;

   EXEC SQL AT db_conn SELECT COUNT(*)
     INTO :vCount FROM tab01;

   fprintf(stderr, "count is: %d\n", vCount);

   fprintf(stderr, "Before disconnect, sleep for 500 seconds\n");
   sleep(500);

   EXEC SQL DISCONNECT db_conn;

   fprintf(stderr, "After disconnect, sleep for 600 seconds\n");
   sleep(600);

   return 0;
}

[root@server soft]#





***My Second Program:

[root@server soft]# cat ./test02/test02.pc

#include <stdio.h>    /* for fprintf */
#include <string.h>   /* for memset/strncpy */
#include <unistd.h>   /* for sleep */

int main()
{
   EXEC SQL BEGIN DECLARE SECTION;
     int vCount;
     char vcd[4+1];
   EXEC SQL END DECLARE SECTION;

   EXEC SQL CONNECT TO 'tester@127.0.0.1:5432' AS db_conn
     USER tester IDENTIFIED BY tester;

   char *pCd = "14";

   memset(vcd, '\0', 5);
   strncpy(vcd, pCd, 4);

   EXEC SQL AT db_conn SELECT COUNT(*)
     INTO :vCount FROM tab01 WHERE cd = :vcd;

   fprintf(stderr, "count is: %d\n", vCount);

   fprintf(stderr, "Before disconnect, sleep for 500 seconds\n");
   sleep(500);

   EXEC SQL DISCONNECT db_conn;

   fprintf(stderr, "After disconnect, sleep for 600 seconds\n");
   sleep(600);

   return 0;
}

[root@server soft]#





Also, I can observe another strange phenomenon via psql with create
index concurrently:

This time I use two psql clients:



***My Third Test:


Re: [GENERAL] LDAP authentication timing out

2013-06-20 Thread Magnus Hagander
On Thu, Jun 20, 2013 at 7:24 AM, James Sewell james.sew...@lisasoft.comwrote:

 Hello All,

 I have the following config:

 host    samerole    +myrole    samenet    ldap
 ldapserver=ldap1,ldap2,ldap3 ldapbinddn=mybinddn
 ldapbindpasswd=mypass ldapbasedn=mybase ldapsearchattribute=myatt

 Usually auth works perfectly with LDAP (starting a session from psql using
 an LDAP connection, authenticating with the LDAP password, then exiting
 straight away), and I see this:

 2013-06-20 15:19:53 EST DEBUG:  edb-postgres child[15901]: starting with (
 2013-06-20 15:19:53 EST DEBUG:  forked new backend, pid=15901 socket=10
 2013-06-20 15:19:53 EST DEBUG:  edb-postgres
 2013-06-20 15:19:53 EST DEBUG:  dccn
 2013-06-20 15:19:53 EST DEBUG:  )
 2013-06-20 15:19:53 EST DEBUG:  InitPostgres
 2013-06-20 15:19:53 EST DEBUG:  my backend ID is 1
 2013-06-20 15:19:53 EST DEBUG:  StartTransaction
 2013-06-20 15:19:53 EST DEBUG:  name: unnamed; blockState:   DEFAULT;
 state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:
 2013-06-20 15:19:53 EST DEBUG:  received password packet
 2013-06-20 15:19:53 EST DEBUG:  CommitTransaction
 2013-06-20 15:19:53 EST DEBUG:  name: unnamed; blockState:   STARTED;
 state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:
 2013-06-20 15:19:56 EST DEBUG:  shmem_exit(0): 7 callbacks to make
 2013-06-20 15:19:56 EST DEBUG:  proc_exit(0): 3 callbacks to make
 2013-06-20 15:19:56 EST DEBUG:  exit(0)
 2013-06-20 15:19:56 EST DEBUG:  shmem_exit(-1): 0 callbacks to make
 2013-06-20 15:19:56 EST DEBUG:  proc_exit(-1): 0 callbacks to make
 2013-06-20 15:19:56 EST DEBUG:  reaping dead processes
 2013-06-20 15:19:56 EST DEBUG:  server process (PID 15901) exited with
 exit code 0

 However, around 10% of the time (although this varies) the session hangs
 after I type in my password until the auth timeout, and I see this:

 2013-06-20 15:07:46 EST DEBUG:  forked new backend, pid=15587 socket=10
 2013-06-20 15:07:46 EST DEBUG:  edb-postgres child[15587]: starting with (
 2013-06-20 15:07:46 EST DEBUG:  edb-postgres
 2013-06-20 15:07:46 EST DEBUG:  dccn
 2013-06-20 15:07:46 EST DEBUG:  )
 2013-06-20 15:07:46 EST DEBUG:  InitPostgres
 2013-06-20 15:07:46 EST DEBUG:  my backend ID is 1
 2013-06-20 15:07:46 EST DEBUG:  StartTransaction
 2013-06-20 15:07:46 EST DEBUG:  name: unnamed; blockState:   DEFAULT;
 state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:
 2013-06-20 15:07:46 EST DEBUG:  received password packet
 2013-06-20 15:08:46 EST DEBUG:  shmem_exit(1): 7 callbacks to make
 2013-06-20 15:08:46 EST DEBUG:  proc_exit(1): 3 callbacks to make
 2013-06-20 15:08:46 EST DEBUG:  exit(1)
 2013-06-20 15:08:46 EST DEBUG:  shmem_exit(-1): 0 callbacks to make
 2013-06-20 15:08:46 EST DEBUG:  proc_exit(-1): 0 callbacks to make
 2013-06-20 15:08:46 EST DEBUG:  reaping dead processes
 2013-06-20 15:08:46 EST DEBUG:  server process (PID 15587) exited with
 exit code 1

 Anyone have any ideas? I never see this with MD5.

 I can do multiple quick-fire binds from an LDAP application with the same
 bind DN with no problems.


Sounds like an issue with your ldap server, your network, or the ldap
client library, but it's kind of hard to tell. You're probably best off
getting a network trace of the traffic between the ldap server and
postgres, to see how far it gets at all - that's usually a good pointer
when it comes to timeouts.

Also, what version of postgres? (It looks from the names like this might be
edb advanced server and not actually postgres. In that case you might be
better off talking to the EDB people - they may have made some modifications
to the ldap code.)

What OS?
Versions?
What ldap client and version?
What ldap server?

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [GENERAL] Snapshot backups

2013-06-20 Thread Magnus Hagander
On Thu, Jun 20, 2013 at 8:45 AM, James Sewell james.sew...@lisasoft.comwrote:

 Hey All,

 This is a message to confirm my thoughts / validate a possible approach.

 In a situation where PGDATA and {XLOG, ARCHIVELOG} are on different
 SAN/NAS volumes and a backup is to be initiated do pg_start_backup and
 pg_stop_backup need to be used?

 I am using snapshots of each volume for backup.

 My thinking is that they are not needed (although I realise it is good
 practice).

 As far as I can tell all they are doing is something like:

 pg_start_backup:
   - create backup label
   - trigger checkpoint

 pg_stop_backup
   - remove backup label file
   - creates backup history file
   - trigger log switch

 There is nothing in here that is *required* from a backup point of view.
 Am I missing anything?


The backup functions also set internal state in the database, so you can't
just replace them by doing those operations manually.  You do need to call
those functions.
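
As a minimal sketch (the label text is illustrative), a snapshot-based base
backup would bracket the volume snapshots like this:

SELECT pg_start_backup('nightly_snapshot');
-- take the SAN/NAS snapshots of the PGDATA and WAL/archive volumes here
SELECT pg_stop_backup();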

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [GENERAL] Archiving and recovering pg_stat_tmp

2013-06-20 Thread Sameer Thakur
 The documentation mentions the following:
Thanks, but how does this relate to statistics recovery wrt PITR?
regards
Sameer


Re: [GENERAL] Postgres DB crashing

2013-06-20 Thread Richard Huxton

On 18/06/13 18:31, bhanu udaya wrote:

Hello,
Greetings.

My PostgreSQL (9.2) is crashing after certain load tests. Currently,
PostgreSQL is crashing when 800 to 1000 threads are run simultaneously
on a 10 million record schema. Not sure if we have to tweak some more
parameters of postgres. Currently, PostgreSQL is configured as
below on 7GB RAM on an Intel Xeon CPU E5507 2.27 GHz. Is this a postgres
limitation to support only 800 threads, or is other configuration
required? Please look at the log below with errors. Please reply.


max_connections              5000
shared_buffers               2024 MB
synchronous_commit           off
wal_buffers                  100 MB
wal_writer_delay             1000ms
checkpoint_segments          512
checkpoint_timeout           5 min
checkpoint_completion_target 0.5
checkpoint_warning           30s
work_mem                     1 GB
effective_cache_size         5 GB


Just to point out, your memory settings are set to allow *at least*:

 shared_buffers 2GB + (5000 connections * 1GB work_mem) = 5TB+

You don't have that much memory. You probably don't have that much disk.
This is never going to work.


As has been said, there's no way you can do useful work simultaneously 
with 1000 threads if you only have 4 cores - use a connection pooler. 
You'll also need to reduce work_mem to 1MB or so.


--
  Richard Huxton
  Archonet Ltd




Re: [GENERAL] Archiving and recovering pg_stat_tmp

2013-06-20 Thread Amit Langote
On Thu, Jun 20, 2013 at 6:05 PM, Sameer Thakur samthaku...@gmail.com wrote:
 The documentation mentions the following:
 Thanks, but how does this relate to statistics recovery wrt PITR?

Upon a clean server shutdown, the statistics files are stored in
the pg_stat (previously global/) directory, which persists across
server restarts. As far as I can understand, this might even apply to
a PITR, though that would need some testing to be sure.

--
Amit Langote




Re: [GENERAL] Postgres DB crashing

2013-06-20 Thread Merlin Moncure
On Thu, Jun 20, 2013 at 5:17 AM, Richard Huxton d...@archonet.com wrote:
 On 18/06/13 18:31, bhanu udaya wrote:

 Hello,
 Greetings.

 My PostgreSQL (9.2) is crashing after certain load tests. Currently,
 PostgreSQL is crashing when 800 to 1000 threads are run simultaneously
 on a 10 million record schema. Not sure if we have to tweak some more
 parameters of postgres. Currently, PostgreSQL is configured as
 below on 7GB RAM on an Intel Xeon CPU E5507 2.27 GHz. Is this a postgres
 limitation to support only 800 threads, or is other configuration
 required? Please look at the log below with errors. Please reply.


 max_connections              5000
 shared_buffers               2024 MB
 synchronous_commit           off
 wal_buffers                  100 MB
 wal_writer_delay             1000ms
 checkpoint_segments          512
 checkpoint_timeout           5 min
 checkpoint_completion_target 0.5
 checkpoint_warning           30s
 work_mem                     1 GB
 effective_cache_size         5 GB


 Just to point out, your memory settings are set to allow *at least*:

  shared_buffers 2GB + (5000 connections * 1GB work_mem) = 5TB+

 You don't have that much memory. You probably don't have that much disk.
 This is never going to work.

 As has been said, there's no way you can do useful work simultaneously with
 1000 threads if you only have 4 cores - use a connection pooler. You'll also
 need to reduce work_mem to 1MB or so.

Aside: if you have a particular query that needs extra work_mem, you can
always temporarily raise it at run time (unlike shared_buffers).
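
For instance (a sketch; the value is illustrative):

SET work_mem = '256MB';   -- affects only the current session
-- run the memory-hungry query here
RESET work_mem;           -- return to the server default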

The OP needs to explore the use of a connection pooler, in particular
pgbouncer.  Anyway, none of this explains why the server is actually crashing.

merlin




Re: [GENERAL] Exporting Data

2013-06-20 Thread David Johnston
guilherme wrote
 I need to get some information from the database and export it.
 
 Is there a way to make PostgreSQL export each data line to a new file?
 Like this:
 1 FIRST LINE --> line1.txt;
 2 SECOND LINE --> line2.txt;
 3 THIRD LINE --> line3.txt
 ...
 and so on...
 
 I know that I can export all the information into a single file, but I need
 to split that information into several files.
 I've already searched everywhere and didn't find a solution.
 
 Can anybody help?
 
 Thanks in advance.

I doubt it.  You should export to a single file and then use another tool to
perform the split.  What platform are you working on?  If it's Linux, using
psql | some_splitting_command should be doable.

I guess you could put your query into a function and use procedural language
functionality to do that, but I'm not sure of the necessary syntax.
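
A minimal sketch of that idea (the table name, id column, and output
directory are hypothetical; COPY ... TO a server-side file also requires
superuser rights):

CREATE OR REPLACE FUNCTION export_rows_to_files(out_dir text)
RETURNS integer AS $$
DECLARE
    r record;
    n integer := 0;
BEGIN
    -- one COPY per row: writes line1.txt, line2.txt, ...
    FOR r IN SELECT id FROM my_table ORDER BY id LOOP
        n := n + 1;
        EXECUTE format(
            'COPY (SELECT * FROM my_table WHERE id = %s) TO %L',
            r.id, out_dir || '/line' || n || '.txt');
    END LOOP;
    RETURN n;
END;
$$ LANGUAGE plpgsql;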

There may be third-party ETL tools that fulfill this need as well.

David J.









[GENERAL] Do not understand high estimates of index scan vs seq scan

2013-06-20 Thread Antonio Goméz Soto
Hi all,

I do not understand why PostgreSQL estimates an index scan at only about
half the cost of a seq scan:

system=# explain select * from queuelog;
                            QUERY PLAN
--------------------------------------------------------------------
 Seq Scan on queuelog  (cost=0.00..20530.29 rows=610929 width=148)
(1 row)

system=# explain select * from queuelog where queuelog.start_time >=
'2013-05-20 8:30' and queuelog.start_time <= '2013-06-21 17:0';
                               QUERY PLAN
---------------------------------------------------------------------------
 Index Scan using queuelog_start_time on queuelog  (cost=0.00..13393.18
 rows=316090 width=148)
   Index Cond: ((start_time >= '2013-05-20 08:30:00+02'::timestamp with time
zone) AND (start_time <= '2013-06-21 17:00:00+02'::timestamp with time zone))


Is that solely because it needs to compare each index value to a fixed
date/time? I would assume the index would be much smaller than the actual
data, or is it only based on the number of rows?


Thanks,
Antonio


PS: here's the queuelog definition:

                                   Table "public.queuelog"
      Column      |           Type           |                Modifiers
------------------+--------------------------+--------------------------------------------------------
 id               | integer                  | not null default nextval('queuelog_id_seq'::regclass)
 created          | timestamp with time zone | not null default now()
 lastupdate       | timestamp with time zone | not null default now()
 start_time       | timestamp with time zone | not null default now()
 sessionid        | character varying(50)    | not null default ''::character varying
 call_seq         | integer                  | not null default 1
 queue            | integer                  | not null default 1
 dial             | character varying(24)    | not null default ''::character varying
 agent            | integer                  | not null default 1
 agents           | integer                  | not null default 0
 agents_logged_in | integer                  | not null default 0
 agents_avail     | integer                  | not null default 0
 queue_pos        | integer                  | not null default 1
 waittime         | numeric                  | not null default (0)::numeric
 ringtime         | numeric                  | not null default (0)::numeric
 talktime         | numeric                  | not null default (0)::numeric
 cause            | integer                  | not null default 16
 from_function    | character varying(24)    |
 from_lookupid    | integer                  | not null default 1
 to_function      | character varying(24)    |
 to_lookupid      | integer                  | not null default 1
 maxcallers       | integer                  | not null default 0
Indexes:
    "queuelog_pkey" PRIMARY KEY, btree (id)
    "queuelog_start_time" btree (start_time)




Re: [GENERAL] Exporting Data

2013-06-20 Thread Adrian Klaver

On 06/20/2013 06:06 AM, David Johnston wrote:

guilherme wrote

I need to get some information from the database and export it.

Is there a way to make PostgreSQL export each data line to a new file?
Like this:
1 FIRST LINE --> line1.txt;
2 SECOND LINE --> line2.txt;
3 THIRD LINE --> line3.txt
...
and so on...

I know that I can export all the information into a single file, but I need
to split that information into several files.
I've already searched everywhere and didn't find a solution.

Can anybody help?

Thanks in advance.


I doubt it.  You should export to a single file then use another tool to
perform the split.  What platform and you working on?  If its Linux using
psql | some_splitting_command should be doable.

I guess you put your query into a function and use procedural language
functionality to do that but I'm not sure on the necessary syntax.

There may be third-party ETL tools that fulfill this need as well.


One I recently ran across is Dataset:

https://github.com/pudo/dataset

Docs:

https://dataset.readthedocs.org/en/latest/

In particular freezefile:

https://dataset.readthedocs.org/en/latest/freezefile.html

mode specifies whether the query output is to be combined into a single 
file (list) or whether a file should be generated for each result row 
(item).




David J.





--
Adrian Klaver
adrian.kla...@gmail.com




[GENERAL] Order by with column ordinal and collate - fails to parse

2013-06-20 Thread Tim Kane
Hi all,

I seem to be having problems constructing a query that combines the use of
distinct, order by, and collate key words.

For instance:

# select distinct(value) from properties order by 1 collate "C";
ERROR:  collations are not supported by type integer
LINE 1: ... distinct(value)  from properties order by 1 collate "C"...
                                                        ^
# select distinct(value) from properties order by distinct(value) collate
"C";
ERROR:  syntax error at or near "distinct"
LINE 1: ...ct distinct(value)  from properties order by distinct(v...
                                                        ^
# select distinct(value) as foo from properties order by foo collate "C";
ERROR:  column "foo" does not exist
LINE 1: ...tinct(value) as foo from properties order by foo collat...


Am I just being a numpty here? I can work around it with a subquery, but
this seems like a bug to me. Particularly the first example where my
ordinal field reference is treated as an integer literal. I should note
that the field 'value' is of type 'text' (not integer).

Any input appreciated. Thanks :)


Re: [GENERAL] Postgres DB crashing

2013-06-20 Thread Alan Hodgson
On Thursday, June 20, 2013 07:52:21 AM Merlin Moncure wrote:
 OP needs to explore use of connection pooler, in particular pgbouncer.
  Anyways none of this explains why the server is actually crashing.

It might be hitting file descriptor limits. I didn't dig into the earlier part 
of this thread much, though.





Re: [GENERAL] Order by with column ordinal and collate - fails to parse

2013-06-20 Thread Adrian Klaver

On 06/20/2013 07:05 AM, Tim Kane wrote:

Hi all,

I seem to be having problems constructing a query that combines the use
of distinct, order by, and collate key words.

For instance:

# select distinct(value) from properties order by 1 collate "C";
ERROR:  collations are not supported by type integer
LINE 1: ... distinct(value)  from properties order by 1 collate "C"...
                                                        ^

How about:

select distinct(value) collate "C" from properties order by 1;



Am I just being a numpty here? I can work around it with a subquery, but
this seems like a bug to me. Particularly the first example where my
ordinal field reference is treated as an integer literal. I should note
that the field 'value' is of type 'text' (not integer).

Any input appreciated. Thanks :)




--
Adrian Klaver
adrian.kla...@gmail.com




Re: [GENERAL] Order by with column ordinal and collate - fails to parse

2013-06-20 Thread Tim Kane
Nice one. Yep, that works. Cheers.
I'll submit a bug report for this, though I'm on the fence as to whether
this is actually a bug per se. I would have reasonably expected my
original syntax to have worked (as it does without any ORDER BY).


On Thu, Jun 20, 2013 at 3:44 PM, Adrian Klaver adrian.kla...@gmail.comwrote:

 On 06/20/2013 07:05 AM, Tim Kane wrote:

 Hi all,

 I seem to be having problems constructing a query that combines the use
 of distinct, order by, and collate key words.

 For instance:

 # select distinct(value) from properties order by 1 collate "C";
 ERROR:  collations are not supported by type integer
 LINE 1: ... distinct(value)  from properties order by 1 collate "C"...
                                                         ^

 How about:

 select distinct(value) collate "C" from properties order by 1;



 Am I just being a numpty here? I can work around it with a subquery, but
 this seems like a bug to me. Particularly the first example where my
 ordinal field reference is treated as an integer literal. I should note
 that the field 'value' is of type 'text' (not integer).

 Any input appreciated. Thanks :)



 --
 Adrian Klaver
 adrian.kla...@gmail.com



[GENERAL] Replication with Drop: could not open relation with OID

2013-06-20 Thread salah jubeh
Hello, 


I have a database server which does complex view calculations; the results
of those views are shipped to other database servers, which have high
client loads, via a simple replication tool.

The tool creates a table and indexes based on a predefined configuration,
then drops the table that needs to be synched and renames the temporary
tables, i.e.:


BEGIN;
DROP TABLE IF EXISTS y;  -- the table I want to replace
ALTER TABLE x RENAME TO y;  -- x contains the data synched from the server
                            -- (already created)
ALTER INDEX ... RENAME TO ...;  -- rename indexes
COMMIT;



In versions 8.3, 8.4, and 9.1, I get "could not open relation with OID"
errors; however, with version 9.2 everything works fine. I tried to lock
the table in access exclusive mode before dropping it, i.e.:

BEGIN;
LOCK TABLE y IN ACCESS EXCLUSIVE MODE;
DROP TABLE IF EXISTS y;  -- the table I want to replace
ALTER TABLE x RENAME TO y;  -- x is the temporary table
ALTER INDEX x_x_name_idx RENAME TO y_x_name_idx;  -- rename indexes
COMMIT;

But I still get the same errors.  

I have seen this post:

http://dba.stackexchange.com/questions/16909/rotate-a-table-in-postgresql

and I used the same strategy for testing. In version 9.2 I was not able to
generate the error at all. In 8.3, 8.4, and 9.1 I was able to generate the
errors.


Since the tables I am creating are quite big (several million records), I
am using drop and rename to speed up the creation. For small table sizes
this problem does not appear often, but in my case it pops up often because
of the table size.


Is there any way to solve this for the mentioned versions?
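
One possible workaround to consider (a sketch, not tested on those versions):
keep the table's OID stable by replacing its contents instead of the table
itself, since the error suggests other sessions still reference the old OID.
This rewrites the data once more, so it is slower than a rename for very
large tables:

BEGIN;
TRUNCATE y;                     -- y keeps its OID
INSERT INTO y SELECT * FROM x;  -- copy over the freshly built data
DROP TABLE x;
COMMIT;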


Regards

[GENERAL] coalesce function

2013-06-20 Thread itishree sukla
Hi All,

I am using coalesce(firstname, lastname) to get lastname if firstname is
NULL, or either way around. I am having data where instead of NULL there is
a blank (I mean something like ''), for which coalesce does not work. Is
there any workaround or other function available in PostgreSQL? Please do
let me know.


Regards,
Itishree


Re: [GENERAL] coalesce function

2013-06-20 Thread Leif Biberg Kristensen
Torsdag 20. juni 2013 21.45.02 skrev itishree sukla:
 Hi All,
 
 I am using coalesce(firstname, lastname) to get lastname if firstname is
 NULL, or either way around. I am having data where instead of NULL there
 is a blank (I mean something like ''), for which coalesce does not work.
 Is there any workaround or other function available in PostgreSQL? Please
 do let me know.

CASE WHEN firstname IS NOT NULL AND firstname <> '' THEN firstname ELSE lastname END;

regards, Leif




Re: [GENERAL] coalesce function

2013-06-20 Thread Serge Fonville
Hi,

http://www.postgresql.org/docs/9.1/static/functions-conditional.html describes
NULLIF; combined with COALESCE it should answer your request.
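
For example (a sketch; the table name is illustrative):

SELECT COALESCE(NULLIF(firstname, ''), lastname) FROM people;

NULLIF(firstname, '') returns NULL when firstname is empty, so COALESCE then
falls through to lastname.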

HTH

Kind regards/met vriendelijke groet,

Serge Fonville

http://www.sergefonville.nl

Convince Microsoft!
They need to add TRUNCATE PARTITION in SQL Server
https://connect.microsoft.com/SQLServer/feedback/details/417926/truncate-partition-of-partitioned-table


2013/6/20 itishree sukla itishree.su...@gmail.com

 Hi All,

 I am using coalesce(firstname, lastname) to get lastname if firstname is
 NULL, or either way around. I am having data where instead of NULL there
 is a blank (I mean something like ''), for which coalesce does not work.
 Is there any workaround or other function available in PostgreSQL? Please
 do let me know.


 Regards,
 Itishree



[GENERAL] Exporting Data

2013-06-20 Thread guilherme
I need to get some information from the database and export it.

Is there a way to make PostgreSQL export each data line to a new file?
Like this:
1 FIRST LINE --> line1.txt;
2 SECOND LINE --> line2.txt;
3 THIRD LINE --> line3.txt
...
and so on...

I know that I can export all the information into a single file, but I need
to split that information into several files.
I've already searched everywhere and didn't find a solution.

Can anybody help?

Thanks in advance.







[GENERAL] unexpected external sort Disk

2013-06-20 Thread Manuel Kniep
Hello,

I have table with 37 million entries the whole table has a size of  2.3 GB 

Although I have set the work_mem to 10 GB 

I see the an unexpected external sort Disk in Explain Analyze for around 650MB 
of data

 EXPLAIN ANALYZE  SELECT application_id, price_tier FROM application_prices 
order by application_id, created_at;

                                   QUERY PLAN
--------------------------------------------------------------------------------
 Sort  (cost=5284625.89..5378196.50 rows=37428244 width=8) (actual
 time=36972.658..40618.161 rows=37428244 loops=1)
   Sort Key: application_id, created_at
   Sort Method: external sort  Disk: 658568kB
   ->  Seq Scan on application_prices  (cost=0.00..576597.44 rows=37428244
       width=8) (actual time=0.012..6259.923 rows=37428244 loops=1)
 Total runtime: 42999.882 ms
(5 rows)

Does anyone have an idea what I'm missing?

Thanks

Manuel





Re: [GENERAL] Two kinds of different results while using create index concurrently

2013-06-20 Thread Jeff Janes
On Thu, Jun 20, 2013 at 1:27 AM, 高健 luckyjack...@gmail.com wrote:

 Hello:

 I have a question about PG's CREATE INDEX CONCURRENTLY. I think it is
 perhaps a bug.

 I made two tables, tab01 and tab02; they have no relationship.

 I thought CREATE INDEX CONCURRENTLY on tab02 would not be influenced by
 a transaction on tab01.

 But the results differ:


This is expected.  In order to not interfere with normal activity, a
concurrent index build has to volunteer to be blocked by such activity
instead.  From the docs: "When this option is used, PostgreSQL must perform
two scans of the table, and in addition it must wait for all existing
transactions that could potentially use the index to terminate."

Now in your case, perhaps the argument could be made that the open
transaction could not potentially use the index being built, but there is
no convenient way for PostgreSQL to detect that fact.
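
One way to observe the wait from a third session (a sketch for 9.2; on 9.1
the columns are named procpid and current_query):

SELECT l.pid, l.locktype, l.granted, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;  -- the blocked CREATE INDEX CONCURRENTLY shows up here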

 Cheers,

Jeff


Re: [GENERAL] Do not understand high estimates of index scan vs seq scan

2013-06-20 Thread Jeff Janes
On Thu, Jun 20, 2013 at 6:47 AM, Antonio Goméz Soto 
antonio.gomez.s...@gmail.com wrote:

 Hi all,

 I do not understand why postgreSQL estimates an index scan only half as
 fast as a seq scan:

 system=# explain select * from queuelog;QUERY
 PLAN
 ---
  Seq Scan on queuelog  (cost=0.00..20530.29 rows=610929 width=148)
 (1 row)

 system=# explain select * from queuelog where queuelog.start_time >=
 '2013-05-20 8:30' and queuelog.start_time <= '2013-06-21 17:0';

                                QUERY PLAN
 ---------------------------------------------------------------------------
  Index Scan using queuelog_start_time on queuelog  (cost=0.00..13393.18
  rows=316090 width=148)
    Index Cond: ((start_time >= '2013-05-20 08:30:00+02'::timestamp with
 time zone) AND (start_time <= '2013-06-21 17:00:00+02'::timestamp with time
 zone))


 Is that solely because it needs to compare each index value to a fixed
 date/time?
 I would assume the index would be much smaller than the actual data, or is
 it only based on the number of rows?



Surely the index is smaller, but it has to visit both the index and the
table, because the index cannot satisfy the select *, and possibly for
visibility reasons as well.

The table must be well-clustered on the start_time column, or else the
estimate would be even worse.
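
If only the indexed column is needed, a narrower query can avoid the table
visits entirely on 9.2+, where index-only scans exist (a sketch reusing the
column from the original post):

EXPLAIN SELECT start_time FROM queuelog
WHERE start_time >= '2013-05-20 08:30' AND start_time <= '2013-06-21 17:00';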

Cheers,

Jeff


Re: [GENERAL] unexpected external sort Disk

2013-06-20 Thread Jeff Janes
On Thu, Jun 20, 2013 at 6:12 AM, Manuel Kniep rap...@adeven.com wrote:

 Hello,

 I have a table with 37 million entries; the whole table has a size of 2.3 GB.

 Although I have set work_mem to 10 GB



There is one piece of memory used in in-memory sorting that (currently) has
to be a single contiguous allocation, and that piece is limited to 1GB.
 This means you can't always use the entire amount of work_mem declared,
especially when sorting a very large number of very short rows, as you seem
to be doing.

There is another rounding issue that means sometimes as little as 512MB
of that 1GB is actually used.  This part is probably fixed for 9.3.


Cheers,

Jeff


Re: [GENERAL] PSA: If you are running Precise/12.04 upgrade your kernel.

2013-06-20 Thread Shaun Thomas

On 06/17/2013 04:00 PM, Joshua D. Drake wrote:


http://postgresql.1045698.n5.nabble.com/Ubuntu-12-04-3-2-Kernel-Bad-for-PostgreSQL-Performance-td5735284.html

tl;dr for that thread seems to be a driver problem (fusionIO?), I'm
unsure if Ubuntu specific or in the upstream kernel.


That instance wasn't a driver problem. The problem was that the FusionIO 
driver uses kernel threads to perform IO, and it seems that several of 
the 3.x kernels have issues with task migration using the new CFS CPU 
scheduler which replaced the O(1) one.


The next thread related to this that fixed our particular case was this one:

http://www.postgresql.org/message-id/50e4aab1.9040...@optionshouse.com

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
312-676-8870
stho...@optionshouse.com





Re: [GENERAL] Exporting Data

2013-06-20 Thread Alban Hertroys
On Jun 20, 2013, at 14:33, guilherme guilhe...@quirius.com.br wrote:

 I need to get some information from the database and export it.
 
 Is there a way to make PostgreSQL export each data line to a new file?
 Like this:
 1 FIRST LINE --> line1.txt;
 2 SECOND LINE --> line2.txt;
 3 THIRD LINE --> line3.txt
 ...
 and so on...
 
 I know that I can export all the information into a single file, but I need
 to split that information into several files.
 I've already searched everywhere and didn't find a solution.
 
 Can anybody help?


That's a sufficiently unique requirement that there probably is no way to do 
that natively.

Using a scripting language is probably your best bet. If you're already 
familiar with some, pick one of those. If not, I suggest Python (with the 
psycopg2 postgresql driver). There's even a version for Windows if that's what 
you're using.

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.





[GENERAL] Problem with left join when moving a column to another table

2013-06-20 Thread Jason Long
I am having some problems moving a column to another table and fixing
some views that rely on it.  I want to move the area_id column from
t_offerprice_pipe to t_offerprice and then left join the results.

When I have only one table I get the correct results.  area_id is
currently in t_offerprice_pipe.  The working portion of the query is
below.

I am joining price.t_offerprice_pipe twice because I am looking for
a wildcard with manufacturer_id = -100 that has lower precedence than a
specific manufacturer_id.

LEFT JOIN t_offerprice_pipe opp ON opp.size_id = st.size_id AND
                                   opp.manufacturer_id = st.manufacturer_id AND
                                   opp.area_id = c.area_id
LEFT JOIN price.t_offerprice_pipe opam ON opam.size_id = st.size_id AND
                                          opam.manufacturer_id = (-100) AND
                                          opam.area_id = c.area_id


After moving the column to t_offerprice I am attempting to add a second
left join, but it is not working as I expected.  I am getting multiple
results from this query.

LEFT JOIN t_offerprice_pipe opp ON opp.size_id = st.size_id AND
                                   opp.manufacturer_id = st.manufacturer_id
LEFT JOIN t_offerprice op ON op.id = opp.id AND
                             op.area_id = c.area_id
LEFT JOIN price.t_offerprice_pipe oppam ON oppam.size_id = st.size_id AND
                                           oppam.manufacturer_id = (-100)
LEFT JOIN t_offerprice opam ON opam.id = oppam.id AND
                               opam.area_id = c.area_id

This is a stripped down version of the query for clarity.

I tried moving the condition into the where clause with no success.

I would greatly appreciate any advice on rewriting this query.



Re: [GENERAL] PSA: If you are running Precise/12.04 upgrade your kernel.

2013-06-20 Thread Scott Marlowe
Good to know. I've got a few spare machines I might be able to test
3.2 kernels on in the next few months.


On Thu, Jun 20, 2013 at 12:54 PM, Shaun Thomas stho...@optionshouse.com wrote:
 On 06/17/2013 04:00 PM, Joshua D. Drake wrote:


 http://postgresql.1045698.n5.nabble.com/Ubuntu-12-04-3-2-Kernel-Bad-for-PostgreSQL-Performance-td5735284.html

 tl;dr for that thread seems to be a driver problem (fusionIO?), I'm
 unsure if Ubuntu specific or in the upstream kernel.


 That instance wasn't a driver problem. The problem was that the FusionIO
 driver uses kernel threads to perform IO, and it seems that several of the
 3.x kernels have issues with task migration using the new CFS CPU scheduler
 which replaced the O(1) one.

 The next thread related to this that fixed our particular case was this one:

 http://www.postgresql.org/message-id/50e4aab1.9040...@optionshouse.com

 --
 Shaun Thomas
 OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604
 312-676-8870
 stho...@optionshouse.com




-- 
To understand recursion, one must first understand recursion.




Re: [GENERAL] Problem with left join when moving a column to another table

2013-06-20 Thread David Johnston
Jason Long-2 wrote
 I am having some problems moving a column to another table and fixing
 some views that rely on it.  I want to move the area_id column from
 t_offerprice_pipe to t_offerprice and then left join the results.
 
 When I have only one table I get the correct results.  area_id is
 currently in the t_offerprice_pipe. The working portion on the query is
 below.

Maybe someone else can make sense of your partial examples but I cannot. 
I'd suggest creating self-contained queries that exhibit both the correct
and incorrect behavior.  Use the following template:

WITH from_table_not_specified (col1, col2) AS (
VALUES (1, 1), (2, 2)
)
, t_offerprice_pipe () AS (
VALUES (...), ()
)
, to_offerprice (...) AS (
VALUES (...), (...)
)
/* working query */
SELECT * 
FROM from_table_not_specified
LEFT JOIN t_offerprice_pipe op1 ON ...
LEFT JOIN t_offerprice_pipe op2 ON ...

/* not working query using same or similar CTEs where possible. */
SELECT *
FROM ...
LEFT JOIN ...
LEFT JOIN ...
LEFT JOIN ...
LEFT JOIN ...

Without a working query it is really hard (impossible really) to debug
wrong number of rows problems.  Especially since the query itself is
possibly not the problem but rather your data model is flawed.

David J.








Re: [GENERAL] intagg

2013-06-20 Thread Arjen Nienhuis
On Thu, Jun 20, 2013 at 12:22 AM, Andrew Bartley ambart...@gmail.com wrote:
 Sorry that should be aggregate int_array_aggregate not function


 On 20 June 2013 08:16, Andrew Bartley ambart...@gmail.com wrote:

 Hi All,

 I am trying to use the intagg extension in 9.1.9.

 I have created the extension as such: CREATE EXTENSION intagg

 Then tried to use the function int_array_aggregate.

 Returns this message

 function int_array_aggregate(integer[]) does not exist

 select int_array_aggregate(transactions) from x

 x being

 create table x (transactions int4[]);

 Can anyone please advise..

 Thanks

 Andrew Bartley



int_array_aggregate (or array_agg) needs int as input, not int[]. You
can unnest first:

=> INSERT INTO x VALUES ('{4,5,6}');
INSERT 0 1
=> INSERT INTO x VALUES ('{1,20,30}');
INSERT 0 1
=> SELECT unnest(transactions) FROM x;
 unnest
--------
      4
      5
      6
      1
     20
     30
(6 rows)

=> SELECT array_agg(i) FROM (SELECT unnest(transactions) from x) AS j(i);
    array_agg
-----------------
 {4,5,6,1,20,30}
(1 row)

=> SELECT array_agg(i ORDER BY i) FROM (SELECT unnest(transactions)
from x) AS j(i);
    array_agg
-----------------
 {1,4,5,6,20,30}
(1 row)

=> SELECT array_agg(i ORDER BY i) FROM (SELECT unnest(transactions)
from x) AS j(i) GROUP BY i % 2;
  array_agg
-------------
 {4,6,20,30}
 {1,5}
(2 rows)




Re: [GENERAL] intagg

2013-06-20 Thread Tom Lane
Andrew Bartley ambart...@gmail.com writes:
 function int_array_aggregate(integer[]) does not exist

int_array_aggregate() takes integers, not arrays of integers.

Depending on exactly what semantics you'd like to have, you could
probably build a custom aggregate to do this without any new C code
--- try basing it on array_cat() for instance.

regression=# create aggregate myagg (anyarray) (
sfunc = array_cat,
stype = anyarray,
initcond = '{}');
CREATE AGGREGATE
regression=# select * from x;
 transactions 
--------------
 {1,2}
 {3,4,5}
(2 rows)

regression=# select myagg(transactions) from x;
    myagg    
-------------
 {1,2,3,4,5}
(1 row)

regards, tom lane




Re: [GENERAL] coalesce function

2013-06-20 Thread David Johnston
itishree sukla wrote
 Hi All,
 
 I am using coalesce(firstname, lastname) to get lastname if firstname is
 NULL, or either way around. I am having data where instead of NULL there
 is a blank (I mean something like ''), for which coalesce does not work.
 Is there any workaround or other function available in PostgreSQL? Please
 do let me know.
 
 
 Regards,
 Itishree

This is the solution I am currently using in my work:

Runs in 9.0

CREATE OR REPLACE FUNCTION coalesce_emptystring(VARIADIC in_ordered_actual
varchar[])
RETURNS varchar
AS $$

SELECT input
FROM ( SELECT unnest($1) AS input ) src
WHERE input IS NOT NULL AND input <> ''
LIMIT 1;

$$
LANGUAGE sql
STABLE
;

Same usage syntax as the built-in COALESCE but skips NULL and the
empty-string.  Note a string with only whitespace (i.e.,  '   ') is not
considered empty.

The problem with the CASE example provided is that, while it works in the
specific case you are solving, it does not readily generalize to more than
two inputs.

Are you positive the lastname will always have a value?  You should
consider a last-resort default to ensure that the column never returns a
NULL.

coalesce_emptystring(firstname, lastname, 'Name Unknown')











Re: [GENERAL] Problem with left join when moving a column to another table

2013-06-20 Thread Jason Long
David,

Thank you very much for your response.
Below is a script that will reproduce the problem with comments
included.

/***/

--drop table t_item;
--drop table t_price_base_table;
--drop table t_price_original_with_area_id;

--this table represents inventory line items
CREATE TABLE t_item
(
  id bigint NOT NULL,
  size_id bigint NOT NULL, 
  area_id bigint NOT NULL,   
  CONSTRAINT pk_t_item PRIMARY KEY (id)
);

INSERT INTO t_item VALUES (1, 1, 10);
INSERT INTO t_item VALUES (2, 4, 1);
INSERT INTO t_item VALUES (3, 19, 1);

-- I want to move the area_id (and other columns not listed here) to
-- another base table and left join it
CREATE TABLE t_price_original_with_area_id
(
  id bigint NOT NULL,
  size_id bigint NOT NULL, 
  area_id bigint NOT NULL, 
  CONSTRAINT pk_t_price_original_with_area_id PRIMARY KEY (id)
);

INSERT INTO t_price_original_with_area_id VALUES (162, 4, 6);
INSERT INTO t_price_original_with_area_id VALUES (161, 4, 2);
INSERT INTO t_price_original_with_area_id VALUES (159, 4, 1);
INSERT INTO t_price_original_with_area_id VALUES (638, 19, 9);
INSERT INTO t_price_original_with_area_id VALUES (633, 19, 14);
INSERT INTO t_price_original_with_area_id VALUES (675, 19, 45);
INSERT INTO t_price_original_with_area_id VALUES (64, 19, 1);

-- My simplified base table
CREATE TABLE t_price_base_table
(
  id bigint NOT NULL,
  area_id bigint NOT NULL, 
  CONSTRAINT pk_t_price_base_table PRIMARY KEY (id)
);

-- insert to add the information I want to transfer to the base table so
-- I can drop the area_id column
insert into t_price_base_table (id, area_id) (select id, area_id from
t_price_original_with_area_id);

/*
This is the working query.  Note it joins on size_id and area_id in one
left join.
It produces 1 row for each item.  There is no match for item 1.  Items 2
and 3 match the price table.
*/
select it.*,
   pwoa.* 
from t_item it 
left join t_price_original_with_area_id pwoa on it.size_id=pwoa.size_id
and 
it.area_id=pwoa.area_id
order by it.id;
   
/*
This is the new query that is not working correctly.
I am trying to left join the base table by its id and area_id.
I need a left join because there is no guarantee that there is a
matching price.

The where clause seems to work, but the original query is much more
complicated, and I will need to do a similar join in many views.
*/
select it.*,
   pwoa.*,
   pbt.* 
from t_item it 
left join t_price_original_with_area_id pwoa on it.size_id=pwoa.size_id 
left join t_price_base_table pbt on pbt.id=pwoa.id and 
it.area_id=pbt.area_id
/*
where (pwoa.id is not null and pbt.id is not null) or 
  (pwoa.id is null and pbt.id is null)
*/
order by it.id;


/***/





On Thu, 2013-06-20 at 12:29 -0700, David Johnston wrote: 

 Jason Long-2 wrote
  I am having some problems moving a column to another table and fixing
  some views that rely on it.  I want to move the area_id column from
  t_offerprice_pipe to t_offerprice and then left join the results.
  
  When I have only one table I get the correct results.  area_id is
  currently in the t_offerprice_pipe. The working portion on the query is
  below.
 
 Maybe someone else can make sense of your partial examples but I cannot. 
 I'd suggest creating self-contained queries that exhibit both the correct
 and incorrect behavior.  Use the following template:
 
 WITH from_table_not_specified (col1, col2) AS (
 VALUES (1, 1), (2, 2)
 )
 , t_offerprice_pipe () AS (
 VALUES (...), ()
 )
 , to_offerprice (...) AS (
 VALUES (...), (...)
 )
 /* working query */
 SELECT * 
 FROM from_table_not_specified
 LEFT JOIN t_offerprice_pipe op1 ON ...
 LEFT JOIN t_offerprice_pipe op2 ON ...
 
 /* not working query using same or similar CTEs where possible. */
 SELECT *
 FROM ...
 LEFT JOIN ...
 LEFT JOIN ...
 LEFT JOIN ...
 LEFT JOIN ...
 
 Without a working query it is really hard (impossible really) to debug
 wrong number of rows problems.  Especially since the query itself is
 possibly not the problem but rather your data model is flawed.
 
 David J.
 
 
 
 
 
 




Re: [GENERAL] coalesce function

2013-06-20 Thread Chris Angelico
On Fri, Jun 21, 2013 at 7:36 AM, David Johnston pol...@yahoo.com wrote:
 SELECT input
 FROM ( SELECT unnest($1) AS input ) src
 WHERE input IS NOT NULL AND input  ''
 LIMIT 1;

Does this guarantee the order of the results returned? Using LIMIT
without ORDER BY is something I've learned to avoid.

ChrisA
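
A variant that makes the order explicit (a sketch of the same function body,
iterating array subscripts instead of unnesting):

SELECT $1[i]
FROM generate_subscripts($1, 1) AS i
WHERE $1[i] IS NOT NULL AND $1[i] <> ''
ORDER BY i
LIMIT 1;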




[GENERAL] Easiest way to compare the results of two queries row by row and column by column

2013-06-20 Thread Jason Long
Can someone suggest the easiest way to compare the results from two
queries to make sure they are identical?

I am rewriting a large number of views and I want to make sure that
nothing changes in the results.

Something like 

select compare_results('select * from v_old', 'select * from v_new');

I would want this to check that the row count and each row matched
column by column.

I am hoping someone has already written something for this...
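
One simple approach (a sketch; v_old/v_new as in the question, assuming both
expose the same columns - note this compares the results as multisets, so
row order is ignored): if this returns zero rows, the outputs are identical.

(SELECT * FROM v_old EXCEPT ALL SELECT * FROM v_new)
UNION ALL
(SELECT * FROM v_new EXCEPT ALL SELECT * FROM v_old);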





Re: [GENERAL] Problem with left join when moving a column to another table

2013-06-20 Thread David Johnston
Jason Long-2 wrote
 David,
 
 Thank you very much for your response.
 Below is a script that will reproduce the problem with comments
 included.
 
 
 
 /*
 This is the new query that is not working correctly.  
 I am trying to left join the base table by its id and area_id.  
 I need a left join because there is no guarantee that there is a
 matching price.

The query I am pretty sure you want is:

WITH item (i_id, size_id, area_id) AS (
VALUES (1,1,10),(2,4,1),(3,19,1)
)
, price_orig (p_id, size_id, area_id) AS (
VALUES
(162,4,6),(161,4,2),(159,4,1),(638,19,9),(633,19,14),(675,19,45),(64,19,1)
)
, simple_base (p_id, area_id) AS (
  SELECT p_id, area_id FROm price_orig
)
--SELECT * FROM item LEFT JOIN price_orig USING (size_id, area_id) -- original
/*  your problem query
SELECT * FROM item 
LEFT JOIN price_orig USING (size_id)
LEFT JOIN simple_base ON (price_orig.p_id = simple_base.p_id AND
item.area_id = simple_base.area_id)
*/

-- the correct query
SELECT * FROM item 
LEFT JOIN (SELECT p_id, price_orig.size_id, simple_base.area_id FROM
price_orig JOIN simple_base USING (p_id)) rebuild
 USING (size_id, area_id)

In the original query you used both size and area to link to the price
table.  Even though you have moved the area to a different table, in order
to keep the same semantics you have to continue performing the same
relational join.  If you intend something different then you are not
providing enough information, since neither size_id nor area_id is unique
within the price table.  Because the combination of the two just happens
not to be duplicated in the supplied data, the correct queries only return
a single result per item.

In the correct query I am providing, I am first re-joining (with an inner
join) the two tables so that they appear just like the original table
appeared.  Then I am joining the view to the items table using both size
and area.

The fundamental problem is that you really do not want right-hand tables in
left joins to refer to each other.

FROM item
LEFT JOIN price_orig ON item = price_orig
LEFT JOIN price_base ON item = price_base AND price_orig = price_base --
the second AND expression is the problem.

I do not even try to remember nesting rules for JOIN generally.  My basic
form is:

FROM
INNER*
LEFT* (with the ON clause only referring to tables joined via INNER)

If my solution requires a different usage I either move parts of the query
into CTEs or I start adding parentheses to explicitly group the different
pieces - and adding INNER JOIN where necessary, like I did for your
example.

David J.









Re: [GENERAL] Problem with left join when moving a column to another table

2013-06-20 Thread Jason Long
On Thu, 2013-06-20 at 15:37 -0700, David Johnston wrote: 

 Jason Long-2 wrote
  David,
  
  Thank you very much for your response.
  Below is a script that will reproduce the problem with comments
  included.
  
  
  
  /*
  This is the new query that is not working correctly.  
  I am trying to left join the base table by its id and area_id.  
  I need a left join because there is no guarantee that there is a
  matching price.
 
 The query I am pretty sure you want is:
 
 WITH item (i_id, size_id, area_id) AS (
 VALUES (1,1,10),(2,4,1),(3,19,1)
 )
 , price_orig (p_id, size_id, area_id) AS (
 VALUES
 (162,4,6),(161,4,2),(159,4,1),(638,19,9),(633,19,14),(675,19,45),(64,19,1)
 )
 , simple_base (p_id, area_id) AS (
   SELECT p_id, area_id FROM price_orig
 )
 --SELECT * FROM item LEFT JOIN price_orig USING (size_id, area_id)
 --original
 /*  your problem query
 SELECT * FROM item 
 LEFT JOIN price_orig USING (size_id)
 LEFT JOIN simple_base ON (price_orig.p_id = simple_base.p_id AND
 item.area_id = simple_base.area_id)
 */
 
 -- the correct query
 SELECT * FROM item 
 LEFT JOIN (SELECT p_id, price_orig.size_id, simple_base.area_id FROM
 price_orig JOIN simple_base USING (p_id)) rebuild
  USING (size_id, area_id)
 
 In the original query you used both size and area to link to the price
 table.  Even though you have moved the area to a different table, to keep
 the same semantics you have to continue performing the same relational
 join.  If you intend something different then you are not providing enough
 information, since neither size_id nor area_id is unique within the price
 table.  Because the combination of the two happens not to be duplicated in
 the supplied data, the correct queries only return a single result per
 item.

There is a unique constraint on the real price table.  I hadn't thought
of how I will enforce the constraint across two tables.
size_id and area_id will have to be unique across both
t_price_base_table and t_price_original_with_area_id.  I will want to
drop area_id from t_price_original_with_area_id.

What is the best way to implement the cross table unique constraint?


 
 In the correct query I am providing, I first re-join (with an inner
 join) the two tables so that they appear just as the original table did.
 Then I join that rebuilt relation to the item table using both size
 and area.
 
 The fundamental problem is that you really do not want right-hand tables in
 left joins to refer to each other.
 
 FROM item 
 LEFT JOIN price_orig ON item = price_orig
 LEFT JOIN price_base ON item = price_base AND price_orig = price_base --
 the second AND expression is the problem.
 
 I do not even try to remember nesting rules for JOIN generally.  My basic
 form is:
 
 FROM
 INNER*
 LEFT* (with the ON clause only referring to tables joined via INNER)
 
 If my solution requires a different usage I either move parts of the query
 into CTEs or I start adding parentheses to explicitly group the
 different pieces - adding INNER JOIN where necessary, as I did for your
 example.
 
 David J.
 
 
 
 
 
 
 




[GENERAL] Re: Easiest way to compare the results of two queries row by row and column by column

2013-06-20 Thread David Johnston
Jason Long-2 wrote
 Can someone suggest the easiest way to compare the results from two
 queries to make sure they are identical?

First thing that comes to mind:

 WITH 
   before_qry (col1, col2, col3) AS ( VALUES (1,1,1),(2,2,2),(3,3,3) )
 , after_qry  (col1, col2, col3) AS ( VALUES (1,1,1),(2,2,2),(3,3,3) )
 , before_array AS (SELECT array_agg(before_qry) AS before_agg_array
FROM before_qry)
 , after_array  AS (SELECT array_agg(after_qry) AS after_agg_array FROM
after_qry)
 SELECT *, before_agg_array = after_agg_array
 FROM before_array CROSS JOIN after_array

Basically turn the resultsets into arrays (of composites) and then see if
the arrays are the same.  This has issues with respect to column names and
comparable datatypes (i.e., if one column is bigint and the other is integer
they still compare equally).

One thought would be to only allow a view name (and possibly, separately, the
ORDER BY clause).  Catalog lookups can be used to check for identical view
output types.

No idea if something like this exists and is readily available.

David J.










Re: [GENERAL] Easiest way to compare the results of two queries row by row and column by column

2013-06-20 Thread Jeff Janes
On Thu, Jun 20, 2013 at 3:18 PM, Jason Long
mailing.li...@octgsoftware.com wrote:

 Can someone suggest the easiest way to compare the results from two
 queries to make sure they are identical?

 I am rewriting a large number of views and I want to make sure that
 nothing is changes in the results.

 Something like

 select compare_results('select * from v_old', 'select * from v_new');


I'd run:

select * from v_old
except
select * from v_new ;

And then

select * from v_new
except
select * from v_old ;

Both should return no rows.

However, if the queries can contain duplicate rows this will not detect
differences in the number of times a row is replicated, i.e. if one query
has a row 2 times and the other has it 3 times.  If you need to detect such
cases, I'd probably \copy out each query to a file, then use system tools
to sort and diff the files.
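
Another option along the same lines is EXCEPT ALL, which does respect
duplicate counts (a sketch, staying in SQL):

select * from v_old except all select * from v_new ;  -- rows v_old has extra
select * from v_new except all select * from v_old ;  -- rows v_new has extra

Both returning no rows means the two results are identical as multisets.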

Cheers,

Jeff


Re: [GENERAL] Re: Easiest way to compare the results of two queries row by row and column by column

2013-06-20 Thread Jason Long
Thank you.  I will give it a try.  I have never used WITH before.

Thank you for the tips.

On Thu, 2013-06-20 at 16:05 -0700, David Johnston wrote: 

 Jason Long-2 wrote
  Can someone suggest the easiest way to compare the results from two
  queries to make sure they are identical?
 
 First thing that comes to mind:
 
  WITH 
before_qry (col1, col2, col3) AS ( VALUES (1,1,1),(2,2,2),(3,3,3) )
  , after_qry  (col1, col2, col3) AS ( VALUES (1,1,1),(2,2,2),(3,3,3) )
  , before_array AS (SELECT array_agg(before_qry) AS before_agg_array
 FROM before_qry)
  , after_array  AS (SELECT array_agg(after_qry) AS after_agg_array FROM
 after_qry)
  SELECT *, before_agg_array = after_agg_array
  FROM before_array CROSS JOIN after_array
 
 Basically turn the resultsets into arrays (of composites) and then see if
 the arrays are the same.  This has issues with respect to column names and
 comparable datatypes (i.e., if one column is bigint and the other is integer
 they still compare equally).
 
 One thought would be to only allow a view name (and possibly, separately, the
 ORDER BY clause).  Catalog lookups can be used to check for identical view
 output types.
 
 No idea if something like this exists and is readily available.
 
 David J.
 
 
 
 
 
 
 
 




Re: [GENERAL] Problem with left join when moving a column to another table

2013-06-20 Thread David Johnston
Jason Long-2 wrote
 Jason Long-2 wrote
 
 
 There is a unique constraint on the real price table.  I hadn't thought
 of how I will enforce the constraint across two tables.
 size_id and area_id will have to be unique across both
 t_price_base_table and t_price_original_with_area_id.  I will want to
 drop area_id from t_price_original_with_area_id.
 
 What is the best way to implement the cross table unique constraint?

Don't.

If size+area is a unique constraint then there should be a table that
defines valid pairs and creates a PRIMARY KEY over them.
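
A minimal sketch of that shape (table and column names are just
illustrative):

CREATE TABLE size_area (
    size_id bigint NOT NULL,
    area_id bigint NOT NULL,
    PRIMARY KEY (size_id, area_id)
);

-- each price row then references a valid pair instead of carrying its own:
-- FOREIGN KEY (size_id, area_id) REFERENCES size_area (size_id, area_id)

Uniqueness of the pair is then enforced in exactly one place.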

Per my original comment your issue isn't JOINs (well, your biggest issue
anyway) but your model.  The fact that you couldn't write a good query
simply exposed the problems in the model.  This is not uncommon.

I would need a lot more information (and time) than I have now to offer any
design thoughts on your schema; though I do find the unique constraint over
size+area to be unusual - as well as using that as a foreign key from the
item table.  You haven't specified the domain for this model but, using homes
as an example, I would use a 'model' table with model_id, size, area as
columns.  A particular house would then link in model and price.  You
could possibly further restrict that certain models can only sell for
certain prices if necessary - in which case you would have model_price and
possibly house_model_price tables (the latter could be an FK).
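
A rough sketch of that shape, using the homes example (all names
hypothetical):

CREATE TABLE model (
    model_id bigint PRIMARY KEY,
    size_id  bigint NOT NULL,
    area_id  bigint NOT NULL,
    UNIQUE (size_id, area_id)
);

CREATE TABLE house (
    house_id bigint PRIMARY KEY,
    model_id bigint NOT NULL REFERENCES model,
    price    numeric NOT NULL
);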

David J.












Re: [GENERAL] LDAP authentication timing out

2013-06-20 Thread James Sewell
Hey,

Thanks for the reply Magnus.

I'm getting some packet captures now - I just thought I'd throw this out
there in case anyone else had faced similar problems.
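
For anyone wanting to do the same, this is roughly the capture I'm running
(a sketch - it assumes plain LDAP on port 389; adjust for LDAPS on 636):

tcpdump -i any -s 0 -w ldap.pcap port 389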

This is EDB PPAS, I'm following up with them in parallel.

Cheers,
James Sewell

James Sewell
Solutions Architect
_


Level 2, 50 Queen St,
Melbourne, VIC, 3000

P: 03 8370 8000   F: 03 8370 8099  W: www.lisasoft.com



On Thu, Jun 20, 2013 at 6:30 PM, Magnus Hagander mag...@hagander.net wrote:

 On Thu, Jun 20, 2013 at 7:24 AM, James Sewell 
 james.sew...@lisasoft.com wrote:

 Hello All,

 I have the following config:

 host  samerole  +myrole  samenet  ldap
 ldapserver=ldap1,ldap2,ldap3 ldapbinddn=mybinddn
 ldapbindpasswd=mypass ldapbasedn=mybase ldapsearchattribute=myatt

 Usually auth works perfectly with LDAP (starting a session from psql
 using an LDAP connection, authenticating with the LDAP password, then
 exiting straight away); I see this:

 2013-06-20 15:19:53 EST DEBUG:  edb-postgres child[15901]: starting with (
 2013-06-20 15:19:53 EST DEBUG:  forked new backend, pid=15901 socket=10
 2013-06-20 15:19:53 EST DEBUG:  edb-postgres
 2013-06-20 15:19:53 EST DEBUG:  dccn
 2013-06-20 15:19:53 EST DEBUG:  )
 2013-06-20 15:19:53 EST DEBUG:  InitPostgres
 2013-06-20 15:19:53 EST DEBUG:  my backend ID is 1
 2013-06-20 15:19:53 EST DEBUG:  StartTransaction
 2013-06-20 15:19:53 EST DEBUG:  name: unnamed; blockState:   DEFAULT;
 state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:
 2013-06-20 15:19:53 EST DEBUG:  received password packet
 2013-06-20 15:19:53 EST DEBUG:  CommitTransaction
 2013-06-20 15:19:53 EST DEBUG:  name: unnamed; blockState:   STARTED;
 state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:
 2013-06-20 15:19:56 EST DEBUG:  shmem_exit(0): 7 callbacks to make
 2013-06-20 15:19:56 EST DEBUG:  proc_exit(0): 3 callbacks to make
 2013-06-20 15:19:56 EST DEBUG:  exit(0)
 2013-06-20 15:19:56 EST DEBUG:  shmem_exit(-1): 0 callbacks to make
 2013-06-20 15:19:56 EST DEBUG:  proc_exit(-1): 0 callbacks to make
 2013-06-20 15:19:56 EST DEBUG:  reaping dead processes
 2013-06-20 15:19:56 EST DEBUG:  server process (PID 15901) exited with
 exit code 0

 However around 10% of the time (although this varies) the session hangs
 after I type in my password until the auth timeout, and I see this:

 2013-06-20 15:07:46 EST DEBUG:  forked new backend, pid=15587 socket=10
 2013-06-20 15:07:46 EST DEBUG:  edb-postgres child[15587]: starting with (
 2013-06-20 15:07:46 EST DEBUG:  edb-postgres
 2013-06-20 15:07:46 EST DEBUG:  dccn
 2013-06-20 15:07:46 EST DEBUG:  )
 2013-06-20 15:07:46 EST DEBUG:  InitPostgres
 2013-06-20 15:07:46 EST DEBUG:  my backend ID is 1
 2013-06-20 15:07:46 EST DEBUG:  StartTransaction
 2013-06-20 15:07:46 EST DEBUG:  name: unnamed; blockState:   DEFAULT;
 state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:
 2013-06-20 15:07:46 EST DEBUG:  received password packet
 2013-06-20 15:08:46 EST DEBUG:  shmem_exit(1): 7 callbacks to make
 2013-06-20 15:08:46 EST DEBUG:  proc_exit(1): 3 callbacks to make
 2013-06-20 15:08:46 EST DEBUG:  exit(1)
 2013-06-20 15:08:46 EST DEBUG:  shmem_exit(-1): 0 callbacks to make
 2013-06-20 15:08:46 EST DEBUG:  proc_exit(-1): 0 callbacks to make
 2013-06-20 15:08:46 EST DEBUG:  reaping dead processes
 2013-06-20 15:08:46 EST DEBUG:  server process (PID 15587) exited with
 exit code 1

 Anyone have any ideas? I never see this with MD5.

 I can run multiple quick-fire binds from an LDAP application with the same
 bind DN with no problems.


 Sounds like an issue either with your ldap server, your network or the
 ldap client library. But it's kind of hard to tell. You're probably best
 off getting a network trace of the traffic between the ldap server and
 postgres, to see how far it gets at all  - that's usually a good pointer
 when it comes to timeouts.

 Also, what version of postgres? (It looks from the names as though this
 might be EDB Advanced Server and not actually postgres - in that case you
 might be better off talking to the EDB people; they may have made some
 modifications to the ldap code.)

 What OS?
 Versions?
 What ldap client and version?
 What ldap server?

 --
  Magnus Hagander
  Me: http://www.hagander.net/
  Work: http://www.redpill-linpro.com/



Re: [GENERAL] Snapshot backups

2013-06-20 Thread James Sewell
Thanks Magnus,

Could you elaborate a bit more on this?

I've been having a look at do_pg_start_backup() and I can't really see
anything apart from enabling full page writes and running a checkpoint to
avoid getting a torn page. I could be missing something easily though, as
I'm not familiar with the codebase.

do_pg_stop_backup() isn't really of consequence, as the backup is taken
before this - so any restore is to a point in time before this as well.

I was under the impression a restore was (more or less) the same as a crash
recovery, and logically it seems like a PGDATA snapshot is equivalent to a
crash/restart (the disk at a discrete point in time).

I can understand if log replay might take longer, but I am struggling to
see how it could result in an inconsistent state?

As I said, I know this isn't best practice - I just want to understand how
it works.
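
For reference, the sequence I'd otherwise be wrapping the snapshots in (a
sketch, run as superuser; the label is arbitrary):

SELECT pg_start_backup('san-snapshot', true);
-- snapshot the PGDATA and XLOG/ARCHIVELOG volumes here
SELECT pg_stop_backup();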

Cheers,



James Sewell
Solutions Architect
_


Level 2, 50 Queen St,
Melbourne, VIC, 3000

P: 03 8370 8000   F: 03 8370 8099  W: www.lisasoft.com



On Thu, Jun 20, 2013 at 6:34 PM, Magnus Hagander mag...@hagander.net wrote:


 On Thu, Jun 20, 2013 at 8:45 AM, James Sewell 
  james.sew...@lisasoft.com wrote:

 Hey All,

 This is a message to confirm my thoughts / validate a possible approach.

 In a situation where PGDATA and {XLOG, ARCHIVELOG} are on different
 SAN/NAS volumes and a backup is to be initiated do pg_start_backup and
 pg_stop_backup need to be used?

 I am using snapshots of each volume for backup.

 My thinking is that they are not needed (although I realise it is good
 practice).

 As far as I can tell all they are doing is something like:

 pg_start_backup:
   - create backup label
   - trigger checkpoint

 pg_stop_backup
   - remove backup label file
   - creates backup history file
   - trigger log switch

 There is nothing in here that is *required* from a backup point of view.
 Am I missing anything?


 The backup functions also set internal state in the database, so you can't
 just replace it with doing those operations manually.  You do need to call
 those functions.

 --
  Magnus Hagander
  Me: http://www.hagander.net/
  Work: http://www.redpill-linpro.com/



Re: [GENERAL] Tow kinds of different result while using create index concurrently

2013-06-20 Thread 高健
Thanks Jeff

But what I can't understand is:
In my first test, the create index concurrently works well.
In my second test, the create index concurrently does not work.

The difference is only in ecpg's select statement:
one uses a host variable of char (its value is the integer 14) in the
select statement, while the other is just a simple select.

If the transaction could potentially use the index, that should be the same
in my first test and my second test.

My customer wants to use PG in their 7x24 environment, while rebuilding
indexes periodically.
If I can't do that on PG, it really confuses me...

sincerely yours
Jian

2013/6/21 Jeff Janes jeff.ja...@gmail.com

 On Thu, Jun 20, 2013 at 1:27 AM, 高健 luckyjack...@gmail.com wrote:

 Hello:



 I  have question about PG's create index concurrently. I think it is a
 bug perhaps.



 I  make two tables tab01 and tab02, they have no relationships.

 I think create index concurrently  on tab02 will not be influenced by
 transaction on tab01.

 But the result differs:


 This is expected.  In order to not interfere with normal activity, a
 concurrent index build has to volunteer to be blocked by such activity
 instead.  From the doc: When this option is used, PostgreSQL must
 perform two scans of the table, and in addition it must wait for all
 existing transactions that could potentially use the index to terminate.

 Now in your case, perhaps the argument could be made that the transaction
 hosting the 1st concurrent build could not potentially use the 2nd-building
 index, but there is no convenient way for PostgreSQL to detect that fact.

  Cheers,

 Jeff
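
A two-session sketch of the waiting behaviour described above (table and
index names are made up):

-- session 1: leave a transaction open that holds a snapshot
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM tab01;

-- session 2: waits until session 1 commits or rolls back
CREATE INDEX CONCURRENTLY idx_tab02_col ON tab02 (col);

-- session 1 again: releasing the snapshot lets session 2 finish
COMMIT;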



[GENERAL] Circular references

2013-06-20 Thread Melvin Call
I was given a dump of an existing remote schema and database, and the
restore on my local system failed. Looking into it, I found a circular
parent-child/child-parent relationship, and I don't believe this existing
structure is viable. To summarize, the organization entity has an attribute
of creator, which is a foreign key to the user table, but the user has to
belong to an organization, which is a foreign key to the organization
table. Since neither is nullable, there is no way to create even an
initial record. My guess is that one or both of the tables were first
populated, and the FK constraint(s) created afterwards.

So, my question is just a request to confirm that I haven't lost my mind
and/or am missing something. Is there any way this could work? The relevant
table structures are listed below.

Thanks a million,
Melvin

\d organization
  Table project.organization
 Column          |           Type           | Modifiers
-----------------+--------------------------+-----------
 organization_id | bigint   | not null default
nextval('organization_organization_id_seq'::regclass)
 name| character varying(300)   | not null
 type_id | bigint   | not null
 description | text | not null default '-'::text
 website | character varying(500)   | default '-'::character varying
 date_created| timestamp with time zone | not null default
('now'::text)::date
 created_by  | bigint   | not null
 date_updated| timestamp with time zone |
 updated_by  | bigint   |
Indexes:
p_key_org_id PRIMARY KEY, btree (organization_id)
Foreign-key constraints:
f_key_org_org_type_id FOREIGN KEY (type_id) REFERENCES
organization_type(type_id)
f_key_org_user_created_by FOREIGN KEY (created_by) REFERENCES
user(user_id)
f_key_org_user_updated_by FOREIGN KEY (updated_by) REFERENCES
user(user_id)
Referenced by:
TABLE program CONSTRAINT f_key_program_org_id FOREIGN KEY
(organization_id) REFERENCES organization(organization_id)
TABLE user CONSTRAINT f_key_user_org_id FOREIGN KEY
(organization_id) REFERENCES organization(organization_id)


\d user
  Table project.user
 Column          |           Type           | Modifiers
-----------------+--------------------------+-----------
 username| character varying(100)   | not null
 password| character varying(100)   | not null
 date_created| timestamp with time zone | not null
 date_updated| timestamp with time zone |
 updated_by  | bigint   |
 created_by  | bigint   | not null
 person_id   | bigint   | not null
 organization_id | bigint   | not null
 user_id | bigint   | not null default
nextval('user_user_id_seq'::regclass)
 user_role_id| bigint   | not null
Indexes:
p_key_user_id PRIMARY KEY, btree (user_id)
Foreign-key constraints:
f_key_user_org_id FOREIGN KEY (organization_id) REFERENCES
organization(organization_id)
f_key_user_person_id FOREIGN KEY (person_id) REFERENCES
person(person_id)
f_key_user_user_role_id FOREIGN KEY (user_role_id) REFERENCES
user_role(user_role_id)
Referenced by:
TABLE observation_parameter CONSTRAINT
f_key_observation_param_user_created_by FOREIGN KEY (created_by)
REFERENCES user(user_id)
TABLE observation_parameter CONSTRAINT
f_key_observation_param_user_updated_by FOREIGN KEY (updated_by)
REFERENCES user(user_id)
TABLE observation_tuple CONSTRAINT
f_key_observation_tuple_user_created_by FOREIGN KEY (created_by)
REFERENCES user(user_id)
TABLE observation_tuple CONSTRAINT
f_key_observation_tuple_user_updated_by FOREIGN KEY (updated_by)
REFERENCES user(user_id)
TABLE organization CONSTRAINT f_key_org_user_created_by FOREIGN KEY
(created_by) REFERENCES user(user_id)
TABLE organization CONSTRAINT f_key_org_user_updated_by FOREIGN KEY
(updated_by) REFERENCES user(user_id)
TABLE program_admin CONSTRAINT f_key_prog_admin_user_id FOREIGN KEY
(user_id) REFERENCES user(user_id)
TABLE program CONSTRAINT f_key_program_user_created_by FOREIGN KEY
(created_by) REFERENCES user(user_id)
TABLE program CONSTRAINT f_key_program_user_owner_id FOREIGN KEY
(owner_id) REFERENCES user(user_id)
TABLE program CONSTRAINT f_key_program_user_updated_by FOREIGN KEY
(updated_by) REFERENCES user(user_id)


Re: [GENERAL] Archiving and recovering pg_stat_tmp

2013-06-20 Thread Amit Langote
On Thu, Jun 20, 2013 at 8:32 PM, Amit Langote amitlangot...@gmail.com wrote:
 On Thu, Jun 20, 2013 at 6:05 PM, Sameer Thakur samthaku...@gmail.com wrote:
Documentation mentions following:
 Thanks, but how does this relate to statistics recovery wrt PITR?

 Upon clean server shutdown, you have the statistics files stored in
 the pg_stat (previously global/) directory, which persists across
 server restarts and which might even be applicable to a PITR, as far as
 I can understand. This would need some testing, though, to be sure
 that is the case.

So as I said, I gave it a try. Correct me if I am wrong in
understanding your requirement:

You need to have statistics recovered to the same state as they were
when you took the FS level backup of your database after shutting down
the server.

Shutting down is important since that is when you would have
statistics files ($PGDATA/pg_stat/*.stat) available to backup. They
capture the statistics as of when the server was shut down.

Now, later you want to restore to that state (the one in the above
backup) with statistics as in that backed-up snapshot. So, you write a
recovery.conf in that backup directory with a restore_command which
reads from an archive you have set up for PITR purposes. When you
start the server using the backup directory, it enters archive recovery
mode and then comes online. You may be wondering what the state of
statistics is at that point. When I tried, I got the same statistics as
in the file system snapshot. That is, the archive recovery (which brings
the database state forward to a later point in time) did not in any way
affect the statistics.

What I did:

1) Collect a few statistics in a result file from a currently running
server. For example, the result of the query select * from
pg_stat_user_tables, into say stats1.txt

2) Clean shut down the server. Take a snapshot of the data directory,
cp -r $pgdata $pgbkp

3) Start the server and run a few pgbench tests so that statistics
change. Again collect stats, same as in (1) into say stats2.txt

4) Write $pgbkp/recovery.conf with an appropriate restore_command and,
optionally, a recovery target (for PITR), which I did not set. Note that
archiving is enabled; a minimal sketch follows this list.

5) Start the server using -D $pgbkp (may be with port changed for the
sake of testing).

6) After server started in (5) is done recovering and comes online,
collect stats again into say stats3.txt

7) Compare stats3.txt with stats1.txt and stats2.txt.

8) I observed that stats3.txt == stats1.txt. That is, stats after
recovery are the same as they were when the snapshot was taken.
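
A minimal sketch of the recovery.conf used in step 4, assuming
archive_command copied WAL segments to /mnt/archive (path hypothetical):

restore_command = 'cp /mnt/archive/%f %p'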



--
Amit Langote




Re: [GENERAL] Circular references

2013-06-20 Thread Tom Lane
Melvin Call melvincall...@gmail.com writes:
 I was given a dump of an existing remote schema and database, and the
 restore on my local system failed. Looking into it, I found a circular
 parent-child/child-parent relationship, and I don't believe this existing
 structure is viable. To summarize, the organization entity has an attribute
 of creator, which is a foreign key to the user table, but the user has to
 belong to an organization, which is a foreign key to the organization
 table. Since neither are nullable, there is no way to create even an
 initial record. My guess is one or both of the tables was first populated,
 and then the FK constraint(s) created.

 So, my question is just a request to confirm that I haven't lost my mind
 and/or am missing something. Is there any way this could work? The relevant
 table structures are listed below.

I think you're right: there's no way that such a structure would be
very useful in practice, because inserting any new data would have a
chicken-vs-egg problem.  However, I'm curious about your statement that
dump/restore failed.  I tried this test case:

regression=# create database bogus;
CREATE DATABASE
regression=# \c bogus
You are now connected to database bogus as user postgres.
bogus=# create table t1 (f1 int primary key);
CREATE TABLE
bogus=# insert into t1 values (1),(2);
INSERT 0 2
bogus=# create table t2 (f1 int primary key);
CREATE TABLE
bogus=# insert into t2 values (1),(2);
INSERT 0 2
bogus=# alter table t1 add foreign key (f1) references t2;
ALTER TABLE
bogus=# alter table t2 add foreign key (f1) references t1;
ALTER TABLE

and then did a pg_dump and restore; and for me, the restore went
through just fine, because the dump script did exactly the same
thing, ie issue ALTER ADD FOREIGN KEY commands only after populating
the tables.  Was your dump from an ancient version of pg_dump?
Or maybe you tried to use separate schema and data dumps?
If neither, could you show a self-contained case where it fails?
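
(As an aside: if someone actually wanted such a cycle to accept new data,
deferrable constraints are the usual way out - a sketch:

create table a (id int primary key, b_id int not null);
create table b (id int primary key, a_id int not null);
alter table a add foreign key (b_id) references b deferrable initially deferred;
alter table b add foreign key (a_id) references a deferrable initially deferred;
begin;
insert into a values (1, 1);
insert into b values (1, 1);
commit;  -- the FK checks run here, once both rows exist
)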

regards, tom lane




Re: [GENERAL] Problem with left join when moving a column to another table

2013-06-20 Thread Jason Long



On Thu, 2013-06-20 at 16:22 -0700, David Johnston wrote: 

 Jason Long-2 wrote
  Jason Long-2 wrote
  
  
  There is a unique constraint on the real price table.  I hadn't thought
  of how I will enforce the constraint across two tables.
  size_id and area_id will have to be unique across both
  t_price_base_table and t_price_original_with_area_id.  I will want to
  drop area_id from t_price_original_with_area_id.
  
  What is the best way to implement the cross table unique constraint?
 
 Don't.
 
 If size+area is a unique constraint then there should be a table that
 defines valid pairs and creates a PRIMARY KEY over them.
 
 Per my original comment your issue isn't JOINs (well, your biggest issue
 anyway) but your model.  The fact that you couldn't write a good query
 simply exposed the problems in the model.  This is not uncommon.
 
 I would need a lot more information (and time) than I have now to offer any
 design thoughts on your schema; though I do find the unique constraint over
 size+area to be unusual - as well as using that as a foreign key from the
 item table.  You haven't specified the domain for this model but using homes
 as an example I would use a 'model' table with model_id, size, area as
 columns.  A particular house would then link in model and price.  You
 could possibly further restrict that certain models can only sell for
 certain prices if necessary - in which case you would have model_price and
  possibly house_model_price tables (the latter could be an FK).
 
 David J.
 
 
 
 
 
 
 
 
 
 

David,

I really appreciate your help.  I had not used WITH or any of the syntax
you showed me before.  Pretty cool.
I normally just write a bunch of views to build complex queries.

Does the syntax you showed me have performance benefits vs joining a
bunch of views together?

I spent way too much time trying to get the query to work, and all I
needed to do was write a view:

create or replace view price.v_offerprice_pipe as
select op.id,  
   op.price,
   op.active,
   op.stditem,
   op.version,
   opp.size_id,
   opp.weight_id,
   opp.grade_id,
   opp.endfinish_id,
   opp.manufacturer_id,
   opp.condition_id,
   opp.area_id
from price.t_offerprice_pipe opp
join price.t_offerprice op on op.id=opp.id;

This allowed me to move  (price,  active, stditem, version) to the base
table without breaking any of my views with very minimal change to the
view definitions.

I just had to replace any references to price.t_offerprice_pipe with the
view price.v_offerprice_pipe in any of the views that were complaining
about dropping the columns.

I decided not to move area_id to the base table for now.  Without being
able to properly do a cross table unique constraint, it will stay where
it is currently.


Re: [GENERAL] Problem with left join when moving a column to another table

2013-06-20 Thread David Johnston
Jason Long-2 wrote
 Does the syntax you showed me have performance benefits vs joining a
 bunch of views together?

As a general rule a CTE/WITH is going to perform worse than the
equivalent view definition - depending on how the view is actually used in
the query, of course.  They both have their place.  A CTE/WITH is basically
a per-query VIEW, though there is an optimization barrier that doesn't allow
the main query's WHERE clause to limit the CTE's rows the way a view
normally would where possible.  Because of this I'll occasionally find the
need to specify redundant WHERE clauses inside the CTE to get decent
performance on large tables - mostly for my interactive queries.
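
For example (big_table and created are hypothetical names; the filter is
repeated inside the CTE by hand because the planner will not push it down):

WITH recent AS (
    SELECT * FROM big_table
    WHERE created > now() - interval '1 day'
)
SELECT *
FROM recent
WHERE created > now() - interval '1 day';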

David J.








Re: [GENERAL] Archiving and recovering pg_stat_tmp

2013-06-20 Thread Sameer Thakur
 You need to have statistics recovered to the same state as they were
 when you took the FS level backup of your database after shutting down
 the server.

Correct


 Shutting down is important since that is when you would have
 statistics files ($PGDATA/pg_stat/*.stat) available to backup. They
 capture the statistics as of when the server was shut down.
 Agreed


   What I did:


 1) Collect a few statistics in a result file from a currently running
 server. For example, the result of the query select * from
 pg_stat_user_tables, into say stats1.txt

 2) Clean shut down the server. Take a snapshot of the data directory,
 cp -r $pgdata $pgbkp

 3) Start the server and run a few pgbench tests so that statistics
 change. Again collect stats, same as in (1) into say stats2.txt

 4) Write $pgbkp/recovery.conf with appropriate restore_command and
 maybe recovery target (PITR), which I did not, though. Note that we
 have archiving enabled.

 5) Start the server using -D $pgbkp (may be with port changed for the
 sake of testing).

 6) After server started in (5) is done recovering and comes online,
 collect stats again into say stats3.txt

 7) Compare stats3.txt with stats1.txt and stats2.txt.

 8) I observed that stats3.txt == stats1.txt. That is stats after
 recovery are same as they were when the snapshot was taken.

 Thank you for all the effort! A question:

When the server was restarted in (5), which stats file was loaded - the one
from the stats1.txt state or the stats2.txt state? I think it must have been
the stats1.txt one, as stats3.txt = stats1.txt. What happens if the
stats2.txt-state file is loaded in (5) instead? I am trying to figure out
whether the server will reject a stats file from a different timeline than
the one it has been rolled back to.
regards
Sameer





Re: [GENERAL] Migration from DB2 to PostgreSQL

2013-06-20 Thread sachin kotwal
PostgreSQL has no such capability.  Unless you need that and 
want to code it yourself, the best solution would be to 
write a function that just ignores the third argument. 

For the time being I will write a function that just ignores the third
argument.

But if we really want to create a function like DB2's TO_CHAR() we need to
code it - for example, treating the third argument as a locale, or
formatting the string according to the third argument and returning it.
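
A minimal sketch of that stop-gap wrapper (assuming the numeric form of
TO_CHAR; the extra locale argument is accepted and ignored):

CREATE FUNCTION to_char(v numeric, fmt text, locale text)
RETURNS text
LANGUAGE sql STABLE
AS $$ SELECT to_char(v, fmt) $$;

The two-argument call inside the body still resolves to the built-in
pg_catalog.to_char, so there is no recursion.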




