[HACKERS] CREATE FOREIGN TABLE ( ... LIKE ... )

2013-10-08 Thread David Fetter
Folks,

Please find attached a patch implementing and documenting, to some
extent, $subject.  I did this in aid of being able to import SQL
standard catalogs and other entities where a known example could
provide a template for a foreign table.
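
For illustration, usage might look like this (a hypothetical sketch; the
server, table, and option names are mine, assuming a postgres_fdw server
named "remote", and are not taken from the patch):

    CREATE TABLE template_t (id integer, payload text);

    CREATE FOREIGN TABLE ft (LIKE template_t)
        SERVER remote
        OPTIONS (table_name 'template_t');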

Should there be errhint()s, too?  Should we pile up all such errors
and mention them at the end rather than simply bailing on the first
one?

TBD: regression tests.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
diff --git a/doc/src/sgml/ref/create_foreign_table.sgml b/doc/src/sgml/ref/create_foreign_table.sgml
index 1ef4b5e..4a8e265 100644
--- a/doc/src/sgml/ref/create_foreign_table.sgml
+++ b/doc/src/sgml/ref/create_foreign_table.sgml
@@ -20,6 +20,7 @@
 <synopsis>
 CREATE FOREIGN TABLE [ IF NOT EXISTS ] <replaceable class="PARAMETER">table_name</replaceable> ( [
 <replaceable class="PARAMETER">column_name</replaceable> <replaceable class="PARAMETER">data_type</replaceable> [ OPTIONS ( <replaceable class="PARAMETER">option</replaceable> '<replaceable class="PARAMETER">value</replaceable>' [, ... ] ) ] [ COLLATE <replaceable>collation</replaceable> ] [ <replaceable class="PARAMETER">column_constraint</replaceable> [ ... ] ]
+| LIKE <replaceable>source_table</replaceable> [ <replaceable>like_option</replaceable> ... ] }
 [, ... ]
 ] )
   SERVER <replaceable class="parameter">server_name</replaceable>
@@ -114,6 +115,15 @@ CREATE FOREIGN TABLE [ IF NOT EXISTS ] <replaceable class="PARAMETER">table_name
    </varlistentry>
 
    <varlistentry>
+    <term><literal>LIKE <replaceable>source_table</replaceable></literal></term>
+    <listitem>
+     <para>
+      The <literal>LIKE</literal> clause specifies a table from which
+      the new foreign table automatically copies all column names and their data types.
+     </para>
+   </varlistentry>
+
+   <varlistentry>
    <term><literal>NOT NULL</></term>
    <listitem>
     <para>
diff --git a/src/backend/parser/parse_utilcmd.c b/src/backend/parser/parse_utilcmd.c
index 19d19e5f..152fa01 100644
--- a/src/backend/parser/parse_utilcmd.c
+++ b/src/backend/parser/parse_utilcmd.c
@@ -649,7 +649,7 @@ transformTableConstraint(CreateStmtContext *cxt, Constraint *constraint)
 /*
  * transformTableLikeClause
  *
- * Change the LIKE <srctable> portion of a CREATE TABLE statement into
+ * Change the LIKE <srctable> portion of a CREATE [FOREIGN] TABLE statement into
  * column definitions which recreate the user defined column portions of
  * <srctable>.
  */
@@ -669,10 +669,12 @@ transformTableLikeClause(CreateStmtContext *cxt, TableLikeClause *table_like_cla
 					table_like_clause->relation->location);
 
 	/* we could support LIKE in many cases, but worry about it another day */
+	/* Let's see whether just dropping this enables LIKE :)
 	if (cxt->isforeign)
 		ereport(ERROR,
 				(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
 				 errmsg("LIKE is not supported for creating foreign tables")));
+	*/
 
 	relation = relation_openrv(table_like_clause->relation, AccessShareLock);
 
@@ -689,6 +691,25 @@ transformTableLikeClause(CreateStmtContext *cxt, TableLikeClause *table_like_cla
 	cancel_parser_errposition_callback(pcbstate);
 
 	/*
+	 * For foreign tables, disallow some options.
+	 */
+	if (cxt->isforeign)
+	{
+		if (table_like_clause->options & CREATE_TABLE_LIKE_CONSTRAINTS)
+			ereport(ERROR,
+					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					 errmsg("foreign tables do not support LIKE INCLUDING CONSTRAINTS")));
+		else if (table_like_clause->options & CREATE_TABLE_LIKE_INDEXES)
+			ereport(ERROR,
+					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					 errmsg("foreign tables do not support LIKE INCLUDING INDEXES")));
+		else if (table_like_clause->options & CREATE_TABLE_LIKE_STORAGE)
+			ereport(ERROR,
+					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
+					 errmsg("foreign tables do not support LIKE INCLUDING STORAGE")));
+	}
+
+	/*
 	 * Check for privileges
 	 */
 	if (relation->rd_rel->relkind == RELKIND_COMPOSITE_TYPE)



Re: [HACKERS] [COMMITTERS] pgsql: Add DISCARD SEQUENCES command.

2013-10-08 Thread Michael Paquier
On Tue, Oct 8, 2013 at 4:57 AM, Robert Haas robertmh...@gmail.com wrote:
> On Sat, Oct 5, 2013 at 8:10 AM, Michael Paquier
> michael.paqu...@gmail.com wrote:
>> Here is the test case failing:
>> =# create sequence foo;
>> CREATE SEQUENCE
>> =# select nextval('foo');
>>  nextval
>> ---------
>>        1
>> (1 row)
>> =# discard sequences ;
>> DISCARD SEQUENCES
>> =# select currval('foo');
>> ERROR:  55000: currval of sequence "foo" is not yet defined in this session
>> LOCATION:  currval_oid, sequence.c:780
>> =# select lastval();
>> The connection to the server was lost. Attempting reset: Failed.
>
> Thanks.  I have pushed a fix that I hope will be sufficient.
It is fine now. Thanks for the fix.
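
For the record, the expected behavior after the fix (my assumption, not
re-verified here) is that lastval() now raises its usual error instead of
taking the backend down:

    =# select lastval();
    ERROR:  lastval is not yet defined in this session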
-- 
Michael




[HACKERS] drop-index-concurrently-1 on master fails at serializable

2013-10-08 Thread Kevin Grittner
*** 
/home/kgrittn/pg/master/src/test/isolation/expected/drop-index-concurrently-1.out
   2013-07-10 14:58:41.641205557 -0500
--- 
/home/kgrittn/pg/master/src/test/isolation/results/drop-index-concurrently-1.out
    2013-10-07 12:20:49.269277851 -0500
***
*** 30,40 ****
  id data  
 
  34 34    
- 134    34    
  step selects: EXECUTE getrow_seq;
  id data  
 
  34 34    
- 134    34    
  step end: COMMIT;
  step drop: ... completed
--- 30,38 ----

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Re: custom hash-based COUNT(DISTINCT) aggregate - unexpectedly high memory consumption

2013-10-08 Thread Atri Sharma


Sent from my iPad

 On 08-Oct-2013, at 10:41, Claudio Freire klaussfre...@gmail.com wrote:
 
 On Tue, Oct 8, 2013 at 1:23 AM, Atri Sharma atri.j...@gmail.com wrote:
 Consider the aspects associated with open addressing. Open addressing
 can quickly lead to growth in the main table. Also, chaining is a much
 cleaner way of collision resolution, IMHO.
 
 What do you mean by growth in the main table?
 
 Sorry, I should have been more verbose.
 
 AFAIK, Open addressing can be slower with a load factor approaching 1
 as compared to chaining. Also, I feel that implementation of open
 addressing can be more complicated as we have to deal with deletes
 etc.
 
 
 Deletes for a hash aggregate?

Yeah, that doesn't apply here. I was just listing out the demerits of open 
addressing :)

My point is, it is not wise to unnecessarily complicate matters by shifting to 
open addressing. If we want, we could look at changing the data structure used 
for chaining, but chaining is better for our requirements IMHO.

Regards,

Atri 



Re: [HACKERS] Patch for fail-back without fresh backup

2013-10-08 Thread Sawada Masahiko
On Fri, Oct 4, 2013 at 4:32 PM, Fujii Masao masao.fu...@gmail.com wrote:

 You added several checks into SyncRepWaitForLSN() so that it can handle both
 synchronous_transfer=data_flush and =commit. This change made the source code
 of the function very complicated, I'm afraid. To simplify the source code,
 what about just adding new wait-for-lsn function for data_flush instead of
 changing SyncRepWaitForLSN()? Obviously that new function and
 SyncRepWaitForLSN()
 have the common part. I think that it should be extracted as separate 
 function.

Thank you for the review and comments!

Yes, I agree with you.
I attached the v12 patch, which has been modified based on the above suggestions.
- Added new functions SyncRepTransferWaitForLSN() and SyncRepWait().
  SyncRepTransferWaitForLSN() is called on data page flush; OTOH,
SyncRepWaitForLSN() is called on transaction commit.
  Both functions call SyncRepWait() after checking whether a sync
commit/transfer is requested.
  In practice, the server will wait in SyncRepWait().


 + * Note that if sync transfer is requested, we can't do regular maintenance
 + * until standbys connect.
   */
 -if (synchronous_commit > SYNCHRONOUS_COMMIT_LOCAL_FLUSH)
 +if (synchronous_commit > SYNCHRONOUS_COMMIT_LOCAL_FLUSH &&
 +    !SyncTransRequested())

 Per discussion with Pavan, ISTM we don't need to avoid setting
 synchronous_commit
 to local even if synchronous_transfer is data_flush. But you did that here. 
 Why?

I made a mistake. I have removed it.


 When synchronous_transfer = data_flush, anti-wraparound vacuum can be blocked.
 Is this safe?


In the new version of the patch, when synchronous_transfer = data_flush/all
AND synchronous_standby_names is set, vacuum is blocked.
This behaviour of synchronous_transfer is similar to synchronous_commit.
Should anti-wraparound vacuum be allowed even if
synchronous_transfer = data_flush/all?
If so, should it also be allowed to flush data pages while vacuuming?

 +#synchronous_transfer = commit    # data page synchronization level
 +                                  # commit, data_flush or all

 This comment seems confusing. I think that this parameter specifies when to
 wait for replication.

 +typedef enum
 +{
 +    SYNCHRONOUS_TRANSFER_COMMIT,        /* no wait for flush data page */
 +    SYNCHRONOUS_TRANSFER_DATA_FLUSH,    /* wait for data page flush only,
 +                                         * no wait for WAL */
 +    SYNCHRONOUS_TRANSFER_ALL            /* wait for data page flush and WAL */
 +} SynchronousTransferLevel;

 These comments also seem confusing. For example, I think that the meaning of
 SYNCHRONOUS_TRANSFER_COMMIT is something like wait for replication on
 transaction commit.

Those comments are modified in new patch.


 @@ -521,6 +531,13 @@ smgr_redo(XLogRecPtr lsn, XLogRecord *record)
   */
  XLogFlush(lsn);

 +/*
 + * If synchronous transfer is requested, wait for failback safe 
 standby
 + * to receive WAL up to lsn.
 + */
 +if (SyncTransRequested())
 +SyncRepWaitForLSN(lsn, true, true);

 If smgr_redo() is called only during recovery, SyncRepWaitForLSN() doesn't 
 need
 to be called here.

Thank you for info.
I have removed it at smgr_redo().

Regards,

---
Sawada Masahiko


synchronous_transfer_v12.patch
Description: Binary data



Re: [HACKERS] Re: custom hash-based COUNT(DISTINCT) aggregate - unexpectedly high memory consumption

2013-10-08 Thread Tomas Vondra
On 8 October 2013, 6:23, Atri Sharma wrote:
 Hi Tomas,


 Consider the aspects associated with open addressing. Open addressing
 can quickly lead to growth in the main table. Also, chaining is a much
 cleaner way of collision resolution, IMHO.

 What do you mean by growth in the main table?

 Sorry, I should have been more verbose.

 AFAIK, Open addressing can be slower with a load factor approaching 1
 as compared to chaining. Also, I feel that implementation of open
 addressing can be more complicated as we have to deal with deletes
 etc.

 I feel we can redesign our current chaining mechanism to have skip
 lists instead of singly linked lists. I experimented with it sometime
 back and I feel that it gives a stable performance with higher loads.

 Regards,

 Atri

OK, thanks for the explanation.

I've spent some time hacking this yesterday, the results are available in
a separate branch:

  https://github.com/tvondra/count_distinct/tree/open-addressing
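
For context, the extension provides a custom aggregate used in place of the
built-in COUNT(DISTINCT ...). Roughly like this (usage assumed from the
repository, with my own table and column names):

    CREATE EXTENSION count_distinct;
    SELECT count_distinct(user_id) FROM events;  -- instead of COUNT(DISTINCT user_id)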

The complexity of the implementation is pretty much comparable to
chaining. I assume it would get much more complex with handling deletes
etc., but that's not really necessary here.

However I'm unable to make it at least comparable to chaining. The trick
is that the hash table performs reasonably only until it's ~ 65-70% full,
it gets really slow after that. So to maintain performance comparable to
chaining, I'd have to keep the table below this threshold, effectively
wasting ~30% of memory.

And the topic of this thread was about decreasing the memory consumption,
so it seems to me open-addressing is not a good match here. I'll try a few
more things but I don't think it's going to fly.

I've made some significant improvements in the chaining version (in the
master branch), now getting about the memory consumption I've estimated.

Tomas





Re: [HACKERS] Patch for fail-back without fresh backup

2013-10-08 Thread Pavan Deolasee
On Tue, Oct 8, 2013 at 2:33 PM, Sawada Masahiko sawada.m...@gmail.comwrote:

 On Fri, Oct 4, 2013 at 4:32 PM, Fujii Masao masao.fu...@gmail.com wrote:
 
 I attached the v12 patch which have modified based on above suggestions.


There are still some parts of this design/patch which I am concerned about.

1. The design clubs synchronous standby and failback safe standby rather
very tightly. IIRC this is based on the feedback you received early, so my
apologies for raising it again so late.
 a. GUC synchrnous_standby_names is used to name synchronous as well as
failback safe standbys. I don't know if that will confuse users.
 b. synchronous_commit's value will also control whether a sync/async
failback safe standby wait for remote write or flush. Is that reasonable ?
Or should there be a different way to configure the failback safe standby's
WAL safety ?

2. With the current design/implementation, user can't configure a
synchronous and an async failback safe standby at the same time. I think we
discussed this earlier and there was an agreement on the limitation. Just
wanted to get that confirmed again.

3. SyncRepReleaseWaiters() does not know whether its waking up backends
waiting for sync rep or failback safe rep. Is that ok ? For example, I
found that the elog() message announcing next takeover emitted by the
function may look bad. Since changing synchronous_transfer requires server
restart, we can teach SyncRepReleaseWaiters() to look at that parameter to
figure out whether the standby is sync and/or failback safe standby.

4. The documentation still need more work to clearly explain the use case.

5. Have we done any sort of stress testing of the patch ? If there is a
bug, the data corruption at the master can go unnoticed. So IMHO we need
many crash recovery tests to ensure that the patch is functionally correct.

Thanks,
Pavan

-- 
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee


Re: [HACKERS] SSI freezing bug

2013-10-08 Thread Kevin Grittner
Heikki Linnakangas hlinnakan...@vmware.com wrote:

 A comment somewhere would be nice to explain why we're no longer worried
 about confusing an old tuple version with a new tuple in the same slot.
 Not sure where.

For now I counted on the commit message.  Perhaps we also want to
add something to the README file?

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Re: custom hash-based COUNT(DISTINCT) aggregate - unexpectedly high memory consumption

2013-10-08 Thread Atri Sharma


Sent from my iPad

 
 However I'm unable to make it at least comparable to chaining. The trick
 is that the hash table performs reasonably only until it's ~ 65-70% full,
 it gets really slow after that. So to maintain performance comparable to
 chaining, I'd have to keep the table below this threshold, effectively
 wasting ~30% of memory.
I expected that. AFAIK, open addressing gets slow when the load factor 
approaches 1. I feel this is what is happening here.
 
 And the topic of this thread was about decreasing the memory consumptions,
 so it seems to me open-addressing is not a good match here. I'll try a few
 more things but I don't think it's going to fly.
 
Yeah, I also feel that open addressing isn't the way to go for the problem here.

 I've made some significant improvements in the chaining version (in the
 master branch), now getting about the memory consumption I've estimated.
 
I agree, we can hope to reduce the memory consumption by making changes in the 
current chaining implementation. I would like to look into changing the data 
structure used for chaining from a singly linked list to maybe a skip list or 
something else.

Regards,

Atri



Re: [HACKERS] plpgsql.print_strict_params

2013-10-08 Thread Marko Tiikkaja

On 10/7/13 9:46 PM, Robert Haas wrote:

Looks good to me.  Committed.


Thanks.

Also huge thanks to Ian for a phenomenal first review. :-)


Regards,
Marko Tiikkaja




Re: [HACKERS] Patch for fail-back without fresh backup

2013-10-08 Thread Andres Freund
On 2013-10-08 15:07:02 +0530, Pavan Deolasee wrote:
 On Tue, Oct 8, 2013 at 2:33 PM, Sawada Masahiko sawada.m...@gmail.comwrote:
 
  On Fri, Oct 4, 2013 at 4:32 PM, Fujii Masao masao.fu...@gmail.com wrote:
  
  I attached the v12 patch which have modified based on above suggestions.
 
 
 There are still some parts of this design/patch which I am concerned about.
 
 1. The design clubs synchronous standby and failback safe standby rather
 very tightly. IIRC this is based on the feedback you received early, so my
 apologies for raising it again so late.

It is my impression that there still are several people having pretty
fundamental doubts about this approach in general. From what I remember
neither Heikki, Simon, Tom nor me were really convinced about this
approach.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Compression of full-page-writes

2013-10-08 Thread KONDO Mitsumasa

(2013/10/08 17:33), Haribabu kommi wrote:

The checkpoint_timeout and checkpoint_segments are increased to make sure no 
checkpoint happens during the test run.
Your settings easily trigger checkpoints even with checkpoint_segments = 256. I don't 
know the number of disks in your test server; in my test server, which has 4 magnetic 
disks (15k rpm), postgres generates 50 - 100 WAL segments per minute.


Also, I cannot understand your setting of sync_commit = off. This setting 
tends to cause a CPU bottleneck and data loss, and it is not typical in database usage.

Therefore, your test is not a fair comparison for Fujii's patch.

Going back to my DBT-2 benchmark, I have not gotten good performance (almost the same 
performance). So I am now checking for a bad hunk, a problem in my settings, or 
something wrong in Fujii's patch. I will try to send test results tonight.


Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center




Re: [HACKERS] Compression of full-page-writes

2013-10-08 Thread Andres Freund
On 2013-09-11 12:43:21 +0200, Andres Freund wrote:
> On 2013-09-11 19:39:14 +0900, Fujii Masao wrote:
>> * Benchmark
>>   pgbench -c 32 -j 4 -T 900 -M prepared
>>   scaling factor: 100
>>
>>   checkpoint_segments = 1024
>>   checkpoint_timeout = 5min
>>   (every checkpoint during benchmark was triggered by checkpoint_timeout)
>>
>> * Result
>>   [tps]
>>   1344.2 (full_page_writes = on)
>>   1605.9 (compress)
>>   1810.1 (off)
>>
>>   [the amount of WAL generated during running pgbench]
>>   4422 MB (on)
>>   1517 MB (compress)
>>    885 MB (off)
>>
>>   [time required to replay WAL generated during running pgbench]
>>   61s (on)        1209911 transactions were replayed,
>>                   recovery speed: 19834.6 transactions/sec
>>   39s (compress)  1445446 transactions were replayed,
>>                   recovery speed: 37062.7 transactions/sec
>>   37s (off)       1629235 transactions were replayed,
>>                   recovery speed: 44033.3 transactions/sec
>
> ISTM for those benchmarks you should use an absolute number of
> transactions, not one based on elapsed time. Otherwise the comparison
> isn't really meaningful.

I really think we need to see recovery time benchmarks with a constant
amount of transactions to judge this properly.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Patch for fail-back without fresh backup

2013-10-08 Thread Pavan Deolasee
On Tue, Oct 8, 2013 at 3:16 PM, Andres Freund and...@2ndquadrant.comwrote:



 It is my impression that there still are several people having pretty
 fundamental doubts about this approach in general. From what I remember
 neither Heikki, Simon, Tom nor me were really convinced about this
 approach.


IIRC you and Tom were particularly skeptical about the approach. But do you
see a technical flaw or a show stopper with the approach ? Heikki has
written pg_rewind which is really very cool. But it fails to handle the
hint bit updates which are not WAL logged unless of course checksums are
turned on. We can have a GUC controlled option to  turn WAL logging on for
hint bit updates and then use pg_rewind for the purpose. But I did not see
any agreement on that either. Performance implication of WAL logging every
hint bit update could be huge.

Simon has raised usability concerns that Sawada-san and Samrat have tried
to address by following his suggestions. I am not fully convinced though we
have got that right. But then there is hardly any feedback on that aspect
lately.

In general, from the discussion it seems that the patch is trying to solve
a real problem. Even though Tom and you feel that rsync is probably good
enough and more trustworthy than any other approach, my feeling is that
many including Fujii-san still disagree with that argument based on real
user feedback. So where do we go from here ? I think it will really help
Sawada-san and Samrat if we can provide them some solid feedback and
approach to take.

Lately, I was thinking about whether we could do something else to track file system
updates without relying on WAL inspection and then use pg_rewind to solve
this problem. Some sort of preload library mechanism is one such
possibility. But I haven't really thought this through entirely.

Thanks,
Pavan

-- 
Pavan Deolasee
http://www.linkedin.com/in/pavandeolasee


Re: [HACKERS] Re: custom hash-based COUNT(DISTINCT) aggregate - unexpectedly high memory consumption

2013-10-08 Thread Tomas Vondra
On 8 October 2013, 11:42, Atri Sharma wrote:

 I've made some significant improvements in the chaining version (in the
 master branch), now getting about the memory consumption I've estimated.

 I agree, we can hope to reduce the memory consumption by making changes in
 the current chaining implementation. I would like to look into changing
 the data structure used for chaining from singly linked list to maybe skip
 list or something else.

Just to be sure - I haven't been messing with the HashAggregate
implementation directly, but with a custom aggregate. But feel free to
tweak the built-in hash table ;-)

Tomas





[HACKERS] Urgent Help Required

2013-10-08 Thread shailesh singh
I got this message while running VACUUM FULL from the standalone backend. Now my
database is not starting. Help, please.

backend> vacuum full debug;
WARNING:  database debug must be vacuumed within 99 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
debug.
ERROR:  relation debug does not exist
backend> vacuum full;
WARNING:  database debug must be vacuumed within 98 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
debug.
WARNING:  database debug must be vacuumed within 97 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
debug.
WARNING:  database debug must be vacuumed within 96 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
debug.
WARNING:  database debug must be vacuumed within 95 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
debug.
WARNING:  database debug must be vacuumed within 94 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
debug.
WARNING:  database debug must be vacuumed within 93 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
debug.
WARNING:  database debug must be vacuumed within 92 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
debug.
WARNING:  database debug must be vacuumed within 91 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
debug.
WARNING:  database debug must be vacuumed within 90 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
debug.
WARNING:  database debug must be vacuumed within 89 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
debug.
WARNING:  database debug must be vacuumed within 88 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
debug.
WARNING:  database debug must be vacuumed within 87 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
debug.
WARNING:  database debug must be vacuumed within 86 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
debug.
WARNING:  database debug must be vacuumed within 85 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
debug.
WARNING:  database debug must be vacuumed within 84 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
debug.
WARNING:  database debug must be vacuumed within 83 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
debug.
ERROR:  could not access status of transaction 449971277
DETAIL:  could not open file "pg_clog/01AD": No such file or directory


Now what?

Thanks in advance.

Shailesh Singh


Re: [HACKERS] Compression of full-page-writes

2013-10-08 Thread Haribabu kommi
On 08 October 2013 15:22 KONDO Mitsumasa wrote:
 (2013/10/08 17:33), Haribabu kommi wrote:
 The checkpoint_timeout and checkpoint_segments are increased to make sure no 
 checkpoint happens during the test run.

Your settings easily trigger checkpoints even with checkpoint_segments = 256. I don't 
know the number of disks in your test server; in my test server, which has 4 
magnetic disks (15k rpm), postgres generates 50 - 100 WAL segments per minute.

A manual checkpoint is executed before the test starts, and I verified that no 
checkpoint happened during the run by increasing checkpoint_warning.

Also, I cannot understand your setting of sync_commit = off. This setting 
tends to cause a CPU bottleneck and data loss, and it is not typical in database 
usage.
Therefore, your test is not a fair comparison for Fujii's patch.

I chose the sync_commit=off mode because it generates more tps, and thus it 
increases the volume of WAL.
I will test with sync_commit=on mode and provide the test results.

Regards,
Hari babu.



Re: [HACKERS] insert throw error when year field len > 4 for timestamptz datatype

2013-10-08 Thread Rushabh Lathia
On Tue, Oct 8, 2013 at 1:34 AM, Robert Haas robertmh...@gmail.com wrote:

 On Mon, Oct 7, 2013 at 12:41 AM, Rushabh Lathia
 rushabh.lat...@gmail.com wrote:
  Hmm, right, it has some inconsistency when the year length is 6. But the patch
  is based on the assumption that a 5-digit number is a year, because YMD and HMS
  require at least six digits. A year given as a 6-digit number conflicts with
  YMD and HMS, and that is the reason it ends up with an error. So with the
  patch's approach that's expected behaviour for me.
 
  I spent a good amount of time thinking about how we can improve the behaviour,
  or how to change the assumptions about the year field, YMD and HMS. At the
  current point of time it seems difficult to me, because the postgres date
  module is tightly built around a few assumptions, and changing that may lead
  to a big project.
  Not sure, but personally I feel that the patch which was submitted earlier
  was definitely a good improvement.
 
  Any other suggestion or thought for improvement?

 I'm not entirely convinced that this patch is heading in the right
 direction.  The thing is, it lets you use 5-digit years always and
 longer years only in some contexts.  So I'm not sure this is really
 good enough for unambiguous date input.  If you want that, you should
 probably be using the trusty YYYY-MM-DD format.  But if you don't
 need that, then isn't a five-digit year most likely a typo?


I do agree with you to a certain extent.

But there is already ambiguity in the postgres date module.

For example:
-- Doing a select with year field len > 4
edb=# select '10-10-20000'::timestamp;
         timestamp
---------------------------
 Thu Oct 10 00:00:00 20000
(1 row)

edb=# create table test ( a timestamp );
CREATE TABLE
-- When we try to insert the same output, it throws an error
edb=# insert into test values ('Thu Oct 10 00:00:00 20000');
ERROR:  invalid input syntax for type timestamp: "Thu Oct 10 00:00:00 20000"
LINE 1: insert into test values ('Thu Oct 10 00:00:00 20000');
                                 ^
Of course the user can use a specific format and then this kind of date
can be used.
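
As an illustration of that format dependence (my example, not from the
original mail): the same string parses differently depending on DateStyle,
which is why unambiguous input needs an explicit format:

edb=# SET datestyle = 'ISO, MDY';
edb=# select '10-10-06'::timestamp;  -- gives 2006-10-10 00:00:00
edb=# SET datestyle = 'ISO, YMD';
edb=# select '10-10-06'::timestamp;  -- gives 2010-10-06 00:00:00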


 This might be a case where throwing an error is actually better than trying
 to make sense of the input.

 I don't feel super-strongly about this, but I offer it as a question
 for reflection.



At the same time, I do agree that fixing this kind of issue in the postgres
datetime module is a bit difficult without some assumptions. Personally I feel
the patch does add some value, but it is not fully compatible with every kind
of year field format.

Bruce,

Do you have any thought/suggestion ?




 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company




-- 
Rushabh Lathia


Re: [HACKERS] Re: custom hash-based COUNT(DISTINCT) aggregate - unexpectedly high memory consumption

2013-10-08 Thread Atri Sharma
On Tue, Oct 8, 2013 at 4:15 PM, Tomas Vondra t...@fuzzy.cz wrote:
 On 8 October 2013, 11:42, Atri Sharma wrote:

 I've made some significant improvements in the chaining version (in the
 master branch), now getting about the memory consumption I've estimated.

 I agree, we can hope to reduce the memory consumption by making changes in
 the current chaining implementation. I would like to look into changing
 the data structure used for chaining from singly linked list to maybe skip
 list or something else.

 Just to be sure - I haven't been messing with the HashAggregate
 implementation directly, but with a custom aggregate. But feel free to
 tweak the built-in hash table ;-)

 Tomas


Heh.

Do you mind if I try it out on the custom agg you built? I assume it
is on the github repo link you shared?

Regards,

Atri


-- 
Regards,

Atri
l'apprenant




Re: [HACKERS] pgbench progress report improvements - split 3 v2 - A

2013-10-08 Thread Fabien COELHO



On Sat, Oct 5, 2013 at 6:10 PM, Noah Misch n...@leadboat.com wrote:

The sum of the squares of the latencies wraps after 2^63/(10^12 * avg_latency
* nclients) seconds.  That's unlikely to come up with the ordinary pgbench
script, but one can reach it in a few hours when benchmarking a command that
runs for many seconds.  If we care, we can track the figure as a double.  I
merely added a comment about it.
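
To make that bound concrete (example numbers mine, not Noah's): with 32
clients and a 30-second average latency, the formula gives
2^63 / (10^12 * 30 * 32), roughly 9600 seconds, i.e. under three hours,
which is consistent with the "few hours" estimate above.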


Using a double seems wise to me.


I think that both int64 and double are fine.

The likelihood of having underflows (double) or overflows/wraparounds 
(int64) seems pretty low for any practical run. I took the former because 
it is exact... up to the point where it is totally wrong. The latter may 
provide underestimated results silently: it would fail more gradually.


So I'm okay if it is changed to double consistently.

--
Fabien.




Re: [HACKERS] Patch: FORCE_NULL option for copy COPY in CSV mode

2013-10-08 Thread Andrew Dunstan


On 10/07/2013 11:34 PM, Amit Kapila wrote:

On Tue, Oct 8, 2013 at 12:55 AM, Andrew Dunstan and...@dunslane.net wrote:

On 10/07/2013 03:06 PM, Robert Haas wrote:



Also if your use case is to treat empty strings as NULL (as per above
documentation), can't it be handled with WITH NULL AS option.
For example, something like:

postgres=# COPY testnull FROM stdin with CSV NULL AS E'';
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.

50,
\.

postgres=# select * from testnull;
   a  |  b
+--
   50 | NULL
(1 row)

Good point.  If this patch is just implementing something that can
already be done with another syntax, we don't need it.



Isn't the point of this option to allow a *quoted* empty string to be forced
to NULL? If so, this is not testing the same case - in fact the COPY command
above just makes explicit the default CSV NULL setting anyway.

I am really not sure whether the whole purpose of the patch can be achieved
with the existing syntax; nor is that explained clearly.
However, the proposal hasn't discussed why it's not a good idea to extend
the similar existing syntax, COPY .. NULL, which is used to replace a string
with NULLs.
The description of NULL says: "Specifies the string that represents a null value."
Now why can't this syntax be extended to support a quoted empty string,
if that's not supported currently?
I have not checked completely; if it's difficult or impossible to support
this in the existing syntax, then that adds even more value to introducing
new syntax.

By asking the above question, I don't mean that we should not go for the
newly proposed syntax; rather, it's to know and understand the benefit of
the new syntax. It also helps reviewers during CF review if a proposal
involves new syntax and that was discussed previously.



Quite apart from any other consideration, this suggestion is inferior to 
what's proposed in that it's an all-or-nothing deal, while the patch 
allows you to specify the behaviour very explicitly on a per-column 
basis. I can well imagine wanting to be able to force a quoted empty 
string to null for numeric fields but not for text.
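
For concreteness, here is a sketch of the per-column behaviour under
discussion (syntax assumed from the patch's FORCE_NULL spelling, not taken
from a committed release):

    CREATE TABLE t (a numeric, b text);
    COPY t FROM STDIN WITH (FORMAT csv, FORCE_NULL (a));
    "",""
    \.
    -- a is NULL (the quoted empty string is forced to null);
    -- b remains an empty string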


The basic principle of our CSV processing is that we don't ever turn a 
NULL into something quoted and we don't ever turn something quoted into 
NULL. That's what lets us round-trip test just about every combination 
of options. I'm only going to be happy violating that, as this patch 
does, in a very explicit and controlled way.


cheers

andrew




Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-08 Thread Adrian Klaver

On 10/08/2013 03:55 AM, shailesh singh wrote:

I got this message while running VACUUM FULL from the standalone backend. Now my
database is not starting. Help, please.

backend> vacuum full debug;
WARNING:  database debug must be vacuumed within 99 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in
debug.
ERROR:  relation debug does not exist




Now what?


First some information.

1) What version of Postgres are you using?

2) Does database debug in fact exist or not?
In other words does it show up with \l in psql?

Also, it is not necessary to use FULL with the VACUUM.



Thanks in advance.

Shailesh Singh



--
Adrian Klaver
adrian.kla...@gmail.com




Re: [HACKERS] Compression of full-page-writes

2013-10-08 Thread KONDO Mitsumasa

Hi,

I tested the DBT-2 benchmark on a single instance and with synchronous replication.
Unfortunately, my benchmark results did not show many differences...

* Test server
   Server: HP Proliant DL360 G7
   CPU:Xeon E5640 2.66GHz (1P/4C)
   Memory: 18GB(PC3-10600R-9)
   Disk:   146GB(15k)*4 RAID1+0
   RAID controller: P410i/256MB

* Result
** Single instance**
| NOTPM | 90%tile | Average | S.Deviation
+---+-+-+-
no-patched  | 3322.93   | 20.469071   | 5.882   | 10.478
patched | 3315.42   | 19.086105   | 5.669   | 9.108


** Synchronous Replication **
| NOTPM | 90%tile | Average | S.Deviation
+---+-+-+-
no-patched  | 3275.55   | 21.332866   | 6.072   | 9.882
patched | 3318.82   | 18.141807   | 5.757   | 9.829

** Detail of result
http://pgstatsinfo.projects.pgfoundry.org/DBT-2_Fujii_patch/


I set full_page_writes = compress with Fujii's patch in DBT-2. But it does not
seem to have much effect on reducing WAL volume. I will try the DBT-2 benchmark 
once more, and also try normal pgbench on my test server.


Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center
attachment: single-throughput.png
attachment: synchronous-throughput.png
attachment: single-latency.png
attachment: synchronous-latency.png


Re: [HACKERS] insert throw error when year field len > 4 for timestamptz datatype

2013-10-08 Thread Bruce Momjian
On Tue, Oct  8, 2013 at 05:08:17PM +0530, Rushabh Lathia wrote:
 This
 might be a case where throwing an error is actually better than trying
 to make sense of the input.
 
 I don't feel super-strongly about this, but I offer it as a question
 for reflection.
 
 
 
 At the same time I do agree fixing this kind of issue in postgres datetime
 module 
 is bit difficult without some assumption. Personally I feel patch do add some
 value but not fully compatible with all kind of year field format.
 
 Bruce,
 
 Do you have any thought/suggestion ?

I think Robert is asking the right question:  Is it better to accept
5-digit years, or throw an error?  Doing anything new with 6-digit years
is going to break the much more common use of YMD or HMS.

The timestamp data type only supports values to year 294276, so the full
6-digit range isn't even supported.  ('DATE' does go higher.)

The entire date/time processing allows imprecise input, so throwing an
error on clear 5-digit years seems wrong.  Basically, we have gone down
the road of interpreting date/time input liberally, so throwing an error
on a clear 5-digit year seems odd.

On the other hand, this has never come up before because no one cared
about 5-digit years, so you could argue that 5-digit years require
precise specification, which would favor throwing an error.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Compression of full-page-writes

2013-10-08 Thread KONDO Mitsumasa

(2013/10/08 20:13), Haribabu kommi wrote:

 I chose the sync_commit=off mode because it generates more tps, and thus it 
 increases the volume of WAL.

I did not think of that. Sorry...


I will test with sync_commit=on mode and provide the test results.

OK. Thanks!

--
Mitsumasa KONDO
NTT Open Source Software Center




Re: [HACKERS] Bugfix and new feature for PGXS

2013-10-08 Thread Andrew Dunstan


On 10/07/2013 08:47 PM, Peter Eisentraut wrote:


I suspect this line

submake-libpq: $(libdir)/libpq.so ;

will cause problems on platforms with a different extension (e.g. OS X).



suggested fix is below.

cheers

andrew

diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index bb732bb..b562378 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -422,7 +422,11 @@ ifndef PGXS
 submake-libpq:
$(MAKE) -C $(libpq_builddir) all
 else
-submake-libpq: $(libdir)/libpq.so ;
+ifneq ($(PORTNAME),cygwin)
+submake-libpq: $(libdir)/libpq$(DLSUFFIX) ;
+else
+submake-libpq: $(libdir)/cygpq$(DLSUFFIX) ;
+endif
 endif

 ifndef PGXS





Re: [HACKERS] SSI freezing bug

2013-10-08 Thread Kevin Grittner
Heikki Linnakangas hlinnakan...@vmware.com wrote:
 On 07.10.2013 23:45, Heikki Linnakangas wrote:

 To fix the bug that Hannu pointed out, we also need to apply these fixes:

 http://www.postgresql.org/message-id/52440266.5040...@vmware.com

 Per a chat with Bruce, I'm going to apply that patch now to get it into
 the minor releases.

Please do.  (Somehow I had it in my head that you already had done so.)

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] Release note fix for timeline item

2013-10-08 Thread Bruce Momjian
On Tue, Oct  8, 2013 at 01:25:30PM +0900, KONDO Mitsumasa wrote:
 (2013/10/08 10:35), Bruce Momjian wrote:
 docs:  update release notes for 8.4.18, 9.0.14, 9.1.10, 9.2.5, 9.3.1
 Thank you for creating good release note. I have one comment.
 
 In the 9.1 and 9.2 release notes, does "Improve WAL segment timeline
 handling during recovery" refer to the commit "Install recycled
 WAL segments with current timeline ID during recovery"? This is not
 such a simple problem fix.
 This bug caused PITR to fail when archive recovery finished partway
 through. When it occurred, archive recovery seemed to finish
 without problem.
 
 I think it is a comparatively big problem, so we should describe it in the
 release note.
 Please fix it as follows.
 
 +<listitem>
 + <para>
 +  Fix WAL segment timeline handling during recovery (Mitsumasa
 +  KONDO, Heikki Linnakangas)
 + </para>
 +
 + <para>
 +  When target timeline is up and executing restart point in archive recovery
 +  mode, archive recovery is failed on the way, because of failing recycle of
 +  WAL. When this problem occurred, it seemed to finish success of archive
 +  recovery without problem.
 + </para>
 +</listitem>

First, I want to apologize for not completing the release notes earlier
so that others could review them.  I started working on the release
notes on Friday, but my unfamiliarity with the process and fear of
making a mistake caused many delays.  I have improved the documentation
on the process which will hopefully help next time.

Second, I have read the thread behind this patch:


http://www.postgresql.org/message-id/flat/51798552.2010...@vmware.com#51798552.2010...@vmware.com

You are right that a lot of detail is skipped in the release note
text.  I have developed the attached patch which I think does a better
job.  Is it OK?

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +
diff --git a/doc/src/sgml/release-9.2.sgml b/doc/src/sgml/release-9.2.sgml
new file mode 100644
index 75a4c98..bf01c37
*** a/doc/src/sgml/release-9.2.sgml
--- b/doc/src/sgml/release-9.2.sgml
***************
*** 216,223 ****
  
  <listitem>
   <para>
!   Improve WAL segment timeline handling during recovery (Heikki
!   Linnakangas)
   </para>
  </listitem>
  
--- 216,228 ----
  
  <listitem>
   <para>
!   Fix WAL segment timeline handling during recovery (Mitsumasa Kondo,
!   Heikki Linnakangas)
!  </para>
! 
!  <para>
!   WAL file recycling during standby recovery could lead to premature
!   recovery completion, resulting in data loss.
   </para>
  </listitem>
  



Re: [HACKERS] Re: custom hash-based COUNT(DISTINCT) aggregate - unexpectedly high memory consumption

2013-10-08 Thread Tomas Vondra
On 8 October 2013, 13:52, Atri Sharma wrote:
 On Tue, Oct 8, 2013 at 4:15 PM, Tomas Vondra t...@fuzzy.cz wrote:
 On 8 October 2013, 11:42, Atri Sharma wrote:

 I've made some significant improvements in the chaining version (in
 the
 master branch), now getting about the memory consumption I've
 estimated.

 I agree, we can hope to reduce the memory consumption by making changes
 in
 the current chaining implementation. I would like to look into changing
 the data structure used for chaining from singly linked list to maybe
 skip
 list or something else.

 Just to be sure - I haven't been messing with the HashAggregate
 implementation directly, but with a custom aggregate. But feel free to
 tweak the built-in hash table ;-)

 Tomas


 Heh.

 Do you mind if I try it out on the custom agg you built? I assume it
 is on the github repo link you shared?

Not at all, that's why I pushed that into a public repo. The master
branch contains the regular chained hash table, the open addressing is in
a separate branch (also in the repo).

Tomas





Re: [HACKERS] GIN improvements part 1: additional information

2013-10-08 Thread Alexander Korotkov
Hi, Tomas!

On Sun, Oct 6, 2013 at 3:58 AM, Tomas Vondra t...@fuzzy.cz wrote:

 I've attempted to rerun the benchmark tests I did a few weeks ago, but
 I got repeated crashes when loading the data (into a table with a
 tsvector+gin index).

 Right before a crash, there's this message in the log:

    PANIC:  not enough space in leaf page!


Thanks for testing. Heikki's version of the patch doesn't work for me either,
even on much simpler examples. I can try to get it working if he answers
my question about the GinDataLeafPageGetPostingList* macros.

--
With best regards,
Alexander Korotkov.


Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-08 Thread shailesh singh
Dear all,
 First of all, I wish to share the actual error message.

Below are the commands I executed in the terminal on my server:


-bash-3.2$ touch fix.sql
-bash-3.2$ echo "VACUUM FULL;" > fix.sql
-bash-3.2$ postgres -D /var/lib/pgsql/data patnadbold < fix.sql
WARNING: database patnadbold must be vacuumed within 100 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.
WARNING: database patnadbold must be vacuumed within 100 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.

PostgreSQL stand-alone backend 8.1.11
backend WARNING: database patnadbold must be vacuumed within 99
transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.
WARNING: database patnadbold must be vacuumed within 98 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.
WARNING: database patnadbold must be vacuumed within 97 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.
WARNING: database patnadbold must be vacuumed within 96 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.
WARNING: database patnadbold must be vacuumed within 95 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.
WARNING: database patnadbold must be vacuumed within 94 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.
WARNING: database patnadbold must be vacuumed within 93 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.
WARNING: database patnadbold must be vacuumed within 92 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.
WARNING: database patnadbold must be vacuumed within 91 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.
WARNING: database patnadbold must be vacuumed within 90 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.
WARNING: database patnadbold must be vacuumed within 89 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.
WARNING: database patnadbold must be vacuumed within 88 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.
WARNING: database patnadbold must be vacuumed within 87 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.
WARNING: database patnadbold must be vacuumed within 86 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.
WARNING: database patnadbold must be vacuumed within 85 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.
WARNING: database patnadbold must be vacuumed within 84 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.
ERROR: could not access status of transaction 33011
DETAIL: could not open file "pg_clog/0000": No such file or directory
exit


After this I am able to stop/start my db server, but I am not able to
connect to my databases (it tells me to run a full-database VACUUM in the
patnadbold database first).


1) I am using postgres version 8.4.
2) I had two databases on this server: i) patnadbold  ii) patnaonlinedb

patnadbold is of no use to me; if I lose this database at this moment, that
is also fine with me.
I want to connect to patnaonlinedb anyhow and perform a backup of it.
A solution, please.
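
A sketch of the backup step being asked for, once the server accepts
connections again (the database name is from this thread; the rest is
standard pg_dump usage, not verified against this installation):

    pg_dump -U postgres patnaonlinedb > patnaonlinedb.sql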


On Tue, Oct 8, 2013 at 6:19 PM, Adrian Klaver adrian.kla...@gmail.comwrote:

 On 10/08/2013 03:55 AM, shailesh singh wrote:

 I got this message while running VACUUM FULL from the standalone backend. Now my
 database is not starting. Help, please.

 backend> vacuum full debug;
 WARNING:  database debug must be vacuumed within 99 transactions
 HINT:  To avoid a database shutdown, execute a full-database VACUUM in
 debug.
 ERROR:  relation debug does not exist



 Now what?


 First some information.

 1) What version of Postgres are you using?

 2) Does database debug in fact exist or not?
 In other words does it show up with \l in psql?

 Also, it is not necessary to use FULL with the VACUUM.



 Thanks in advance.

 Shailesh Singh



 --
 Adrian Klaver
 adrian.kla...@gmail.com




-- 
  With Regards,
  शैलेश सिंह |Shailesh Singh
 +९१-९६५०३१७५१७ | +91-9650317517


Re: [HACKERS] [GENERAL] Urgent Help Required

2013-10-08 Thread Adrian Klaver

On 10/08/2013 08:03 AM, shailesh singh wrote:

Dear all,
  First of all i wish to share actual error meassge,

Below are the queries i had executed on the terminal on my server


-bash-3.2$ touch fix.sql
-bash-3.2$ echo VACUUM FULL;  fix.sql
-bash-3.2$ postgres -D /var/lib/pgsql/data patnadbold  fix.sql
WARNING: database patnadbold must be vacuumed within 100 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.
WARNING: database patnadbold must be vacuumed within 100 transactions
HINT: To avoid a database shutdown, execute a full-database VACUUM in
patnadbold.

PostgreSQL stand-alone backend 8.1.11




After this I am able to stop/start my db server, but I am not able to
connect to my databases (it tells me to run a full-database VACUUM in the
patnadbold database first).


1)I am using postgres 8.4 version.


This seems to be at odds with "PostgreSQL stand-alone backend 8.1.11".
Are you sure you are working on the correct database cluster?


2) I had two databases on this server i) patnadbold  ii) patnaonlinedb

For me patnadbold is of no use; if I lose that database at this moment, that
is also fine with me.
I want to connect to patnaonlinedb anyhow and perform a backup
of it. A solution, please.





--
Adrian Klaver
adrian.kla...@gmail.com




Re: [HACKERS] Completing PL support for Event Triggers

2013-10-08 Thread Dimitri Fontaine
Hi,

Thanks for your review, and sorry about the delays in answering; I've been
quite busy with other matters recently. It seems like things are going
to be smoother this week.

Peter Eisentraut pete...@gmx.net writes:
 Review of the PL/Tcl part:  The functionality looks OK.  There are some
 cosmetic issues.  If those are addressed, I think this can be committed.

 In the documentation, "Event Triggers" -> "Event triggers".

Done in the attached v2 of the patch.

 For the example in the documentation, please show the output, that is,
 what the trigger outputs.

I checked and we don't do that for plpgsql… if you insist, I'll be happy
to prepare something though.
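
For reference, the documentation example in question is roughly the
following (a plausible reconstruction; details may differ from the actual
patch):

    CREATE OR REPLACE FUNCTION tclsnitch() RETURNS event_trigger AS $$
      elog NOTICE "tclsnitch: $TG_event $TG_tag"
    $$ LANGUAGE pltcl;

    CREATE EVENT TRIGGER tcl_a_snitch ON ddl_command_start
      EXECUTE PROCEDURE tclsnitch();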

 Remove the extra space before "tclsnitch".

Done in the attached.

 Document the return value (it is ignored).  Will we need the return
 value in a future expansion?  Should we leave room for that?

That's documented already in the main Event Triggers chapter of the
documentation, please see the following URL. Should we repeat the docs
in the per-PL chapters?

  http://www.postgresql.org/docs/9.3/interactive/event-trigger-definition.html

 Change command trigger to event trigger in several places.

Done.

 compile_pltcl_function() does not catch trigger function being called as
 event trigger or vice versa.  Not sure if that should be covered.

It's not covered in the PLpgSQL parts, at least.

 The first PG_TRY block in pltcl_event_trigger_handler() is unnecessary,
 because there is nothing in there that can throw an exception.

Cleaned.

 I'd remove some comments from pltcl_event_trigger_handler().  They were
 obviously copied from pltcl_trigger_handler(), but that function is much
 longer, so more comments might have been appropriate there.

Done

 For plperl, the previous reviews mostly apply analogously.  In addition,
 I have these specific points:

 - In plperl_event_trigger_build_args(), the hv_ksplit() call is probably
 unnecessary.

Yeah, removed.

 - plperl_call_perl_event_trigger_func and plperl_call_perl_trigger_func
 as well as plperl_event_trigger_handler and plperl_trigger_handler have
 a lot of overlap could perhaps be combined or refactored differently.

I didn't do that in the attached.

 - In plperl_event_trigger_handler(), a point is being made about setting
 up current_call_data before SPI_connect.  Other handler functions don't
 do this, though.  It's not quite clear to me on first reading why it's
 done differently here.

I don't know where that comes from. I know that without it (just tried)
the regression tests are all failing.

 You introduced some new compiler warnings, please fix those.

 http://pgci.eisentraut.org/jenkins/view/PostgreSQL/job/postgresql_commitfest_world/80/warnings19Result/new/?

I did that in the attached, thanks.

Is there a simple enough way to recompile changed files at -O0? To catch
all the compiler warnings we want to catch, my experience is that the
only way is to reconfigure and then recompile the whole tree with different
compiler optimisation targets, as the compiler then sees different
things.

I didn't find a way to be able to run the compiler once and be done, and
it's really easy to forget to redo the whole tree just in case.

 In the source code, I'm dubious about the use of is_dml_trigger.  I can
 see where you are coming from, but in most of the code, a trigger is a
 trigger and an event trigger is an event trigger.  Let's not introduce
 more terminology.

Well, that's how it's been committed in PLpgSQL, and I kept that in the
others. In the attached, I used the terminology you seem to be proposing
here, that is, is_trigger and is_event_trigger.

 For me, the plpython patch causes an (well, many) assertion failures in
 the regression tests, because this change is wrong:

I still need to review my python setup here to be able to build
something, I had problems with master even in a debian VM IIRC.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



event_trigger_plperl.v2.patch.gz
Description: Binary data


event_trigger_pltcl.v2.patch.gz
Description: Binary data



Re: [HACKERS] Patch for fail-back without fresh backup

2013-10-08 Thread Heikki Linnakangas

On 08.10.2013 13:00, Pavan Deolasee wrote:

On Tue, Oct 8, 2013 at 3:16 PM, Andres Freund and...@2ndquadrant.com wrote:


It is my impression that there still are several people having pretty
fundamental doubts about this approach in general. From what I remember
neither Heikki, Simon, Tom nor me were really convinced about this
approach.


IIRC you and Tom were particularly skeptical about the approach. But do you
see a technical flaw or a show stopper with the approach? Heikki has
written pg_rewind, which is really very cool. But it fails to handle the
hint bit updates, which are not WAL logged unless of course checksums are
turned on. We could have a GUC-controlled option to turn WAL logging on for
hint bit updates and then use pg_rewind for the purpose. But I did not see
any agreement on that either. The performance implication of WAL logging
every hint bit update could be huge.


Yeah, I definitely think we should work on the pg_rewind approach
instead of this patch. It's a lot more flexible. The performance hit of
WAL-logging hint bit updates is the price you have to pay, but a lot of
people were OK with that to get page checksums, so I think a lot of
people would be OK with it for this purpose too. As long as it's
optional, of course. And anyone using page checksums is already paying
that price.


- Heikki




Re: [HACKERS] Urgent Help Required

2013-10-08 Thread Robert Haas
*Don't* VACUUM FULL.  Just VACUUM.  It's not the same thing.

...Robert




Re: [HACKERS] proposal: Set effective_cache_size to greater of .conf value, shared_buffers

2013-10-08 Thread Bruce Momjian
On Thu, Sep  5, 2013 at 05:14:37PM -0400, Bruce Momjian wrote:
 On Thu, Sep  5, 2013 at 06:14:33PM +0200, Magnus Hagander wrote:
   I have developed the attached patch which implements an auto-tuned
   effective_cache_size which is 4x the size of shared buffers.  I had to
   set effective_cache_size to its old 128MB default so the EXPLAIN
   regression tests would pass unchanged.
  
  That's not really autotuning though. ISTM that making the *default* 4
  x shared_buffers might make perfect sense, but do we really need to
  hijack the value of -1 for that? That might be useful for some time
  when we have actual autotuning, that somehow inspects the system and
  tunes it from there.
 
  I also don't think it should be called autotuning, when it's just a
  smarter default value.
  
  I like the feature, though, just not the packaging.
 
 That auto-tuning text came from the wal_buffer documentation, which
 does exactly this based on shared_buffers:
 
 The contents of the WAL buffers are written out to disk at every
 transaction commit, so extremely large values are unlikely to
 provide a significant benefit.  However, setting this value to at
 least a few megabytes can improve write performance on a busy
 -- server where many clients are committing at once.  The auto-tuning
---
 selected by the default setting of -1 should give reasonable
 results in most cases.
 
 I am fine with rewording and not using -1, but we should change the
 wal_buffer default and documentation too then.  I am not sure what other
 value than -1 to use?  0?  I figure if we ever get better auto-tuning,
 we would just remove this functionality and make it better.

Patch applied with a default of 4x shared buffers.  I have added a 9.4
TODO that we might want to revisit this.
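For readers following along, the shape of the applied change is roughly this
(an illustrative sketch, not the committed code; NBuffers is the real backend
variable holding shared_buffers in buffer pages, while the -1 sentinel
handling is simplified here):

    /*
     * Sketch: if effective_cache_size was left at the "auto" sentinel,
     * derive it as 4x shared_buffers at startup.  Both settings are
     * measured in buffer pages.
     */
    if (effective_cache_size == -1)
        effective_cache_size = 4 * NBuffers;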

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] logical changeset generation v6.1

2013-10-08 Thread Robert Haas
On Mon, Oct 7, 2013 at 9:32 AM, Andres Freund and...@2ndquadrant.com wrote:
 - RelationIsDoingTimetravel is still a crappy name.  How about
 RelationRequiredForLogicalDecoding?  And maybe the reloption
 treat_as_catalog_table can become required_for_logical_decoding.

 Hm. I don't really like the name, required seems to imply that it's
 necessary to turn this on to get data replicated in that relation. How
 about accessible_during_logical_decoding or user_catalog_table? The
 latter would allow us to use it to add checks for user relations used in
 indexes which need a treatment similar to enums.

user_catalog_table is a pretty good description, but should we worry
about the fact that logical replication isn't mentioned in there
anywhere?

In what way do you feel that it's more clear to say *accessible
during* rather than *required for* logical decoding?

I was trying to make the naming consistent; i.e. if we have
RelationRequiredForLogicalDecoding then name the option to match.

 All in all, it seems to me that we shouldn't try to punt.  Maybe we
 should have something that works like ALTER TABLE name CLUSTER ON
 index_name to configure which index should be used for logical
 replication.  Possibly this same syntax could be used as ALTER
 MATERIALIZED VIEW to set the candidate key for that case.

 How about using the current logic by default but allow to tune it
 additionally with an option like that?

I'm OK with defaulting to the primary key if there is one, but I think
that no other candidate key should be entertained unless the user
configures it.  I think the behavior we get without that will be just
too weird.  We could use the same logic you're proposing here for
CLUSTER, too, but we don't; that's because we've (IMHO, rightly)
decided that the choice of index is too important to be left to
chance.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] unaccent module - two params function should be immutable

2013-10-08 Thread Bruce Momjian
On Tue, Sep 24, 2013 at 05:36:58PM -0400, Bruce Momjian wrote:
 On Tue, Sep 17, 2013 at 10:15:47AM -0400, Robert Haas wrote:
  On Sat, Sep 14, 2013 at 9:42 AM, Pavel Stehule pavel.steh...@gmail.com 
  wrote:
   I have developed the attached patch based on your suggestion.  I did not
   see anything in the code that would make it STABLE, except a lookup of a
   dictionary library:
  
    dictOid = get_ts_dict_oid(stringToQualifiedNameList("unaccent"),
    false);
  
    yes, it's a risk, but it is similar with timezones and other external data.
  
  That's a catalog lookup - not a reliance on external data.
 
 Sorry, I was wrong.  Only unaccent_dict() calls get_ts_dict_oid(), and
 we aren't changing the signature of that function.

Applied.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] space reserved for WAL record does not match what was written: panic on windows

2013-10-08 Thread Robert Haas
On Mon, Oct 7, 2013 at 4:47 PM, Andres Freund and...@2ndquadrant.com wrote:
 On 2013-10-07 13:25:17 -0400, Robert Haas wrote:
 On Fri, Oct 4, 2013 at 8:19 AM, Andres Freund and...@2ndquadrant.com wrote:
  Could it be that MAXALIGN/TYPEALIGN doesn't really work for values
  bigger than 32bit?
 
  #define MAXALIGN(LEN)   TYPEALIGN(MAXIMUM_ALIGNOF, (LEN))
  #define TYPEALIGN(ALIGNVAL,LEN)  \
  (((intptr_t) (LEN) + ((ALIGNVAL) - 1)) & ~((intptr_t) ((ALIGNVAL) - 1)))

 Isn't the problem, more specifically, that it doesn't work for values
 larger than an intptr_t?

 Well, yes. And intptr_t is 32bit wide on a 32bit platform.

 And does that indicate that intptr_t is the wrong type to be using here?

 No, I don't think so. intptr_t is defined to be a integer type to which
 you can cast a pointer, cast it back and still get the old value. On
 32bit platforms it usually will be 32bit wide.
 All that's fine for the classic usages of TYPEALIGN where it's used on
 pointers or lengths of stuff stored in memory. Those will always fit in
 32bit on a 32bit platform. But here we're using it on explicit 64bit
 types (XLogRecPtr).
 Now, you could argue that we should make it use 64bit math everywhere -
 but I think that might incur quite the price on some 32bit
 platforms. It's used in the tuple decoding stuff, that's quite the hot
 path in some workloads.

 So I guess it's either a separate macro, or we rewrite that piece of
 code to work slightly differently and work directly on the length or
 such.

 Maybe we should add a StaticAssert ensuring the TYPEALIGN macro only
 gets passed 32bit types?

I think having two macros that behave identically on all platforms
anyone cares about[1] but which can cause difficult-to-find
corner-case-bugs on other platforms is just a recipe for disaster.  I
pledge to screw that up at least once.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

[1] And by anyone, I mean me.




Re: [HACKERS] old warning in docs

2013-10-08 Thread Robert Haas
On Mon, Oct 7, 2013 at 7:51 AM, Andrew Dunstan
andrew.duns...@pgexperts.com wrote:
 Given that we have not supported releases older than 8.3 for quite a while,
 do we need to keep this in extend.sgml any longer?

+1 for removing that (but only in master, not the back-branches).

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] Another bug(?) turned up by the llvm optimization checker

2013-10-08 Thread Greg Stark
The commit below, specifically the change mentioned in the last paragraph
to fix isLockedRel, broke the following comment in addRangeTableEntry:

 * If pstate is NULL, we just build an RTE and return it without adding it
 * to an rtable list.

In fact isLockedRefname() will seg fault promptly if pstate is NULL. I'm
not clear why this behaviour is needed, though, since as far as I can tell
nothing in the code calls addRangeTableEntry or any of its derivatives with
pstate==NULL. I'm inclined to just remove the comment and the test for
pstate==NULL lower down, but I don't really know what the motivation for
this promised behaviour was in the first place, so I'm hesitant to do it on
my own.
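Roughly, the sequence that crashes is this (paraphrased; details may differ
from the actual parser source):

    /*
     * addRangeTableEntry now always consults the ParseState to choose a
     * lock level, and isLockedRefname() walks pstate's FOR UPDATE/SHARE
     * clauses -- so a NULL pstate crashes before the promised
     * "just build an RTE and return it" path can complete.
     */
    lockmode = isLockedRefname(pstate, refname) ? RowShareLock : AccessShareLock;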


commit 61e532820824504aa92ad93c427722d3fa9c1632
Author: Tom Lane t...@sss.pgh.pa.us
Date:   Tue Oct 27 17:11:18 2009 +

Make FOR UPDATE/SHARE in the primary query not propagate into WITH queries;
for example in
  WITH w AS (SELECT * FROM foo) SELECT * FROM w, bar ... FOR UPDATE
the FOR UPDATE will now affect bar but not foo.  This is more useful and
consistent than the original 8.4 behavior, which tried to propagate FOR
UPDATE into the WITH query but always failed due to assorted implementation
restrictions.  Even though we are in process of removing those restrictions,
it seems correct on philosophical grounds to not let the outer query's
FOR UPDATE affect the WITH query.

In passing, fix isLockedRel which frequently got things wrong in
nested-subquery cases: "FOR UPDATE OF foo" applies to an alias foo in the
current query level, not subqueries.  This has been broken for a long time,
but it doesn't seem worth back-patching further than 8.4 because the actual
consequences are minimal.  At worst the parser would sometimes get
RowShareLock on a relation when it should be AccessShareLock or vice versa.
That would only make a difference if someone were using ExclusiveLock
concurrently, which no standard operation does, and anyway FOR UPDATE
doesn't result in visible changes so it's not clear that the someone would
notice any problem.  Between that and the fact that FOR UPDATE barely works
with subqueries at all in existing releases, I'm not excited about worrying
about it.


-- 
greg


Re: [HACKERS] Patch: FORCE_NULL option for copy COPY in CSV mode

2013-10-08 Thread Robert Haas
On Tue, Oct 8, 2013 at 8:33 AM, Andrew Dunstan and...@dunslane.net wrote:
 On 10/07/2013 11:34 PM, Amit Kapila wrote:
 On Tue, Oct 8, 2013 at 12:55 AM, Andrew Dunstan and...@dunslane.net
 wrote:
 On 10/07/2013 03:06 PM, Robert Haas wrote:
 Also if your use case is to treat empty strings as NULL (as per the above
 documentation), can't it be handled with the WITH NULL AS option?
 For example, something like:

 postgres=# COPY testnull FROM stdin with CSV NULL AS E'';
 Enter data to be copied followed by a newline.
 End with a backslash and a period on a line by itself.

 50,
 \.

 postgres=# select * from testnull;
  a  |  b
 ----+------
  50 | NULL
 (1 row)

 Good point.  If this patch is just implementing something that can
 already be done with another syntax, we don't need it.


 Isn't the point of this option to allow a *quoted* empty string to be
 forced
 to NULL? If so, this is not testing the same case - in fact the COPY
 command
 above just makes explicit the default CSV NULL setting anyway.

 I am really not sure whether the whole purpose of the patch can be
 achieved with existing syntax; it isn't explained clearly.
 However, the proposal hasn't discussed why it's not a good idea to extend
 the similar existing syntax COPY .. NULL, which is used to replace a
 string with NULLs.
 The description of NULL says: "Specifies the string that represents a null
 value."
 Now why can't this syntax be extended to support a quoted empty string,
 if that's not supported currently?
 I have not checked completely; if it's difficult or impossible to support
 in the existing syntax, that only adds more value to introducing new
 syntax.

 By asking the above question I don't mean that we should not go for the
 newly proposed syntax; rather it's to know and understand the benefit of
 the new syntax, and it also helps reviewers during the CF review if a
 proposal involves new syntax that was discussed previously.


 Quite apart from any other consideration, this suggestion is inferior to
 what's proposed in that it's an all or nothing deal, while the patch allows
 you to specify the behaviour very explicitly on a per column basis. I can
 well imagine wanting to be able to force a quoted empty string to null for
 numeric fields but not for text.

 The basic principle of our CSV processing is that we don't ever turn a NULL
 into something quoted and we don't ever turn something quoted into NULL.
 That's what lets us round-trip test just about every combination of options.
 I'm only going to be happy violating that, as this patch does, in a very
 explicit and controlled way.

Andrew, are you planning to review & commit this?

Thanks,

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] unaccent module - two params function should be immutable

2013-10-08 Thread Pavel Stehule
2013/10/8 Bruce Momjian br...@momjian.us

 On Tue, Sep 24, 2013 at 05:36:58PM -0400, Bruce Momjian wrote:
  On Tue, Sep 17, 2013 at 10:15:47AM -0400, Robert Haas wrote:
   On Sat, Sep 14, 2013 at 9:42 AM, Pavel Stehule 
 pavel.steh...@gmail.com wrote:
    I have developed the attached patch based on your suggestion.  I did not
    see anything in the code that would make it STABLE, except a lookup of a
    dictionary library:
   
    dictOid = get_ts_dict_oid(stringToQualifiedNameList("unaccent"), false);
   
    yes, it's a risk, but it is similar with timezones and other external data.
  
   That's a catalog lookup - not a reliance on external data.
 
  Sorry, I was wrong.  Only unaccent_dict() calls get_ts_dict_oid(), and
  we aren't changing the signature of that function.

 Applied.


nice

thank you

Regards

Pavel Stehule



 --
   Bruce Momjian  br...@momjian.us  http://momjian.us
   EnterpriseDB http://enterprisedb.com

   + It's impossible for everything to be true. +



Re: [HACKERS] unaccent module - two params function should be immutable

2013-10-08 Thread Bruce Momjian
On Tue, Oct  8, 2013 at 06:31:03PM +0200, Pavel Stehule wrote:
 
 
 
 2013/10/8 Bruce Momjian br...@momjian.us
 
 On Tue, Sep 24, 2013 at 05:36:58PM -0400, Bruce Momjian wrote:
  On Tue, Sep 17, 2013 at 10:15:47AM -0400, Robert Haas wrote:
   On Sat, Sep 14, 2013 at 9:42 AM, Pavel Stehule 
 pavel.steh...@gmail.com
  wrote:
 I have developed the attached patch based on your suggestion.  I did not
 see anything in the code that would make it STABLE, except a lookup of a
 dictionary library:
   
        dictOid = get_ts_dict_oid(stringToQualifiedNameList("unaccent"), false);
   
 yes, it's a risk, but it is similar with timezones and other external data.
  
   That's a catalog lookup - not a reliance on external data.
 
  Sorry, I was wrong.  Only unaccent_dict() calls get_ts_dict_oid(), and
  we aren't changing the signature of that function.
 
 Applied.
 
 
 nice
 
 thank you

Do we need to update any version or anything?  I didn't think so.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] unaccent module - two params function should be immutable

2013-10-08 Thread Pavel Stehule
2013/10/8 Bruce Momjian br...@momjian.us

 On Tue, Oct  8, 2013 at 06:31:03PM +0200, Pavel Stehule wrote:
 
 
 
  2013/10/8 Bruce Momjian br...@momjian.us
 
  On Tue, Sep 24, 2013 at 05:36:58PM -0400, Bruce Momjian wrote:
   On Tue, Sep 17, 2013 at 10:15:47AM -0400, Robert Haas wrote:
On Sat, Sep 14, 2013 at 9:42 AM, Pavel Stehule 
 pavel.steh...@gmail.com
   wrote:
  I have developed the attached patch based on your suggestion.  I did not
  see anything in the code that would make it STABLE, except a lookup of a
  dictionary library:

  dictOid = get_ts_dict_oid(stringToQualifiedNameList("unaccent"), false);

  yes, it's a risk, but it is similar with timezones and other external data.
   
That's a catalog lookup - not a reliance on external data.
  
   Sorry, I was wrong.  Only unaccent_dict() calls get_ts_dict_oid(),
 and
   we aren't changing the signature of that function.
 
  Applied.
 
 
  nice
 
  thank you

 Do we need to update any version or anything?  I didn't think so.


I am not sure - does pg_upgrade change the flag after an upgrade without
increasing the version number?

Regards

Pavel



 --
   Bruce Momjian  br...@momjian.us  http://momjian.us
   EnterpriseDB http://enterprisedb.com

   + Everyone has their own god. +



Re: [HACKERS] segfault with contrib lo

2013-10-08 Thread Robert Haas
On Mon, Oct 7, 2013 at 12:32 PM, Marc Cousin cousinm...@gmail.com wrote:
 I was using the lo contrib a few days ago and wasn't paying attention, and
 forgot the "for each row" in the create trigger command... PostgreSQL
 segfaulted when the trigger tried to access the row's attributes.

 Please find attached a patch to check that the trigger is correctly defined
 (as in the example): a before trigger, for each row, and with a parameter
 (if the parameter was omitted, it segfaulted too). I hope I did this
 correctly.

Thanks for the patch.  Please add it to the next CommitFest.

https://commitfest.postgresql.org/action/commitfest_view/open
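For readers following along, the guards being described presumably look like
the standard trigger-manager checks, something like this sketch (it uses the
real commands/trigger.h macros, but is not necessarily Marc's patch verbatim):

    /* sketch: validation at the top of contrib/lo's trigger function */
    TriggerData *trigdata;

    if (!CALLED_AS_TRIGGER(fcinfo))             /* fired by trigger manager? */
        elog(ERROR, "lo_manage: not fired by trigger manager");

    trigdata = (TriggerData *) fcinfo->context;

    if (!TRIGGER_FIRED_FOR_ROW(trigdata->tg_event))     /* FOR EACH ROW? */
        elog(ERROR, "lo_manage: must be fired for row");
    if (!TRIGGER_FIRED_BEFORE(trigdata->tg_event))      /* BEFORE? */
        elog(ERROR, "lo_manage: must be fired before event");
    if (trigdata->tg_trigger->tgnargs != 1)     /* the column-name argument */
        elog(ERROR, "lo_manage: expected one argument, the lo column name");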

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] unaccent module - two params function should be immutable

2013-10-08 Thread Bruce Momjian
On Tue, Oct  8, 2013 at 06:38:30PM +0200, Pavel Stehule wrote:
 I am not sure - does pg_upgrade change of flag after upgrade without 
 increasing
 version number?

What happens in pg_upgrade is that the CREATE EXTENSION command is
pg_dump'ed, and run by pg_upgrade, and it then pulls from the SQL file to
create the new function signature.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] Patch: FORCE_NULL option for copy COPY in CSV mode

2013-10-08 Thread Andrew Dunstan


On 10/08/2013 12:30 PM, Robert Haas wrote:

On Tue, Oct 8, 2013 at 8:33 AM, Andrew Dunstan and...@dunslane.net wrote:

On 10/07/2013 11:34 PM, Amit Kapila wrote:

On Tue, Oct 8, 2013 at 12:55 AM, Andrew Dunstan and...@dunslane.net
wrote:

On 10/07/2013 03:06 PM, Robert Haas wrote:

Also if your use case is to treat empty strings as NULL (as per above
documentation), can't it be handled with WITH NULL AS option.
For example, something like:

postgres=# COPY testnull FROM stdin with CSV NULL AS E'';
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.

50,
\.

postgres=# select * from testnull;
 a  |  b
----+------
 50 | NULL
(1 row)

Good point.  If this patch is just implementing something that can
already be done with another syntax, we don't need it.


Isn't the point of this option to allow a *quoted* empty string to be
forced
to NULL? If so, this is not testing the same case - in fact the COPY
command
above just makes explicit the default CSV NULL setting anyway.

I am really not sure if all the purpose of patch can be achieved by
existing syntax, neither it is explained clearly.
However the proposal hasn't discussed why it's not good idea to extend
some similar syntax COPY .. NULL which is used to replace string
with NULL's?
Description of NULL says: Specifies the string that represents a null
value.
Now why can't this syntax be extended to support quoted empty string
if it's not supported currently?
I have not checked completely, If it's difficult or not possible to
support in existing syntax, then even it add's more value to introduce
new syntax.

By asking above question, I doesn't mean that we should not go for the
new proposed syntax, rather it's to know and understand the benefit of
new syntax, also it helps during CF review for reviewer's if the
proposal involves new syntax and that's discussed previously.


Quite apart from any other consideration, this suggestion is inferior to
what's proposed in that it's an all or nothing deal, while the patch allows
you to specify the behaviour very explicitly on a per column basis. I can
well imagine wanting to be able to force a quoted empty string to null for
numeric fields but not for text.

The basic principle of our CSV processing is that we don't ever turn a NULL
into something quoted and we don't ever turn something quoted into NULL.
That's what lets us round-trip test just about every combination of options.
I'm only going to be happy violating that, as this patch does, in a very
explicit and controlled way.

Andrew, are you planning to review & commit this?



Yes.

cheers

andrew





Re: [HACKERS] unaccent module - two params function should be immutable

2013-10-08 Thread Pavel Stehule
2013/10/8 Bruce Momjian br...@momjian.us

 On Tue, Oct  8, 2013 at 06:38:30PM +0200, Pavel Stehule wrote:
  I am not sure - does pg_upgrade change of flag after upgrade without
 increasing
  version number?

 What happens in pg_upgrade is that the CREATE EXTENSION command is
 pg_dump'ed, and run by pg_upgrade, and it then pulls from the SQL file to
 create the new function signature.


ok, then it is ok



 --
   Bruce Momjian  br...@momjian.us  http://momjian.us
   EnterpriseDB http://enterprisedb.com

   + Everyone has their own god. +



Re: [HACKERS] segfault with contrib lo

2013-10-08 Thread Marc Cousin
On Tuesday 08 October 2013 12:28:46 Robert Haas wrote:
 On Mon, Oct 7, 2013 at 12:32 PM, Marc Cousin cousinm...@gmail.com wrote:
  I was using the lo contrib a few days ago and wasn't paying attention, and
  forgot the for each row in the create trigger command... PostgreSQL
  segfaulted, when the trigger tried to access the row's attributes.
  
  Please find attached a patch to control that the trigger is correctly
  defined (as in the example): a before trigger, for each row, and a
  parameter (if the parameter was omitted, it segfaulted too). I hope I did
  this correctly.
 Thanks for the patch.  Please add it to the next CommitFest.
 
 https://commitfest.postgresql.org/action/commitfest_view/open
Done.




Re: [HACKERS] unaccent module - two params function should be immutable

2013-10-08 Thread Alvaro Herrera
Bruce Momjian wrote:

 Do we need to update any version or anything?  I didn't think so.

I think there should be a 1.1 version here.  That way, if somebody is
using the existing definition from the 1.0 module, they can get the new
definition by doing an extension upgrade.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] unaccent module - two params function should be immutable

2013-10-08 Thread Bruce Momjian
On Tue, Oct  8, 2013 at 02:25:25PM -0300, Alvaro Herrera wrote:
 Bruce Momjian wrote:
 
  Do we need to update any version or anything?  I didn't think so.
 
 I think there should be a 1.1 version here.  That way, if somebody is
 using the existing definition from the 1.0 module, they can get the new
 definition by doing an extension upgrade.

Uh, how would they get this new version?  By compiling 9.4 and
installing it in 9.3?

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] unaccent module - two params function should be immutable

2013-10-08 Thread Alvaro Herrera
Bruce Momjian wrote:
 On Tue, Oct  8, 2013 at 02:25:25PM -0300, Alvaro Herrera wrote:
  Bruce Momjian wrote:
  
   Do we need to update any version or anything?  I didn't think so.
  
  I think there should be a 1.1 version here.  That way, if somebody is
  using the existing definition from the 1.0 module, they can get the new
  definition by doing an extension upgrade.
 
 Uh, how would they get this new version?  By compiling 9.4 and
 installing it in 9.3?

Oh, is this only in 9.4?  Then there's no point.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Patch: FORCE_NULL option for copy COPY in CSV mode

2013-10-08 Thread Andrew Dunstan


On 10/08/2013 12:47 PM, Andrew Dunstan wrote:


On 10/08/2013 12:30 PM, Robert Haas wrote:

Andrew, are you planning to review & commit this?



Yes.




Incidentally, the patch doesn't seem to add the option to file_fdw, 
which I really think it should.


cheers

andrew





Re: [HACKERS] GIN improvements part 1: additional information

2013-10-08 Thread Heikki Linnakangas

On 04.10.2013 14:13, Alexander Korotkov wrote:

On Fri, Oct 4, 2013 at 12:28 PM, Heikki Linnakangas hlinnakan...@vmware.com

wrote:



In the attached patch, I in fact already did that for data leaf pages, but
didn't change the format of non-leaf pages yet. If we want to support
pg_upgrade, we might want to refrain from changing the non-leaf format.


In GinDataLeafPageGetPostingList* you use sizeof(ItemPointerData) without
MAXALIGN. Is it an error, or do you deliberately use 2 extra bytes on the leaf page?


I didn't even think of it. Now that I do think of it, I don't see a 
reason to use MAXALIGN there. PostingItems only require 2-byte 
alignment. It's a bit fragile and underdocumented though. It probably 
would be good to have a struct to represent that layout. Something like:


typedef struct
{
  ItemPointerData rightBound;
  PostingItem postingItems[1]; /* variable length array */
} PostingItemPageContent;

And use that struct in the macros.

Then again, we do currently use MAXALIGN there, so if we want to avoid 
changing the on-disk format, we have to keep it...


- Heikki




Re: [HACKERS] old warning in docs

2013-10-08 Thread Magnus Hagander
On Oct 8, 2013 6:27 PM, Robert Haas robertmh...@gmail.com wrote:

 On Mon, Oct 7, 2013 at 7:51 AM, Andrew Dunstan
 andrew.duns...@pgexperts.com wrote:
  Given that we have not supported releases older than 8.3 for quite a
while,
  do we need to keep this in extend.sgml any longer?

 +1 for removing that (but only in master, not the back-branches).


+1, and definitely not in back branches.

/Magnus


Re: [HACKERS] logical changeset generation v6.1

2013-10-08 Thread Steve Singer

On 10/07/2013 09:32 AM, Andres Freund wrote:

Todo:
* rename treat_as_catalog_table, after agreeing on the new name
* rename remaining timetravel function names
* restrict SuspendDecodingSnapshots usage to RelationInitPhysicalAddr,
   that ought to be enough.
* add InLogicalDecoding() function.
* throw away older data when reading xl_running_xacts records, to deal
   with immediate shutdowns/crashes


What is your current plan for decoding sequence updates?  Is this
something that you were going to hold off on supporting till a future
version? (I know this was discussed a while ago but I don't remember
where it stands now)


From a Slony point of view this isn't a big deal, I can continue to 
capture sequence changes in sl_seqlog when I create each SYNC event and 
then just replicate the INSERT statements in sl_seqlog via logical 
decoding.  I can see why someone building a replication system not based 
on the concept of a SYNC would have a harder time with this.


I am guessing we would want to pass sequence operations to the plugins 
as we encounter the WAL for them out-of-band of any transaction.   This 
would mean that a set of operations like


begin;
insert into a (id) values(4);
insert into a (id) values(nextval('some_seq'));
commit

would be replayed on the replicas as
setval('some_seq',100);
begin;
insert into a (id) values (4);
insert into a (id) values (100);
commit;






Re: [HACKERS] proposal: Set effective_cache_size to greater of .conf value, shared_buffers

2013-10-08 Thread Kevin Hale Boyes
The patch contains a small typo in config.sgml.  Probably just drop the
"is" from "is can".

+results if this database cluster is can utilize most of the memory

Kevin.


On 8 October 2013 10:13, Bruce Momjian br...@momjian.us wrote:

 On Thu, Sep  5, 2013 at 05:14:37PM -0400, Bruce Momjian wrote:
  On Thu, Sep  5, 2013 at 06:14:33PM +0200, Magnus Hagander wrote:
I have developed the attached patch which implements an auto-tuned
effective_cache_size which is 4x the size of shared buffers.  I had
 to
set effective_cache_size to its old 128MB default so the EXPLAIN
regression tests would pass unchanged.
  
   That's not really autotuning though. ISTM that making the *default* 4
   x shared_buffers might make perfect sense, but do we really need to
   hijack the value of -1 for that? That might be useful for some time
   when we have actual autotuning, that somehow inspects the system and
   tunes it from there.
  
   I also don't think it should be called autotuning, when it's just a
   smarter default value.
  
   I like the feature, though, just not the packaging.
 
  That auto-tuning text came from the wal_buffer documentation, which
  does exactly this based on shared_buffers:
 
  The contents of the WAL buffers are written out to disk at every
  transaction commit, so extremely large values are unlikely to
  provide a significant benefit.  However, setting this value to at
  least a few megabytes can improve write performance on a busy
  -- server where many clients are committing at once.  The
 auto-tuning
 
  ---
  selected by the default setting of -1 should give reasonable
  results in most cases.
 
  I am fine with rewording and not using -1, but we should change the
  wal_buffer default and documentation too then.  I am not sure what other
  value than -1 to use?  0?  I figure if we ever get better auto-tuning,
  we would just remove this functionality and make it better.

 Patch applied with a default of 4x shared buffers.  I have added a 9.4
 TODO that we might want to revisit this.

 --
   Bruce Momjian  br...@momjian.us  http://momjian.us
   EnterpriseDB http://enterprisedb.com

   + It's impossible for everything to be true. +





Re: [HACKERS] Patch for fast gin cache performance improvement

2013-10-08 Thread Ian Link



Although I asked this question, I've reconsidered about these parameters, and it
seems that these parameters not only make code rather complex but are a little
confusing to users.  So I'd like to propose to introduce only one parameter:
fast_cache_size.  While users that give weight to update performance for the
fast update technique set this parameter to a large value, users that give
weight not only to update performance but to search performance set this
parameter to a small value.  What do you think about this?
I think it makes sense to maintain this separation. If the user doesn't 
need a per-index setting, they don't have to use the parameter. Since 
the parameter is off by default, they don't even need to worry about it. 
There might as well be one parameter for users that don't need 
fine-grained control. We can document this and I don't think it will be 
confusing to the user.



4. In my understanding, the small value of gin_fast_limit/fast_cache_size leads
to the increase in GIN search performance, which, however, leads to the decrease
in GIN update performance.  Am I right?  If so, I think the tradeoff should be
noted in the documentation.

I believe this is correct.

5. The following documents in Chapter 57. GIN Indexes need to be updated:
   * 57.3.1. GIN Fast Update Technique
   * 57.4. GIN Tips and Tricks

Sure, I can add something.


6. I would like to see the results for the additional test cases (tsvectors).
I don't really have any good test cases for this available, and have 
very limited time for postgres at the moment. Feel free to create a test 
case, but I don't believe I can at the moment. Sorry!



7. The commented-out elog() code should be removed.
Sorry about that, I shouldn't have submitted the patch with those still 
there.


I should have a new patch soonish, hopefully. Thanks for your feedback!
Ian

Ian Link wrote:

8. I think there are no issues in this patch.  However, I have one question: how
this patch works in the case where gin_fast_limit/fast_cache_size = 0?  In this
case, in my understanding, this patch inserts new entries into the pending list
temporarily and immediately moves them to the main GIN data structure using
ginInsertCleanup().  Am I right?  If so, that is obviously inefficient.





Re: [HACKERS] dynamic shared memory

2013-10-08 Thread Robert Haas
On Thu, Sep 26, 2013 at 9:27 AM, Noah Misch n...@leadboat.com wrote:
  There's no data corruption problem if we proceed - but there likely
  has been one leading to the current state.

 +1 for making this one a PANIC, though.  With startup behind us, a valid dsm
 state file pointed us to a control segment with bogus contents.  The
 conditional probability of shared memory corruption seems higher than that of
 a DBA editing the dsm state file of a running cluster to incorrectly name as
 the dsm control segment some other existing shared memory segment.

To respond specifically to this point... inability to open a file on
disk does not mean that shared memory is corrupted.  Full stop.

A scenario I have seen a few times is that someone changes the
permissions on part or all of $PGDATA while the server is running.  I
have only ever seen this happen on Windows.  What typically happens
today - depending on the exact scenario - is that the checkpoints will
fail, but the server will remain up, sometimes even committing
transactions under synchronous_commit=off, even though it can't write
out its data.  If you fix the permissions before shutting down the
server, you don't even lose any data.  Making inability to read a file
into a PANIC condition will cause any such cluster to remain up only
as long as nobody tries to use dynamic shared memory, and then throw
up its guts.  I don't think users will appreciate that.

I am tempted to commit the latest version of this patch as I have it.
I think there's a lot of bikeshedding left to be done here, but
there's no real reason why we can't change this subsequent to the
initial commit as the answers become more clear.  Changing the error
levels used for particular messages, or rearranging the directory
structure, is quite trivial.  But we can't do that as long as we have
N people with >=N opinions on what to do, and the way to get more
clarity there is to get the code out in front of a few more people and
see how things shake out.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] logical changeset generation v6.1

2013-10-08 Thread Andres Freund
On 2013-10-08 15:02:39 -0400, Steve Singer wrote:
 On 10/07/2013 09:32 AM, Andres Freund wrote:
 Todo:
 * rename treat_as_catalog_table, after agreeing on the new name
 * rename remaining timetravel function names
 * restrict SuspendDecodingSnapshots usage to RelationInitPhysicalAddr,
that ought to be enough.
 * add InLogicalDecoding() function.
 * throw away older data when reading xl_running_xacts records, to deal
with immediate shutdowns/crashes
 
 What is your current plan for decoding sequence updates?  Is this something
 that you were going to hold-off on supporting till a future version? ( know
 this was discussed a while ago but I don't remember where it stands now)

I don't plan to implement it as part of this - the optimizations in
sequences make it really unsuitable for that (nontransactional, allocated
in bulk, ...).
Simon had previously posted about sequence AMs, and I have a prototype
patch that implements that concept (which needs considerable cleanup). I
plan to post about it whenever this is finished.

I think many replication solutions that care about sequences in a
nontrivial way will want to implement their own sequence logic anyway, so I
think that's not a bad path.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services




Re: [HACKERS] logical changeset generation v6.1

2013-10-08 Thread Andres Freund
On 2013-10-08 12:20:22 -0400, Robert Haas wrote:
 On Mon, Oct 7, 2013 at 9:32 AM, Andres Freund and...@2ndquadrant.com wrote:
  - RelationIsDoingTimetravel is still a crappy name.  How about
  RelationRequiredForLogicalDecoding?  And maybe the reloption
  treat_as_catalog_table can become required_for_logical_decoding.
 
  Hm. I don't really like the name, required seems to imply that it's
  necessary to turn this on to get data replicated in that relation. How
  about accessible_during_logical_decoding or user_catalog_table? The
  latter would allow us to use it to add checks for user relations used in
  indexes which need a treatment similar to enums.
 
 user_catalog_table is a pretty good description, but should we worry
 about the fact that logical replication isn't mentioned in there
 anywhere?

I personally don't worry about it, although I see why somebody could.

 In what way do you feel that it's more clear to say *accessible
 during* rather than *required for* logical decoding?

Because "required for" can easily be understood to mean that you need to set it
if you want a table's changes to be replicated. Which is not the case...

 I was trying to make the naming consistent; i.e. if we have
 RelationRequiredForLogicalDecoding then name the option to match.

Maybe this should be RelationAccessibleInLogicalDecoding() then - that
seems like a better description anyway?

  All in all, it seems to me that we shouldn't try to punt.  Maybe we
  should have something that works like ALTER TABLE name CLUSTER ON
  index_name to configure which index should be used for logical
  replication.  Possibly this same syntax could be used as ALTER
  MATERIALIZED VIEW to set the candidate key for that case.
 
  How about using the current logic by default but allow to tune it
  additionally with an option like that?
 
 I'm OK with defaulting to the primary key if there is one, but I think
 that no other candidate key should be entertained unless the user
 configures it.  I think the behavior we get without that will be just
 too weird.  We could use the same logic you're proposing here for
 CLUSTER, too, but we don't; that's because we've (IMHO, rightly)
 decided that the choice of index is too important to be left to
 chance.

I don't understand why this would be a good path. If you DELETE/UPDATE
and you don't have a primary key you get something that definitely
identifies the row with the current behaviour. It might not be the best
thing, but it sure is better than nothing. E.g. for auditing it's
probably quite sufficient to just use any of the candidate keys if
there (temporarily) is no primary key.
If you implement a replication solution and don't want that behaviour
there, you are free to guard against it there - which is a good thing
to do.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services




Re: [HACKERS] GIN improvements part 1: additional information

2013-10-08 Thread Heikki Linnakangas

On 08.10.2013 17:47, Alexander Korotkov wrote:

Hi, Tomas!

On Sun, Oct 6, 2013 at 3:58 AM, Tomas Vondra t...@fuzzy.cz wrote:


I've attempted to rerun the benchmarks tests I did a few weeks ago, but
  I got repeated crashes when loading the data (into a table with
tsvector+gin index).

Right before a crash, theres this message in the log:

PANIC:  not enough space in leaf page!



Thanks for testing. Heikki's version of the patch doesn't work for me
either, even on much simpler examples. I can try to get it working if he
answers my question about the GinDataLeafPageGetPostingList* macros.


The new macros in that patch version were quite botched. Here's a new 
attempt.


- Heikki


gin-packed-postinglists-9-heikki.patch.gz
Description: GNU Zip compressed data



Re: [HACKERS] dynamic shared memory

2013-10-08 Thread Andres Freund
On 2013-10-08 15:40:04 -0400, Robert Haas wrote:
 I am tempted to commit the latest version of this patch as I have it.

I haven't looked at the latest version of the patch, but based on the
previous version I have no problem with that.

If you'd feel more comfortable with another round of review, scanning
for things other than elevels, I can do that towards the weekend. Before
or after you've committed.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services




Re: [HACKERS] drop-index-concurrently-1 on master fails at serializable

2013-10-08 Thread Kevin Grittner
Kevin Grittner kgri...@ymail.com wrote:

 [ isolation test failed at snapshot-based isolation levels ]

Fix pushed, that looks for the right results based on isolation level.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] space reserved for WAL record does not match what was written: panic on windows

2013-10-08 Thread Andres Freund
On 2013-10-08 12:26:17 -0400, Robert Haas wrote:
 On Mon, Oct 7, 2013 at 4:47 PM, Andres Freund and...@2ndquadrant.com wrote:
  So I guess it's either a separate macro, or we rewrite that piece of
  code to work slightly differently and work directly on the length or
  such.
 
  Maybe we should add a StaticAssert ensuring the TYPEALIGN macro only
  gets passed 32bit types?
 
 I think having two macros that behave identically on all platforms
 anyone cares about[1] but which can cause difficult-to-find
 corner-case-bugs on other platforms is just a recipe for disaster.  I
 pledge to screw that up at least once.

Do you have a better alternative? Making the computation unconditionally
64bit will have a runtime overhead and adding a StaticAssert in the
existing macro doesn't work because we use it in array sizes where gcc
balks.
We could try using inline functions, but that's not going to be pretty
either.

I don't really see that many further usecases that will align 64bit
values on 32bit platforms, so I think we're ok for now.
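For what it's worth, the inline-function variant would look something like
this sketch; it gets real type checking, but as noted it can't replace the
macro in constant expressions such as array sizes:

    /*
     * Sketch only: fixing the argument type at 64 bits avoids the
     * silent truncation, at the cost of not being a constant expression.
     */
    static inline uint64
    typealign64(uint64 alignval, uint64 len)
    {
        return (len + alignval - 1) & ~(alignval - 1);
    }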

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training  Services




Re: [HACKERS] Patch: FORCE_NULL option for copy COPY in CSV mode

2013-10-08 Thread Ian Lawrence Barwick
2013/10/9 Andrew Dunstan and...@dunslane.net:

 On 10/08/2013 12:47 PM, Andrew Dunstan wrote:


 On 10/08/2013 12:30 PM, Robert Haas wrote:

 Andrew, are you planning to review & commit this?


 Yes.



 Incidentally, the patch doesn't seem to add the option to file_fdw, which I
 really think it should.

Patch author here. I'll dig out the use-case I had for this patch and have a
look at the file_fdw option, which never occurred to me. (I'm
doing some FDW-related stuff at the moment so would be more than happy
to handle that too).

(Sorry for the delay in following up on this, I kind of assumed the patch
would be squirrelled away until the next commitfest ;) )

Regards

Ian Barwick




Re: [HACKERS] proposal: Set effective_cache_size to greater of .conf value, shared_buffers

2013-10-08 Thread Bruce Momjian
On Tue, Oct  8, 2013 at 01:04:18PM -0600, Kevin Hale Boyes wrote:
 The patch contains a small typo in config.sgml.  Probably just drop the
 "is" from "is can".
 
 +results if this database cluster is can utilize most of the memory
 
 Kevin.

Thank you, fixed.

-- 
  Bruce Momjian  br...@momjian.us  http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] GIN improvements part 1: additional information

2013-10-08 Thread Tomas Vondra
On 8.10.2013 21:59, Heikki Linnakangas wrote:
 On 08.10.2013 17:47, Alexander Korotkov wrote:
 Hi, Tomas!

 On Sun, Oct 6, 2013 at 3:58 AM, Tomas Vondra t...@fuzzy.cz wrote:

 I've attempted to rerun the benchmarks tests I did a few weeks ago, but
   I got repeated crashes when loading the data (into a table with
 tsvector+gin index).

 Right before a crash, theres this message in the log:

 PANIC:  not enough space in leaf page!


 Thanks for testing. Heikki's version of patch don't works for me too on
 even much more simplier examples. I can try to get it working if he
 answer
 my question about GinDataLeafPageGetPostingList* macros.
 
 The new macros in that patch version were quite botched. Here's a new
 attempt.

Nope, still the same errors :-(

PANIC:  not enough space in leaf page!
LOG:  server process (PID 29722) was terminated by signal 6: Aborted
DETAIL:  Failed process was running: autovacuum: ANALYZE public.messages

regards
Tomas




Re: [HACKERS] dynamic shared memory

2013-10-08 Thread Noah Misch
On Tue, Oct 08, 2013 at 03:40:04PM -0400, Robert Haas wrote:
 On Thu, Sep 26, 2013 at 9:27 AM, Noah Misch n...@leadboat.com wrote:

   There's no data corruption problem if we proceed - but there likely
   has been one leading to the current state.
 
  +1 for making this one a PANIC, though.  With startup behind us, a valid dsm
  state file pointed us to a control segment with bogus contents.  The
  conditional probability of shared memory corruption seems higher than that of
  a DBA editing the dsm state file of a running cluster to incorrectly name as
  the dsm control segment some other existing shared memory segment.
 
 To respond specifically to this point... inability to open a file on
 disk does not mean that shared memory is corrupted.  Full stop.
 
 A scenario I have seen a few times is that someone changes the
 permissions on part or all of $PGDATA while the server is running.

I was discussing the third ereport() in dsm_backend_startup(), which does not
pertain to inability to open a file.  The second ereport() would fire in the
damaged-permissions scenario, and I fully agree with that one using ERROR.

Incidentally, dsm_backend_startup() has a typo: s/one/none/

 I am tempted to commit the latest version of this patch as I have it.

Works for me.

Thanks,
nm

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




Re: [HACKERS] Patch: FORCE_NULL option for copy COPY in CSV mode

2013-10-08 Thread Amit Kapila
On Tue, Oct 8, 2013 at 6:03 PM, Andrew Dunstan and...@dunslane.net wrote:

 On 10/07/2013 11:34 PM, Amit Kapila wrote:

 On Tue, Oct 8, 2013 at 12:55 AM, Andrew Dunstan and...@dunslane.net
 wrote:

 On 10/07/2013 03:06 PM, Robert Haas wrote:


 Also if your use case is to treat empty strings as NULL (as per above
 documentation), can't it be handled with WITH NULL AS option.
 For example, something like:

 postgres=# COPY testnull FROM stdin with CSV NULL AS E'';
 Enter data to be copied followed by a newline.
 End with a backslash and a period on a line by itself.

 50,
 \.

 postgres=# select * from testnull;
  a  |  b
 ----+------
  50 | NULL
 (1 row)

 Good point.  If this patch is just implementing something that can
 already be done with another syntax, we don't need it.


 Isn't the point of this option to allow a *quoted* empty string to be
 forced
 to NULL? If so, this is not testing the same case - in fact the COPY
 command
 above just makes explicit the default CSV NULL setting anyway.

 I am really not sure if all the purpose of patch can be achieved by
 existing syntax, neither it is explained clearly.
 However the proposal hasn't discussed why it's not good idea to extend
 some similar syntax COPY .. NULL which is used to replace string
 with NULL's?
 Description of NULL says: Specifies the string that represents a null
 value.
 Now why can't this syntax be extended to support quoted empty string
 if it's not supported currently?
 I have not checked completely, If it's difficult or not possible to
 support in existing syntax, then even it add's more value to introduce
 new syntax.

 By asking above question, I doesn't mean that we should not go for the
 new proposed syntax, rather it's to know and understand the benefit of
 new syntax, also it helps during CF review for reviewer's if the
 proposal involves new syntax and that's discussed previously.


 Quite apart from any other consideration, this suggestion is inferior to
 what's proposed in that it's an all or nothing deal, while the patch allows
 you to specify the behaviour very explicitly on a per column basis. I can
 well imagine wanting to be able to force a quoted empty string to null for
 numeric fields but not for text.

 The basic principle of our CSV processing is that we don't ever turn a NULL
 into something quoted and we don't ever turn something quoted into NULL.
 That's what lets us round-trip test just about every combination of options.
 I'm only going to be happy violating that, as this patch does, in a very
 explicit and controlled way.

Will this option allow only a quoted empty string to be NULL, or will it
handle an unquoted empty string as well?
It seems from documentation that current option FORCE_NOT_NULL handles
for both (Do not match the specified columns' values against the null
string. In the default case where the null string is empty, this means
that empty values will be read as zero-length strings rather than
nulls, even when they are not quoted.).

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




[HACKERS] Typo in 9.2.5 release note item?

2013-10-08 Thread Amit Langote
Hi,

In the release notes for 9.2.5,

Should the following:

Fix rare GROUP BY query error caused by improperly processed date
type modifiers (Tom Lane)

be:

Fix rare GROUP BY query error caused by improperly processed *data*
type modifiers (Tom Lane)

Assuming the change has to do with the following commit:

http://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=bd5ab4b28745605493ab7061724ba0375ee9593a


--
Amit Langote




Re: [HACKERS] space reserved for WAL record does not match what was written: panic on windows

2013-10-08 Thread David Rowley
On Wed, Oct 9, 2013 at 5:26 AM, Robert Haas robertmh...@gmail.com wrote:

 On Mon, Oct 7, 2013 at 4:47 PM, Andres Freund and...@2ndquadrant.com
 wrote:
  On 2013-10-07 13:25:17 -0400, Robert Haas wrote:
  On Fri, Oct 4, 2013 at 8:19 AM, Andres Freund and...@2ndquadrant.com
 wrote:
   Could it be that MAXALIGN/TYPEALIGN doesn't really work for values
   bigger than 32bit?
  
   #define MAXALIGN(LEN)   TYPEALIGN(MAXIMUM_ALIGNOF, (LEN))
   #define TYPEALIGN(ALIGNVAL,LEN)  \
   (((intptr_t) (LEN) + ((ALIGNVAL) - 1)) & ~((intptr_t) ((ALIGNVAL) - 1)))
 
  Isn't the problem, more specifically, that it doesn't work for values
  larger than an intptr_t?
 
  Well, yes. And intptr_t is 32bit wide on a 32bit platform.
 
  And does that indicate that intptr_t is the wrong type to be using here?
 
  No, I don't think so. intptr_t is defined to be an integer type to which
  you can cast a pointer, cast it back and still get the old value. On
  32bit platforms it usually will be 32bit wide.
  All that's fine for the classic usages of TYPEALIGN where it's used on
  pointers or lengths of stuff stored in memory. Those will always fit in
  32bit on a 32bit platform. But here we're using it on explicit 64bit
  types (XLogRecPtr).
  Now, you could argue that we should make it use 64bit math everywhere -
  but I think that might incur quite the price on some 32bit
  platforms. It's used in the tuple decoding stuff, which is quite the hot
  path in some workloads.
 
  So I guess it's either a separate macro, or we rewrite that piece of
  code to work slightly differently and work directly on the length or
  such.
 
  Maybe we should add a StaticAssert ensuring the TYPEALIGN macro only
  gets passed 32bit types?

 I think having two macros that behave identically on all platforms
 anyone cares about[1] but which can cause difficult-to-find
 corner-case bugs on other platforms is just a recipe for disaster.  I
 pledge to screw that up at least once.


The only improvement I thought of while writing the patch was to rename
MAXALIGN to something more related to RAM, perhaps RAMMAXALIGN or
MEMORYMAXALIGN, so that callers might think twice before using it for
anything apart from memory addresses. I didn't come up with a name I
thought was good enough to warrant the change, so I left it as it was.

I also don't think it is perfect that both the macros do the same thing on
a 64bit compilation, but I think I was in the same boat as you... I
couldn't think of anything better.

If you can think of a name that will confuse users less, then maybe it's
worth making the change.

David


 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company

 [1] And by anyone, I mean me.
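
To make the failure mode under discussion concrete, a minimal sketch of
the overflow, assuming an ILP32 target where intptr_t is 32 bits;
TYPEALIGN64 here is a hypothetical 64-bit-safe variant along the lines
of the separate-macro idea, not the committed code:

    #include <stdint.h>
    #include <stdio.h>

    /* The existing macro: fine for pointers and in-memory lengths, but the
     * intptr_t cast narrows a 64-bit LEN on a 32-bit platform. */
    #define TYPEALIGN(ALIGNVAL,LEN)  \
        (((intptr_t) (LEN) + ((ALIGNVAL) - 1)) & ~((intptr_t) ((ALIGNVAL) - 1)))

    /* Hypothetical 64-bit-safe variant, sketched for illustration. */
    #define TYPEALIGN64(ALIGNVAL,LEN)  \
        (((uint64_t) (LEN) + ((ALIGNVAL) - 1)) & ~((uint64_t) ((ALIGNVAL) - 1)))

    int main(void)
    {
        uint64_t recptr = UINT64_C(0x100000001);  /* an XLogRecPtr-like value */

        /* On ILP32, (intptr_t) recptr truncates to 1, so TYPEALIGN yields
         * 0x8 instead of 0x100000008 -- the mismatch behind the panic. */
        printf("TYPEALIGN:   %llx\n", (unsigned long long) TYPEALIGN(8, recptr));
        printf("TYPEALIGN64: %llx\n", (unsigned long long) TYPEALIGN64(8, recptr));
        return 0;
    }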



[HACKERS] Re: docs: clarify references to md5 hash and md5 crypt in pgcrypto docs

2013-10-08 Thread Peter Eisentraut
Where did this patch come from?

I think part of that patch should be reverted, but I can't find the
original submission for the rationale.






Re: [HACKERS] [PATCH] Revive line type

2013-10-08 Thread Peter Eisentraut
On Thu, 2013-10-03 at 17:50 +0530, Jeevan Chalke wrote:
 Will you please attach a new patch with the above block removed? Then I
 will quickly check the new patch and mark it as Ready for Committer.
 
Here you go.
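
Before the patch itself, a small usage sketch of the revived type under
the new representation ({A,B,C} encodes Ax + By + C = 0; outputs assume
the attached patch):

    SELECT line '{1,-1,0}';                      -- the line x - y = 0
    SELECT line '[(0,0),(1,1)]';                 -- same line via two points: {1,-1,0}
    SELECT line(point '(-1,0)', point '(1,0)');  -- the x axis: {0,-1,0}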

From ba421c778cc3f7c32886ac038389cfbad3c0df67 Mon Sep 17 00:00:00 2001
From: Peter Eisentraut pete...@gmx.net
Date: Wed, 9 Oct 2013 01:09:18 -0400
Subject: [PATCH v2] Revive line type
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Change the input/output format to {A,B,C}, to match the internal
representation.

Complete the implementations of line_in, line_out, line_recv, line_send.
Remove comments and error messages about the line type not being
implemented.  Add regression tests for existing line operators and
functions.

Reviewed-by: rui hua 365507506...@gmail.com
Reviewed-by: Álvaro Herrera alvhe...@2ndquadrant.com
Reviewed-by: Jeevan Chalke jeevan.cha...@enterprisedb.com
---
 doc/src/sgml/datatype.sgml |  42 -
 doc/src/sgml/func.sgml |   6 +
 src/backend/utils/adt/geo_ops.c| 224 ++--
 src/include/catalog/pg_type.h  |   3 +-
 src/include/utils/geo_decls.h  |   7 -
 src/test/regress/expected/geometry.out |   3 -
 src/test/regress/expected/line.out | 271 +
 src/test/regress/expected/sanity_check.out |   3 +-
 src/test/regress/output/misc.source|   3 +-
 src/test/regress/parallel_schedule |   2 +-
 src/test/regress/serial_schedule   |   1 +
 src/test/regress/sql/geometry.sql  |   4 -
 src/test/regress/sql/line.sql  |  87 +
 13 files changed, 503 insertions(+), 153 deletions(-)
 create mode 100644 src/test/regress/expected/line.out
 create mode 100644 src/test/regress/sql/line.sql

diff --git a/doc/src/sgml/datatype.sgml b/doc/src/sgml/datatype.sgml
index 87668ea..07f0385 100644
--- a/doc/src/sgml/datatype.sgml
+++ b/doc/src/sgml/datatype.sgml
@@ -3051,9 +3051,7 @@ titleGeometric Types/title
para
 Geometric data types represent two-dimensional spatial
 objects. xref linkend=datatype-geo-table shows the geometric
-types available in productnamePostgreSQL/productname.  The
-most fundamental type, the point, forms the basis for all of the
-other types.
+types available in productnamePostgreSQL/productname.
/para
 
 table id=datatype-geo-table
@@ -3063,8 +3061,8 @@ titleGeometric Types/title
row
 entryName/entry
 entryStorage Size/entry
-entryRepresentation/entry
 entryDescription/entry
+entryRepresentation/entry
/row
   /thead
   tbody
@@ -3077,8 +3075,8 @@ titleGeometric Types/title
row
 entrytypeline/type/entry
 entry32 bytes/entry
-entryInfinite line (not fully implemented)/entry
-entry((x1,y1),(x2,y2))/entry
+entryInfinite line/entry
+entry{A,B,C}/entry
/row
row
 entrytypelseg/type/entry
@@ -3153,6 +3151,38 @@ titlePoints/title
/sect2
 
sect2
+titleLines/title
+
+indexterm
+ primaryline/primary
+/indexterm
+
+para
+ Lines (typeline/type) are represented by the linear equation Ax + By
+ + C = 0, where A and B are not both zero.  Values of
+ type typeline/type are input and output in the following form:
+synopsis
+{ replaceableA/replaceable, replaceableB/replaceable, replaceableC/replaceable }
+/synopsis
+
+ Alternatively, any of the following forms can be used for input:
+
+synopsis
+[ ( replaceablex1/replaceable , replaceabley1/replaceable ) , ( replaceablex2/replaceable , replaceabley2/replaceable ) ]
+( ( replaceablex1/replaceable , replaceabley1/replaceable ) , ( replaceablex2/replaceable , replaceabley2/replaceable ) )
+  ( replaceablex1/replaceable , replaceabley1/replaceable ) , ( replaceablex2/replaceable , replaceabley2/replaceable )
+replaceablex1/replaceable , replaceabley1/replaceable   ,   replaceablex2/replaceable , replaceabley2/replaceable
+/synopsis
+
+ where
+ literal(replaceablex1/replaceable,replaceabley1/replaceable)/literal
+ and
+ literal(replaceablex2/replaceable,replaceabley2/replaceable)/literal
+ are two (different) points on the line.
+/para
+   /sect2
+
+   sect2
 titleLine Segments/title
 
 indexterm
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 7dd1ef2..b162618 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -8123,6 +8123,12 @@ titleGeometric Type Conversion Functions/title
 entryliteralcircle(polygon '((0,0),(1,1),(2,0))')/literal/entry
/row
row
+entryliteralfunctionline(typepoint/type, typepoint/type)/function/literal/entry
+entrytypeline/type/entry
+entrypoints to line/entry
+entryliteralline(point '(-1,0)', point '(1,0)')/literal/entry
+   /row
+   row
 entry