Re: [HACKERS] POLA violation with \c service=

2015-01-10 Thread Erik Rijkers
On Fri, January 9, 2015 20:15, David Fetter wrote:
 [psql_fix_uri_service_003.patch]

Applies on master; the feature (switching services) works well but a \c without 
any parameters produces a segfault:

(CentOS 6.6, gcc 4.9.2, 64-bit)


$ echo -en "$PGSERVICEFILE\n$PGSERVICE\n$PGPORT\n"
/home/aardvark/.pg_service
service_pola
6968

$ psql
Timing is on.
psql (9.5devel_service_pola_20150109_2340_ac7009abd228)
Type "help" for help.

testdb=# \c service=HEAD
You are now connected to database testdb as user aardvark via socket in 
/tmp at port 6545.
testdb=# \c service=service_pola
You are now connected to database testdb as user aardvark via socket in 
/tmp at port 6968.
testdb=# \c
Segmentation fault (core dumped)


or, under gdb:


(gdb) run
Starting program: 
/home/aardvark/pg_stuff/pg_installations/pgsql.service_pola/bin/psql
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Timing is on.
psql (9.5devel_service_pola_20150109_2340_ac7009abd228)
Type "help" for help.

testdb=# \c

Program received signal SIGSEGV, Segmentation fault.
0x0040b1f5 in uri_prefix_length (connstr=0x0) at common.c:1785
1785		if (strncmp(connstr, uri_designator,
(gdb) bt
#0  0x0040b1f5 in uri_prefix_length (connstr=0x0) at common.c:1785
#1  recognized_connection_string (connstr=connstr@entry=0x0) at common.c:1805
#2  0x00406592 in do_connect (port=0x676960 "6968", host=0x676940 
"/tmp", user=0x676900 "aardvark", dbname=0x0) at
command.c:1643
#3  exec_command (query_buf=0x678150, scan_state=0x677fc0, cmd=0x68f730 "c") at 
command.c:247
#4  HandleSlashCmds (scan_state=scan_state@entry=0x677fc0, query_buf=0x678150) 
at command.c:112
#5  0x00411283 in MainLoop (source=0x32d4b8e6c0 <_IO_2_1_stdin_>) at 
mainloop.c:335
#6  0x00413b4e in main (argc=<optimized out>, argv=0x7fffd9f8) at 
startup.c:340
(gdb)

I hope that helps to pinpoint the problem.
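For context, a service file driving the session above would look something like this (the entries are reconstructed from the connection output shown, so treat the exact values as assumptions):

```
# /home/aardvark/.pg_service  (pointed to by PGSERVICEFILE)
[HEAD]
host=/tmp
port=6545
dbname=testdb

[service_pola]
host=/tmp
port=6968
dbname=testdb
```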


thanks,


Erik Rijkers

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Compression of full-page-writes

2015-01-10 Thread Michael Paquier
On Fri, Jan 9, 2015 at 9:49 PM, Rahila Syed rahilasyed...@gmail.com wrote:
So this test can be used to evaluate how shorter records influence
performance since the master waits for flush confirmation from the
standby, right?

 Yes. This test can help measure performance improvement due to reduced I/O
 on standby as master waits for WAL records flush on standby.
It may be interesting to run such tests with more concurrent
connections at the same time, like 32 or 64.
-- 
Michael




Re: [HACKERS] Fixing memory leak in pg_upgrade

2015-01-10 Thread Michael Paquier
On Sat, Jan 10, 2015 at 2:27 AM, Bruce Momjian br...@momjian.us wrote:
 so some memory is allocated, and has to be freed.  I looked at avoiding
 the call to gen_db_file_maps() for old_db->rel_arr.nrels == 0, but there
 are checks in there comparing the old/new relation counts, so it can't
 be skipped.  I also removed the unnecessary memory initialization.
Patch is fine IMO for its purpose.
-- 
Michael




Re: [HACKERS] POLA violation with \c service=

2015-01-10 Thread Andrew Dunstan


On 01/09/2015 02:15 PM, David Fetter wrote:

Some C cleanups...






Not quite enough cleanup. As I told you on IRC, the only addition to 
common.h should be the declaration of recognized_connection_string. 
These do not belong there (they belong in common.c):


   +static const char uri_designator[] = "postgresql://";
   +static const char short_uri_designator[] = "postgres://";

These declarations in common.h would cause a separate instance of these 
pieces of storage to occur in every object file where the .h file had 
been #included. In general, you should not expect to see any static 
declarations in .h files.
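A rough sketch of the split Andrew is describing, with the file boundaries shown as comments (the real common.h/common.c differ; this is only a single-file approximation):

```c
#include <stdbool.h>
#include <string.h>

/*
 * What belongs in common.h: only the declaration, so every object file
 * refers to the single definition that lives in common.c.
 */
extern bool recognized_connection_string(const char *connstr);

/*
 * What belongs in common.c: the storage itself.  "static" keeps the
 * arrays private to this translation unit; placing them in the header
 * instead would stamp a separate copy into every file including it.
 */
static const char uri_designator[] = "postgresql://";
static const char short_uri_designator[] = "postgres://";

bool
recognized_connection_string(const char *connstr)
{
	return strncmp(connstr, uri_designator,
				   strlen(uri_designator)) == 0 ||
		   strncmp(connstr, short_uri_designator,
				   strlen(short_uri_designator)) == 0;
}
```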


In addition, you need to ensure that recognized_connection_string() is 
not called with a NULL argument.
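One way the crash could be avoided, sketched as a standalone approximation (the function names follow the backtrace upthread, but the real psql code path differs; the early-return guard is the suggested fix, not the committed one):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static const char uri_designator[] = "postgresql://";
static const char short_uri_designator[] = "postgres://";

/* Length of the recognized URI prefix, or 0 if there is none. */
int
uri_prefix_length(const char *connstr)
{
	if (strncmp(connstr, uri_designator, strlen(uri_designator)) == 0)
		return (int) strlen(uri_designator);
	if (strncmp(connstr, short_uri_designator,
				strlen(short_uri_designator)) == 0)
		return (int) strlen(short_uri_designator);
	return 0;
}

bool
recognized_connection_string(const char *connstr)
{
	/* A bare \c hands us dbname == NULL; bail out before strncmp(). */
	if (connstr == NULL)
		return false;
	return uri_prefix_length(connstr) != 0;
}
```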



cheers

andrew





Re: [HACKERS] POLA violation with \c service=

2015-01-10 Thread Andrew Dunstan


On 01/10/2015 09:32 AM, Andres Freund wrote:

On 2015-01-10 09:16:07 -0500, Andrew Dunstan wrote:

+static const char uri_designator[] = "postgresql://";
+static const char short_uri_designator[] = "postgres://";

These declarations in common.h would cause a separate instance of these
pieces of storage to occur in every object file where the .h file had been
#included. In general, you should not expect to see any static declarations
in .h files.

Save static inline functions, that is.





Yeah, but not normally data items. (I did say "in general"). As a 
general rule for novice C programmers I think my rule of thumb is 
reasonable.
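The exception being discussed can be illustrated with a function rather than a data item; this is tolerable in a header precisely because each translation unit gets a duplicate of identical code, not a distinct piece of storage (the function name here is invented for illustration):

```c
#include <string.h>

/*
 * Acceptable in a .h file: a static inline function duplicates only
 * code per translation unit, and the compiler typically inlines it
 * away entirely, unlike a static data array, which would yield a
 * separate object in every including file.
 */
static inline int
starts_with(const char *s, const char *prefix)
{
	return strncmp(s, prefix, strlen(prefix)) == 0;
}
```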


cheers

andrew




Re: [HACKERS] Escaping from blocked send() reprised.

2015-01-10 Thread Andres Freund
On 2014-09-04 08:49:22 -0400, Robert Haas wrote:
 On Tue, Sep 2, 2014 at 3:01 PM, Andres Freund and...@2ndquadrant.com wrote:
  I'm slightly worried about the added overhead due to the latch code. In
  my implementation I only use latches after a nonblocking read, but
  still. Every WaitLatchOrSocket() does a drainSelfPipe(). I wonder if
  that can be made problematic.
 
 I think that's not the word you're looking for.

There's a less missing...

 At some point I hacked up a very crude prototype that made LWLocks use
 latches to sleep instead of semaphores.  It was slow.

Interesting. I dimly remembered you mentioning this, that's how I
rediscovered this message.

Do you remember any details?

My guess that's not so much the overhead of the latch itself, but the
lack of the directed wakeup stuff the OS provides for semaphores.


If we could replace all usages of semaphores that set
ImmediateInterruptOK, we could quite easily make the deadlock detector
et al. run outside of signal handlers. That would imo make it more
robust, and easier to understand - right now the correctness of locking
done in the deadlock detector isn't obvious.  With the infrastructure in
place it'd also allow your new parallelism code to run outside of signal
handlers.

Unfortunately currently semaphores can't be unlocked in a signal handler
(as sysv semaphores aren't signal safe)... It'd also be not so nice to
set both a latch and semaphores in every signal handler.


 AIUI, the only reason why we need the self-pipe thing is because on
 some platforms signals don't interrupt system calls.  But my
 impression was that those platforms were somewhat obscure.

To the contrary, I think it's only very obscure platforms where signals
still interrupt syscalls - we set SA_RESTART for pretty much
everything. There's a couple of system calls that ignore SA_RESTART. For
some that's defined in posix, for others it's operating system
specific. E.g. on linux semop(), poll(), select() are defined to always
return EINTR when interrupted.

Anyway, the discussion since cleared up that we need the self-pipe byte
to handle a race anyway.
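For readers unfamiliar with the self-pipe byte under discussion: the signal handler writes one byte into a pipe that the wait loop also polls, which both wakes the poll and closes the race where a signal arrives just before blocking. A stripped-down sketch follows; the real latch code in src/backend/port/unix_latch.c is considerably more careful about error handling and partial writes:

```c
#include <fcntl.h>
#include <poll.h>
#include <signal.h>
#include <unistd.h>

static int	self_pipe[2];

static void
wakeup_handler(int sig)
{
	(void) sig;
	/* async-signal-safe: just make the read end readable */
	(void) write(self_pipe[1], "x", 1);
}

int
init_self_pipe(void)
{
	if (pipe(self_pipe) != 0)
		return -1;
	fcntl(self_pipe[0], F_SETFL, O_NONBLOCK);
	fcntl(self_pipe[1], F_SETFL, O_NONBLOCK);
	signal(SIGUSR1, wakeup_handler);
	return 0;
}

/*
 * Wait until the "latch" is set (a signal has arrived) or timeout_ms
 * elapses.  Returns 1 if woken by the signal, 0 on timeout.
 */
int
wait_for_latch(int timeout_ms)
{
	struct pollfd pfd = {self_pipe[0], POLLIN, 0};

	if (poll(&pfd, 1, timeout_ms) > 0)
	{
		char		buf[16];

		/* drainSelfPipe(): consume every pending wakeup byte */
		while (read(self_pipe[0], buf, sizeof(buf)) > 0)
			;
		return 1;
	}
	return 0;
}
```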

 Basically, it doesn't feel like a good thing that we've got two sets
 of primitives for making a backend wait that (1) don't really know
 about each other and (2) use different operating system primitives.
 Presumably one of the two systems is better; let's figure out which
 one it is, use that one all the time, and get rid of the other one.

I think the latch interface is clearly better for what we use
sema/latches for as it allows to wait for signals (latch sets), a socket
and timeouts. So let's try to figure out how to make it perform
comparably or better than semaphores.

There's imo only one semaphore user that can't trivially be replaced by
latches: the semaphore spinlock emulation. Both proc.c and lwlock.c
can be converted quite easily - in the latter case, it might actually
end up saving some code.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] Transactions involving multiple postgres foreign servers

2015-01-10 Thread Michael Paquier
On Sat, Jan 10, 2015 at 9:02 AM, Jim Nasby jim.na...@bluetreble.com wrote:
 On 1/8/15, 12:00 PM, Kevin Grittner wrote:
 The key point is that the distributed transaction data must be
 flagged as needing to commit rather than roll back between the
 prepare phase and the final commit.  If you try to avoid the
 PREPARE, flagging, COMMIT PREPARED sequence by building the
 flagging of the distributed transaction metadata into the COMMIT
 process, you still have the problem of what to do on crash
 recovery.  You really need to use 2PC to keep that clean, I think.
Yes, 2PC is needed as long as more than 2 nodes perform write
operations within a transaction.

 If we had an independent transaction coordinator then I agree with you
 Kevin. I think Robert is proposing that if we are controlling one of the
 nodes that's participating as well as coordinating the overall transaction
 that we can take some shortcuts. AIUI a PREPARE means you are completely
 ready to commit. In essence you're just waiting to write and fsync the
 commit message. That is in fact the state that a coordinating PG node would
 be in by the time everyone else has done their prepare. So from that
 standpoint we're OK.

 Now, as soon as ANY of the nodes commit, our coordinating node MUST be able
 to commit as well! That would require it to have a real prepared transaction
 of its own created. However, as long as there is zero chance of any other
 prepared transactions committing before our local transaction, that step
 isn't actually needed. Our local transaction will either commit or abort,
 and that will determine what needs to happen on all other nodes.

It is a property of 2PC to ensure that a prepared transaction will
commit. Now, once it is confirmed on the coordinator that all the
remote nodes have successfully PREPAREd, the coordinator issues COMMIT
PREPARED to each node. What do you do if some nodes report ABORT
PREPARED while other nodes report COMMIT PREPARED? Do you abort the
transaction on coordinator, commit it or FATAL? This leaves the cluster
in an inconsistent state, meaning that some consistent cluster-wide
recovery point is needed as well (Postgres-XC and XL have introduced
the concept of barriers for such problems, stuff created first by
Pavan Deolasee).
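The invariant Michael is pointing at — once every participant has acknowledged PREPARE TRANSACTION, COMMIT PREPARED must be driven to completion everywhere, and a mixed commit/abort outcome is not a legal state — can be modeled as a toy state machine (illustrative only; a real coordinator must also durably record its commit/abort decision for crash recovery):

```c
#include <stdbool.h>

typedef enum
{
	NODE_IDLE,
	NODE_PREPARED,
	NODE_COMMITTED,
	NODE_ABORTED
} NodeState;

/*
 * Phase 1: PREPARE TRANSACTION on every node.  If any node fails to
 * prepare, all nodes are rolled back (ROLLBACK PREPARED where needed).
 * Phase 2: entered only once all nodes have prepared; COMMIT PREPARED
 * is then issued to every node, because a prepared transaction has
 * promised that its commit cannot fail.
 */
bool
two_phase_commit(NodeState *nodes, const bool *prepare_ok, int n)
{
	for (int i = 0; i < n; i++)
	{
		if (!prepare_ok[i])
		{
			for (int j = 0; j < n; j++)
				nodes[j] = NODE_ABORTED;	/* abort / ROLLBACK PREPARED */
			return false;
		}
		nodes[i] = NODE_PREPARED;
	}
	for (int i = 0; i < n; i++)
		nodes[i] = NODE_COMMITTED;			/* COMMIT PREPARED, no takebacks */
	return true;
}
```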
-- 
Michael




Re: [HACKERS] POLA violation with \c service=

2015-01-10 Thread Andres Freund
On 2015-01-10 09:49:52 -0500, Andrew Dunstan wrote:
 Save static inline functions, that is.

 Yeah, but not normally data items. (I did say "in general"). As a general
 rule for novice C programmers I think my rule of thumb is reasonable.

Agreed. I just tried to preempt somebody grepping for "static" in
src/include and crying foul ;)

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE and RLS

2015-01-10 Thread Stephen Frost
Dean,

* Dean Rasheed (dean.a.rash...@gmail.com) wrote:
 On 9 January 2015 at 20:26, Stephen Frost sfr...@snowman.net wrote:
  Where this leaves me, at least, is feeling like we should always apply
  the INSERT WITH CHECK policy, then if there is a conflict, check the
  UPDATE USING policy and throw an error if the row isn't visible but
  otherwise perform the UPDATE and then check the UPDATE WITH CHECK
  policy.
 
 Yeah, I can see the logic in that, but I'm starting to question the
 final "then" in the above, i.e., the order in which the checks are
 done.
 
 Currently we're applying RLS CHECKs after the INSERT or UPDATE, like
 WITH CHECK OPTIONs on views. The SQL spec says that WITH CHECK OPTIONs
 on views have to be applied after the INSERT/UPDATE on the base
 relation, but we're free to do something different for RLS CHECKs if
 that makes more sense. If we want RLS to be more like column-level
 privilege checking, then it does make sense to do the checks sooner,
 so perhaps we should be checking the RLS policies before the
 INSERT/UPDATE, like CHECK constraints.

I've been wondering about the placement of the checks also.  I keep
meaning to go back to the SQL spec and review the WITH CHECK requirement
as it feels a bit odd to me, but then again, it is the spec.  As you
say, we're not obligated to follow it for RLS in any case.

 Doing that, it makes sense to first check INSERT CHECK policies,
 potentially throwing an error before any PK conflict is detected, so
 an error would result even in the INSERT .. ON CONFLICT IGNORE case.
 That way INSERT CHECK policies would always be applied regardless of
 the path taken, as you suggest.

Right.

 I would still say that the RLS UPDATE USING/CHECK policies should only
 be applied if the UPDATE path is taken (before the UPDATE is applied),
 and they need to be applied to the (old/new) update tuples only. It's
 a bit like there is a "WHERE record already exists" clause attached to
 the UPDATE. The RLS UPDATE policies should only be applied to the rows
 affected by the UPDATE.

Agreed.  I don't see how we could possibly do otherwise.  The only tuple
we have at the outset is the to-be-INSERT'd tuple.  We won't find the
conflicting tuple until we try and even then that conflicting tuple
might be different from the to-be-INSERT'd tuple, so we can't check that
until we've discovered the conflict exists.  Further, we won't have the
tuple to check for the UPDATE's CHECK policy until all of the
expressions (if any) are run on the UPDATE side.

  I see your point that this runs counter to the 'mod_count'
  example use-case and could cause problems for users using RLS with such
  a strategy.  For my part, I expect users of RLS to expect errors in such
  a case rather than allowing it, but it's certainly a judgement call.
 
 Actually I don't think this is a problem for the mod_count example.
 This isn't the same as AND'ing together the INSERT and UPDATE quals,
 because each set of policies is being applied to a different tuple.
 There are 3 tuples in play here:-
 
 1). The insert tuple, which has INSERT CHECK policies applied to it
 2). The existing (old) update tuple, which has UPDATE USING policies
 applied to it
 3). The (new) update tuple, which has UPDATE CHECK policies applied to it

Right.

 NOTE: If we do change RLS CHECKs to be executed prior to modifying any
 data, that's potentially a change that could be made independently of
 the UPSERT patch. We should also probably then stop referring to them
 as WITH CHECK OPTIONs in the docs and in error messages because
 they're not the same thing, and the code might be neater if they had
 their own data-structure rather than overloading WithCheckOption.

I agree that's an independent change.  I'm not sure if it would end up
being neater or not offhand but I can see some advantages to breaking it
up from WithCheck as it might be simpler for users to understand
(particularly those users who are not already familiar with WithCheck).
Were you thinking about working up a patch for such a change?  If not,
I'll see about finding time to do it, unless someone else wants to
volunteer. :)

Thanks!

Stephen




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE and RLS

2015-01-10 Thread Dean Rasheed
On 9 January 2015 at 20:26, Stephen Frost sfr...@snowman.net wrote:
 Where this leaves me, at least, is feeling like we should always apply
 the INSERT WITH CHECK policy, then if there is a conflict, check the
 UPDATE USING policy and throw an error if the row isn't visible but
 otherwise perform the UPDATE and then check the UPDATE WITH CHECK
 policy.

Yeah, I can see the logic in that, but I'm starting to question the
final "then" in the above, i.e., the order in which the checks are
done.

Currently we're applying RLS CHECKs after the INSERT or UPDATE, like
WITH CHECK OPTIONs on views. The SQL spec says that WITH CHECK OPTIONs
on views have to be applied after the INSERT/UPDATE on the base
relation, but we're free to do something different for RLS CHECKs if
that makes more sense. If we want RLS to be more like column-level
privilege checking, then it does make sense to do the checks sooner,
so perhaps we should be checking the RLS policies before the
INSERT/UPDATE, like CHECK constraints.

Doing that, it makes sense to first check INSERT CHECK policies,
potentially throwing an error before any PK conflict is detected, so
an error would result even in the INSERT .. ON CONFLICT IGNORE case.
That way INSERT CHECK policies would always be applied regardless of
the path taken, as you suggest.

I would still say that the RLS UPDATE USING/CHECK policies should only
be applied if the UPDATE path is taken (before the UPDATE is applied),
and they need to be applied to the (old/new) update tuples only. It's
a bit like there is a "WHERE record already exists" clause attached to
the UPDATE. The RLS UPDATE policies should only be applied to the rows
affected by the UPDATE.

 I see your point that this runs counter to the 'mod_count'
 example use-case and could cause problems for users using RLS with such
 a strategy.  For my part, I expect users of RLS to expect errors in such
 a case rather than allowing it, but it's certainly a judgement call.


Actually I don't think this is a problem for the mod_count example.
This isn't the same as AND'ing together the INSERT and UPDATE quals,
because each set of policies is being applied to a different tuple.
There are 3 tuples in play here:-

1). The insert tuple, which has INSERT CHECK policies applied to it
2). The existing (old) update tuple, which has UPDATE USING policies
applied to it
3). The (new) update tuple, which has UPDATE CHECK policies applied to it


NOTE: If we do change RLS CHECKs to be executed prior to modifying any
data, that's potentially a change that could be made independently of
the UPSERT patch. We should also probably then stop referring to them
as WITH CHECK OPTIONs in the docs and in error messages because
they're not the same thing, and the code might be neater if they had
their own data-structure rather than overloading WithCheckOption.

Regards,
Dean




Re: [HACKERS] POLA violation with \c service=

2015-01-10 Thread Andres Freund
On 2015-01-10 09:16:07 -0500, Andrew Dunstan wrote:
+static const char uri_designator[] = "postgresql://";
+static const char short_uri_designator[] = "postgres://";
 
 These declarations in common.h would cause a separate instance of these
 pieces of storage to occur in every object file where the .h file had been
 #included. In general, you should not expect to see any static declarations
 in .h files.

Save static inline functions, that is.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] libpq 9.4 requires /etc/passwd?

2015-01-10 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 On Fri, Jan  9, 2015 at 06:57:02PM -0500, Tom Lane wrote:
 Hmm ... actually, I'll bet it's not $HOME that's at issue, but the name
 of the user.  Commit a4c8f14364c27508233f8a31ac4b10a4c90235a9 turned
 failure of pg_fe_getauthname() into a hard connection failure, whereas
 previously it was harmless as long as the caller provided a username.
 
 I wonder if we shouldn't just revert that commit in toto.  Yeah,
 identifying an out-of-memory error might be useful, but this cure
 seems a lot worse than the disease.  What's more, this coding reports
 *any* pg_fe_getauthname failure as out of memory, which is much worse
 than useless.
 
 Alternatively, maybe don't try to resolve username this way until
 we've found that the caller isn't providing any username.

 I have developed the attached patch which documents why the user name
 lookup might fail, and sets the failure string to "".  It preserves the
 memory allocation failure behavior.

I'm unimpressed with that patch.  It does nothing to fix the fundamental
problem, which is that error handling in this area is many bricks shy of
a load.  And to the extent that it addresses Christoph's complaint, it
does so only in a roundabout and utterly undocumented way.

I think we need to suck it up and fix pg_fe_getauthname to have proper
error reporting, which in turns means fixing pqGetpwuid (since the API
for that is incapable of distinguishing "no such user" from "error during
lookup").  Accordingly, I propose the attached patch instead.  Some
notes:

* According to Microsoft's documentation, Windows' GetUserName() does not
set errno (it would be mildly astonishing if it did...).  The existing
error-handling code in src/common/username.c is therefore wrong too.

* port.h seems to think it should declare pqGetpwuid() if __CYGWIN__,
but noplace else agrees with that.

* I made pg_fe_getauthname's Windows username[] array 256 bytes for
consistency with username.c, where some actual research seems to have
been expended on how large it ought to be.

* pqGetpwuid thinks there was once a four-argument spec for getpwuid_r.
That must have been in the late bronze age, because even in 1992's Single
Unix Spec it's five arguments.  And the SUS is our standard benchmark
for Unix portability.  So I think we could rip that out, along with the
corresponding configure test, at least in HEAD.  I've not done so here
though.
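The API distinction Tom wants out of pqGetpwuid maps directly onto the five-argument POSIX getpwuid_r, whose return convention already separates the two cases: a nonzero return value is a lookup error, while a zero return with a NULL result pointer means the user simply doesn't exist. A sketch of consuming it that way (the wrapper name and error strings here are invented, not libpq's):

```c
#include <errno.h>
#include <pwd.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Returns 0 on success; -1 with an explanation in errbuf otherwise. */
int
lookup_username(uid_t uid, char *namebuf, size_t namelen,
				char *errbuf, size_t errlen)
{
	struct passwd pwbuf;
	struct passwd *pw = NULL;
	char		buf[1024];
	int			rc = getpwuid_r(uid, &pwbuf, buf, sizeof(buf), &pw);

	if (rc != 0)				/* genuine lookup error */
	{
		snprintf(errbuf, errlen, "user lookup failed: %s", strerror(rc));
		return -1;
	}
	if (pw == NULL)				/* lookup worked, but no such user */
	{
		snprintf(errbuf, errlen, "no entry for uid %ld", (long) uid);
		return -1;
	}
	snprintf(namebuf, namelen, "%s", pw->pw_name);
	return 0;
}
```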

In principle, changing the API specs for pg_fe_getauthname and pqGetpwuid
should be safe because we don't officially export those functions.  In
practice, on platforms that don't honor the export restriction list it's
theoretically possible that somebody is calling those from third-party
code.  So what I propose we do with this is patch HEAD and 9.4 only.
We need to do *something* in 9.4 to address Christoph's complaint, and
that branch is new enough that we can probably get away with changing
officially-unsupported APIs.  The lack of other field complaints makes
me okay with not trying to fix these issues further back.

Comments?

regards, tom lane

diff --git a/src/common/username.c b/src/common/username.c
index ee5ef1c..ac9a898 100644
*** a/src/common/username.c
--- b/src/common/username.c
*** get_user_name(char **errstr)
*** 58,64 
  
  	if (!GetUserName(username, &len))
  	{
! 		*errstr = psprintf(_("user name lookup failure: %s"), strerror(errno));
  		return NULL;
  	}
  
--- 58,65 
  
  	if (!GetUserName(username, &len))
  	{
! 		*errstr = psprintf(_("user name lookup failure: error code %lu"),
! 		   GetLastError());
  		return NULL;
  	}
  
diff --git a/src/include/port.h b/src/include/port.h
index 1d53e4e..26d7fcd 100644
*** a/src/include/port.h
--- b/src/include/port.h
*** extern void srandom(unsigned int seed);
*** 433,439 
  /* thread.h */
  extern char *pqStrerror(int errnum, char *strerrbuf, size_t buflen);
  
! #if !defined(WIN32) || defined(__CYGWIN__)
  extern int pqGetpwuid(uid_t uid, struct passwd * resultbuf, char *buffer,
  		   size_t buflen, struct passwd ** result);
  #endif
--- 433,439 
  /* thread.h */
  extern char *pqStrerror(int errnum, char *strerrbuf, size_t buflen);
  
! #ifndef WIN32
  extern int pqGetpwuid(uid_t uid, struct passwd * resultbuf, char *buffer,
  		   size_t buflen, struct passwd ** result);
  #endif
diff --git a/src/interfaces/libpq/fe-auth.c b/src/interfaces/libpq/fe-auth.c
index 179793e..5a6a1e4 100644
*** a/src/interfaces/libpq/fe-auth.c
--- b/src/interfaces/libpq/fe-auth.c
*** pg_fe_sendauth(AuthRequest areq, PGconn 
*** 714,735 
  /*
   * pg_fe_getauthname
   *
!  * Returns a pointer to dynamic space containing whatever name the user
!  * has authenticated to the system.  If there is an error, return NULL.
   */
  char *
! pg_fe_getauthname(void)
  {
  	const char *name = NULL;
- 	char	   *authn;
  
  #ifdef WIN32
! 	char		username[128];
  	DWORD		namesize = sizeof(username) - 1;
  #else
  	char		

Re: [HACKERS] libpq 9.4 requires /etc/passwd?

2015-01-10 Thread Noah Misch
On Fri, Jan 09, 2015 at 06:57:02PM -0500, Tom Lane wrote:
 Commit a4c8f14364c27508233f8a31ac4b10a4c90235a9 turned
 failure of pg_fe_getauthname() into a hard connection failure, whereas
 previously it was harmless as long as the caller provided a username.
 
 I wonder if we shouldn't just revert that commit in toto.  Yeah,
 identifying an out-of-memory error might be useful, but this cure
 seems a lot worse than the disease.  What's more, this coding reports
 *any* pg_fe_getauthname failure as out of memory, which is much worse
 than useless.

+1 for reverting it as the next step

 Alternatively, maybe don't try to resolve username this way until
 we've found that the caller isn't providing any username.

and for subsequently pursuing this.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] libpq 9.4 requires /etc/passwd?

2015-01-10 Thread Christoph Berg
Re: Tom Lane 2015-01-10 22432.1420915...@sss.pgh.pa.us
 So what I propose we do with this is patch HEAD and 9.4 only.
 We need to do *something* in 9.4 to address Christoph's complaint, and
 that branch is new enough that we can probably get away with changing
 officially-unsupported APIs.  The lack of other field complaints makes
 me okay with not trying to fix these issues further back.

The problem isn't present in 9.3 and earlier (at least with
postfix-pgsql), so there's no need to go back further.

As for the number of complaints, I've received two independent reports
on IRC, and upon googling found the problem had been seen as early as
July [1] and August [2]. All four reports are for postfix-pgsql on
Debian, but that's probably just because we pushed 9.4 into the
next-release branch very early. (And I wish someone had told me about
the problem, instead of only reporting it for postfix...)

[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=756627
[2] https://workaround.org/comment/3415#comment-3415

Christoph
-- 
c...@df7cb.de | http://www.df7cb.de/




[HACKERS] s_lock.h default definitions are rather confused

2015-01-10 Thread Tom Lane
I've not tried to build HEAD on my HPPA dinosaur for awhile, but I did
just now, and I am presented with boatloads of this:

../../../src/include/storage/s_lock.h:759: warning: `S_UNLOCK' redefined
../../../src/include/storage/s_lock.h:679: warning: this is the location of the 
previous definition

which is not too surprising because the default definition at line 679
precedes the HPPA-specific one at line 759.

I'm not particularly interested in untangling the effects of the recent
hackery in s_lock.h enough to figure out how the overall structure got
broken, but I trust one of you will clean up the mess.

regards, tom lane




Re: [HACKERS] s_lock.h default definitions are rather confused

2015-01-10 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 Given you got the error above, you used gcc.

Right.

 Have you used non-gcc
 compiler on hppa recently? I seem to recall you mentioning that that
 doesn't work sanely anymore? If so, perhaps we can just remove the !gcc
 variant?

I'll try that shortly ...

regards, tom lane




[HACKERS] Sloppy SSPI error reporting code

2015-01-10 Thread Tom Lane
While looking at fe-auth.c I noticed quite a few places that weren't
bothering to make error messages localizable (ie, missing libpq_gettext
calls), and/or were failing to add a trailing newline as expected in
libpq error messages.  Perhaps these are intentional but I doubt it.
Most of the instances seemed to be SSPI-related.

I have no intention of fixing these myself, but whoever committed that
code should take a second look.

regards, tom lane




Re: [HACKERS] libpq 9.4 requires /etc/passwd?

2015-01-10 Thread Tom Lane
Christoph Berg c...@df7cb.de writes:
 Re: Tom Lane 2015-01-10 22432.1420915...@sss.pgh.pa.us
 So what I propose we do with this is patch HEAD and 9.4 only.
 We need to do *something* in 9.4 to address Christoph's complaint, and
 that branch is new enough that we can probably get away with changing
 officially-unsupported APIs.  The lack of other field complaints makes
 me okay with not trying to fix these issues further back.

 The problem isn't present in 9.3 and earlier (at least with
 postfix-pgsql), so there's no need to go back further.

Right, the specific issue you're complaining of is new in 9.4.  The
general issue that failures in /etc/passwd lookups aren't well reported
has been there more or less forever, but your immediate point is that no
such failure should be reported to the user at all if he's supplied a
login name to use.

It looks like the bad handling of Windows' GetUserName() failures has
been there a long time too, but given the lack of field reports I don't
feel a need to back-patch that very far either.

regards, tom lane




Re: [HACKERS] Parallel Seq Scan

2015-01-10 Thread Stephen Frost
* Amit Kapila (amit.kapil...@gmail.com) wrote:
 At this moment if we can ensure that parallel plan should not be selected
 for cases where it will perform poorly is more than enough considering
 we have lots of other work left to even make any parallel operation work.

The problem with this approach is that it doesn't consider any options
between 'serial' and 'parallelize by factor X'.  If the startup cost is
1000 and the factor is 32, then a seqscan which costs 31000 won't ever
be parallelized, even though a factor of 8 would have parallelized it.

You could forget about the per-process startup cost entirely, in fact,
and simply say "only parallelize if it's more than X".
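The arithmetic above can be made concrete with a toy degree-selection function: with a per-worker startup cost of 1000, a fixed factor of 32 rejects a 31000-cost scan outright (32 x 1000 = 32000 > 31000), while considering intermediate degrees still finds a profitable one. The function name and the simple threshold heuristic are invented for illustration, not taken from the patch:

```c
/*
 * Pick the largest worker count whose total startup overhead still
 * stays under the scan's cost; 1 means "run serially".
 */
int
choose_parallel_degree(double scan_cost, double per_worker_startup,
					   int max_degree)
{
	for (int d = max_degree; d > 1; d--)
	{
		if (per_worker_startup * d < scan_cost)
			return d;			/* startup overhead amortizes at this degree */
	}
	return 1;					/* no degree pays for itself */
}
```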

Again, I don't like the idea of designing this with the assumption that
the user dictates the right level of parallelization for each and every
query.  I'd love to go out and tell users "set the factor to the number
of CPUs you have" and we'll just use what makes sense.

The same goes for max number of backends.  If we set the parallel level
to the number of CPUs and set the max backends to the same, then we end
up with only one parallel query running at a time, ever.  That's
terrible.  Now, we could set the parallel level lower or set the max
backends higher, but either way we're going to end up either using less
than we could or over-subscribing, neither of which is good.

I agree that this makes it a bit different from work_mem, but in this
case there's an overall max in the form of the maximum number of
background workers.  If we had something similar for work_mem, then we
could set that higher and still trust the system to only use the amount
of memory necessary (eg: a hashjoin doesn't use all available work_mem
and neither does a sort, unless the set is larger than available
memory).

> > > Execution:
> > > Most other databases does partition level scan for partition on
> > > different disks by each individual parallel worker.  However,
> > > it seems amazon dynamodb [2] also works on something
> > > similar to what I have used in patch which means on fixed
> > > blocks.  I think this kind of strategy seems better than dividing
> > > the blocks at runtime because dividing randomly the blocks
> > > among workers could lead to random scan for a parallel
> > > sequential scan.
> >
> > Yeah, we also need to consider the i/o side of this, which will
> > definitely be tricky.  There are i/o systems out there which are faster
> > than a single CPU and ones where a single CPU can manage multiple i/o
> > channels.  There are also cases where the i/o system handles sequential
> > access nearly as fast as random and cases where sequential is much
> > faster than random.  Where we can get an idea of that distinction is
> > with seq_page_cost vs. random_page_cost as folks running on SSDs tend to
> > lower random_page_cost from the default to indicate that.
>
> I am not clear, do you expect anything different in execution strategy
> than what I have mentioned or does that sound reasonable to you?

What I'd like is a way to figure out the right amount of CPU for each
tablespace (0.25, 1, 2, 4, etc) and then use that many.  Using a single
CPU for each tablespace is likely to starve the CPU or starve the I/O
system and I'm not sure if there's a way to address that.

Note that I intentionally said tablespace there because that's how users
can tell us what the different i/o channels are.  I realize this ends up
going beyond the current scope, but the parallel seqscan at the per
relation level will only ever be using one i/o channel.  It'd be neat if
we could work out how fast that i/o channel is vs. the CPUs and
determine how many CPUs are necessary to keep up with the i/o channel
and then use more-or-less exactly that many for the scan.

I agree that some of this can come later but I worry that starting out
with a design that expects to always be told exactly how many CPUs to
use when running a parallel query will be difficult to move away from
later.

Thanks,

Stephen




Re: [HACKERS] libpq 9.4 requires /etc/passwd?

2015-01-10 Thread Tom Lane
One other point here: I realized while testing my patch that it's actually
impossible to provoke the failure mode Christoph is unhappy about in psql.
You can only see it in an application that uses PQsetdb/PQsetdbLogin,
which of course psql doesn't anymore.  The reason is that in the PQconnect
family of functions, conninfo_add_defaults() is only applied after
filling in all available caller-supplied parameters, so if the user has
in one way or another specified the role name to use, we never invoke
pg_fe_getauthname() at all.  It's only called if we have to use the
default role name, and in that context of course a hard failure is
appropriate.  But in the PQsetdbLogin code path, we do pg_fe_getauthname
first and then overwrite (some) values from the parameters, so a failure
during conninfo_add_defaults() is premature.

This is a tad disturbing, because we are not using and therefore not
testing PQsetdb/PQsetdbLogin anywhere, which means that any failure modes
that path acquires that don't exist in the PQconnect code path are
guaranteed to go undetected in our testing.  Now, I rather doubt that
we'd have found the problem with doesn't-handle-lack-of-/etc/passwd-well
even if we had been testing that code path, but we certainly won't find
problems in paths we don't test.

Not entirely sure what to do about this, but I predict this won't be
the last complaint unless we find some way to improve test coverage
in this area.  Or perhaps we could turn PQsetdbLogin into a ***very***
thin wrapper around PQconnectdbParams?

regards, tom lane




Re: [HACKERS] INSERT ... ON CONFLICT UPDATE and RLS

2015-01-10 Thread Dean Rasheed
On 10 January 2015 at 15:12, Stephen Frost sfr...@snowman.net wrote:
> > NOTE: If we do change RLS CHECKs to be executed prior to modifying any
> > data, that's potentially a change that could be made independently of
> > the UPSERT patch. We should also probably then stop referring to them
> > as WITH CHECK OPTIONs in the docs and in error messages because
> > they're not the same thing, and the code might be neater if they had
> > their own data-structure rather than overloading WithCheckOption.
>
> I agree that's an independent change.  I'm not sure if it would end up
> being neater or not offhand but I can see some advantages to breaking it
> up from WithCheck as it might be simpler for users to understand
> (particularly those users who are not already familiar with WithCheck).

A new RlsCheck struct wouldn't need the cascade option that
WithCheckOption has, and it's not nice the way viewname is being
abused. For UPSERT it will need a field indicating the command type,
as Peter pointed out, so I think it's different enough to warrant its
own struct, even if we weren't changing the firing time.

> Were you thinking about working up a patch for such a change?

OK, I'll take a look at it. I started reading the existing code that
expands RLS policies and spotted a couple of bugs that should be fixed
first:-

1). In prepend_row_security_policies(), defaultDeny should be
initialised to false at the top.

2). In fireRIRrules(), activeRIRs isn't being reset properly after
recursing, which will cause a self-join to fail.

Regards,
Dean




Re: [HACKERS] s_lock.h default definitions are rather confused

2015-01-10 Thread Andres Freund
On 2015-01-10 16:09:42 -0500, Tom Lane wrote:
> I've not tried to build HEAD on my HPPA dinosaur for awhile, but I did
> just now, and I am presented with boatloads of this:
>
> ../../../src/include/storage/s_lock.h:759: warning: `S_UNLOCK' redefined
> ../../../src/include/storage/s_lock.h:679: warning: this is the location of
> the previous definition
>
> which is not too surprising because the default definition at line 679
> precedes the HPPA-specific one at line 759.

That's 0709b7ee72e4bc71ad07b7120acd117265ab51d0.

Not too surprising that it broke and wasn't noticed without access to
hppa - the hppa code uses gcc inline assembly outside of the big
defined(__GNUC__) block and inside the section headed "Platforms that use
non-gcc inline assembly".

> I'm not particularly interested in untangling the effects of the recent
> hackery in s_lock.h enough to figure out how the overall structure got
> broken, but I trust one of you will clean up the mess.

I think it's easiest solved by moving the gcc inline assembly up to the
rest of the gcc inline assembly. That'll require duplicating a couple
lines, but seems easier to understand nonetheless. Not pretty.

Given you got the error above, you used gcc. Have you used non-gcc
compiler on hppa recently? I seem to recall you mentioning that that
doesn't work sanely anymore? If so, perhaps we can just remove the !gcc
variant?

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] s_lock.h default definitions are rather confused

2015-01-10 Thread Andres Freund
On 2015-01-10 17:58:10 -0500, Tom Lane wrote:
> Andres Freund and...@2ndquadrant.com writes:
> > Given you got the error above, you used gcc. Have you used non-gcc
> > compiler on hppa recently? I seem to recall you mentioning that that
> > doesn't work sanely anymore? If so, perhaps we can just remove the !gcc
> > variant?
>
> It still compiles, modulo some old and uninteresting warnings, but linking
> the postgres executable fails with
>
> /usr/ccs/bin/ld: Unsatisfied symbols:
>    pg_compiler_barrier_impl (code)
> make[2]: *** [postgres] Error 1

Ick, that one is my failure.

> Curiously, there are no compiler warnings about reference to undeclared
> function, which is odd because I see nothing that would declare that name
> as a function or macro for non-gcc HPPA.

Yea, that's odd.

> But in any case, the fallback logic for compiler barriers evidently
> still needs work.

Will fix (or well, try). It broke due to the atomics patch.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: Fwd: Re: [HACKERS] make check-world regress failed

2015-01-10 Thread Andres Freund
Hi,

On 2015-01-09 04:59:56 +0100, Vladimir Koković wrote:
> Thanks Andres, i686 check-world passed with your
> atomic-uint64-alignment.patch.

Thanks for the report and testing! I've pushed the fix.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] s_lock.h default definitions are rather confused

2015-01-10 Thread Andres Freund
On 2015-01-10 18:40:58 -0500, Tom Lane wrote:
> Andres Freund and...@2ndquadrant.com writes:
> > On 2015-01-11 00:06:41 +0100, Andres Freund wrote:
> > > Ick, that one is my failure.
> >
> > Actually. It looks like I only translated the logic from barrier.h 1:1
> > and it already was broken there. Hm, it looks like the current code
> > essentially is from 89779bf2c8f9aa480e0ceb8883f93e9d65c43a6e.
>
> There's a small difference, which is that the code actually worked as
> of that commit.

Are you sure it actually worked on hppa && !gcc? Sure, the s_lock.h gcc
breakage is caused by Robert's s_lock.h commit (making spinlocks proper
barriers), but I don't see how the tree as of 89779bf2c8f9aa48 could
work on !gcc hppa?

At least bgworker.c already used read and write barriers back
then. Those were redefined to a compiler barrier by
/* HPPA doesn't do either read or write reordering */
#define pg_memory_barrier() pg_compiler_barrier()

but barrier.h only provided (back then) compiler barriers for icc, gcc,
ia64 (without a compiler restriction?) and win32. So I don't see how
that could have worked.

On a reread, you even noted "But note this patch only fixes things for
gcc, not for builds with HP's compiler." in the commit message.

Anyway, I've pushed a fix for that to master. For one I'm not yet sure
whether it's actually broken in the backbranches (and whether we care); for
another, the <= 9.4 fix will not have a single file in common anyway.
Any opinion on that?

Could you check whether that heals that problem? I've verified that it
allows me to build with gcc, even if I remove its compiler barrier
definition.

> > Unless somebody protests I'm going to introduce a generic fallback
> > compiler barrier that's just an extern function.
>
> Seems reasonable to me.  An empty external function should do the job
> for any compiler that isn't doing cross-procedural analysis.

And I really doubt any of those that do fail to provide a compiler
barrier...

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] POLA violation with \c service=

2015-01-10 Thread David Fetter
On Sat, Jan 10, 2015 at 09:30:57AM +0100, Erik Rijkers wrote:
> On Fri, January 9, 2015 20:15, David Fetter wrote:
> > [psql_fix_uri_service_003.patch]
>
> Applies on master; the feature (switching services) works well but a \c
> without any parameters produces a segfault:
>
> (centos 6.6, 4.9.2, 64-bit)
>
>
> $ echo -en "$PGSERVICEFILE\n$PGSERVICE\n$PGPORT\n"
> /home/aardvark/.pg_service
> service_pola
> 6968
>
> $ psql
> Timing is on.
> psql (9.5devel_service_pola_20150109_2340_ac7009abd228)
> Type "help" for help.
>
> testdb=# \c service=HEAD
> You are now connected to database "testdb" as user "aardvark" via socket in
> "/tmp" at port "6545".
> testdb=# \c service=service_pola
> You are now connected to database "testdb" as user "aardvark" via socket in
> "/tmp" at port "6968".
> testdb=# \c
> Segmentation fault (core dumped)

Fixed by running that function only if the argument exists.

More C cleanups, too.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
diff --git a/doc/src/sgml/ref/psql-ref.sgml b/doc/src/sgml/ref/psql-ref.sgml
index bdfb67c..eb6a57b 100644
--- a/doc/src/sgml/ref/psql-ref.sgml
+++ b/doc/src/sgml/ref/psql-ref.sgml
@@ -796,18 +796,20 @@ testdb=&gt;
   </varlistentry>
 
   <varlistentry>
-<term><literal>\c</literal> or <literal>\connect</literal> <literal>[ <replaceable class="parameter">dbname</replaceable> [ <replaceable class="parameter">username</replaceable> ] [ <replaceable class="parameter">host</replaceable> ] [ <replaceable class="parameter">port</replaceable> ] ]</literal></term>
+<term><literal>\c</literal> or <literal>\connect</literal> <literal> [ { [ <replaceable class="parameter">dbname</replaceable> [ <replaceable class="parameter">username</replaceable> ] [ <replaceable class="parameter">host</replaceable> ] [ <replaceable class="parameter">port</replaceable> ] ] | <replaceable class="parameter">conninfo</replaceable> string | <replaceable class="parameter">URI</replaceable> } ] </literal></term>
 <listitem>
 <para>
 Establishes a new connection to a <productname>PostgreSQL</>
-server. If the new connection is successfully made, the
-previous connection is closed. If any of <replaceable
+server using positional parameters as described below, a
+<parameter>conninfo</parameter> string, or a <acronym>URI</acronym>. If the new connection is
+successfully made, the
+previous connection is closed.  When using positional parameters, if any of <replaceable
 class="parameter">dbname</replaceable>, <replaceable
 class="parameter">username</replaceable>, <replaceable
 class="parameter">host</replaceable> or <replaceable
 class="parameter">port</replaceable> are omitted or specified
 as <literal>-</literal>, the value of that parameter from the
-previous connection is used. If there is no previous
+previous connection is used.  If using positional parameters and there is no previous
 connection, the <application>libpq</application> default for
 the parameter's value is used.
 </para>
diff --git a/src/bin/psql/command.c b/src/bin/psql/command.c
index 4ac21f2..bcdf278 100644
--- a/src/bin/psql/command.c
+++ b/src/bin/psql/command.c
@@ -1610,6 +1610,8 @@ do_connect(char *dbname, char *user, char *host, char *port)
PGconn *o_conn = pset.db,
   *n_conn;
char   *password = NULL;
+   boolkeep_password = true;
+   boolhas_connection_string = false;
 
 if (!o_conn && (!dbname || !user || !host || !port))
{
@@ -1623,14 +1625,33 @@ do_connect(char *dbname, char *user, char *host, char *port)
return false;
}
 
-   if (!dbname)
-   dbname = PQdb(o_conn);
if (!user)
user = PQuser(o_conn);
+   else if (strcmp(user, PQuser(o_conn)) != 0)
+   keep_password = false;
+
if (!host)
host = PQhost(o_conn);
+   else if (strcmp(host, PQhost(o_conn)) != 0)
+   keep_password = false;
+
if (!port)
port = PQport(o_conn);
+   else if (strcmp(port, PQport(o_conn)) != 0)
+   keep_password = false;
+
+   if (dbname)
+   has_connection_string = recognized_connection_string(dbname);
+
+   if (has_connection_string)
+   keep_password = false;
+
+   /*
+* Unlike the previous stanzas, changing only the dbname shouldn't
+* trigger throwing away the password.
+*/
+   if (!dbname)
+   dbname = PQdb(o_conn);
 
/*
 * If the user asked to be prompted for a password, ask for one now. If
@@ -1646,9 +1667,13 @@ do_connect(char *dbname, char *user, char *host, char *port)
{
password = prompt_for_password(user);
}
-   else if (o_conn 

Re: [HACKERS] Parallel Seq Scan

2015-01-10 Thread Robert Haas
On Fri, Jan 9, 2015 at 12:24 PM, Stephen Frost sfr...@snowman.net wrote:
> The parameters sound reasonable but I'm a bit worried about the way
> you're describing the implementation.  Specifically this comment:
>
> Cost of starting up parallel workers with default value as 1000.0
> multiplied by number of workers decided for scan.
>
> That appears to imply that we'll decide on the number of workers, figure
> out the cost, and then consider "parallel" as one path and
> "not-parallel" as another.  [...]
> I'd really like to be able to set the 'max parallel' high and then have
> the optimizer figure out how many workers should actually be spawned for
> a given query.

+1.

> Yeah, we also need to consider the i/o side of this, which will
> definitely be tricky.  There are i/o systems out there which are faster
> than a single CPU and ones where a single CPU can manage multiple i/o
> channels.  There are also cases where the i/o system handles sequential
> access nearly as fast as random and cases where sequential is much
> faster than random.  Where we can get an idea of that distinction is
> with seq_page_cost vs. random_page_cost as folks running on SSDs tend to
> lower random_page_cost from the default to indicate that.

On my MacOS X system, I've already seen cases where my parallel_count
module runs incredibly slowly some of the time.  I believe that this
is because having multiple workers reading the relation block-by-block
at the same time causes the OS to fail to realize that it needs to do
aggressive readahead.  I suspect we're going to need to account for
this somehow.

> Yeah, I agree that's more typical.  Robert's point that the master
> backend should participate is interesting but, as I recall, it was based
> on the idea that the master could finish faster than the worker- but if
> that's the case then we've planned it out wrong from the beginning.

So, if the workers have been started but aren't keeping up, the master
should do nothing until they produce tuples rather than participating?
That doesn't seem right.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] libpq 9.4 requires /etc/passwd?

2015-01-10 Thread David Fetter
On Sat, Jan 10, 2015 at 02:02:54PM -0500, Tom Lane wrote:
 Not entirely sure what to do about this, but I predict this won't be
 the last complaint unless we find some way to improve test coverage
 in this area.  Or perhaps we could turn PQsetdbLogin into a
 ***very*** thin wrapper around PQconnectdbParams?

+1 for this.  Having a single point of truth here would be a big win.

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] s_lock.h default definitions are rather confused

2015-01-10 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
> Given you got the error above, you used gcc. Have you used non-gcc
> compiler on hppa recently? I seem to recall you mentioning that that
> doesn't work sanely anymore? If so, perhaps we can just remove the !gcc
> variant?

It still compiles, modulo some old and uninteresting warnings, but linking
the postgres executable fails with

/usr/ccs/bin/ld: Unsatisfied symbols:
   pg_compiler_barrier_impl (code)
make[2]: *** [postgres] Error 1

Curiously, there are no compiler warnings about reference to undeclared
function, which is odd because I see nothing that would declare that name
as a function or macro for non-gcc HPPA.  But in any case, the fallback
logic for compiler barriers evidently still needs work.

... further on ... Looks like somebody has recently broken pg_dump, too:

cc: pg_dump.c, line 319: error 1521: Incorrect initialization.
cc: pg_dump.c, line 320: error 1521: Incorrect initialization.
cc: pg_dump.c, line 321: error 1521: Incorrect initialization.
cc: pg_dump.c, line 322: error 1521: Incorrect initialization.
cc: pg_dump.c, line 323: error 1521: Incorrect initialization.
cc: pg_dump.c, line 324: error 1521: Incorrect initialization.
cc: pg_dump.c, line 326: error 1521: Incorrect initialization.
cc: pg_dump.c, line 327: error 1521: Incorrect initialization.
cc: pg_dump.c, line 329: error 1521: Incorrect initialization.
cc: pg_dump.c, line 333: error 1521: Incorrect initialization.
cc: pg_dump.c, line 335: error 1521: Incorrect initialization.
cc: pg_dump.c, line 336: error 1521: Incorrect initialization.
cc: pg_dump.c, line 337: error 1521: Incorrect initialization.
cc: pg_dump.c, line 338: error 1521: Incorrect initialization.
make[3]: *** [pg_dump.o] Error 1

This one looks like overoptimistic assumptions about what's legal
in an initializer.  We could probably fix that by reverting the
option variables to statics; I see no benefit to be had from
having them as dynamic variables in main().

regards, tom lane




Re: [HACKERS] s_lock.h default definitions are rather confused

2015-01-10 Thread Andres Freund
On 2015-01-11 00:06:41 +0100, Andres Freund wrote:
> On 2015-01-10 17:58:10 -0500, Tom Lane wrote:
> > Andres Freund and...@2ndquadrant.com writes:
> > > Given you got the error above, you used gcc. Have you used non-gcc
> > > compiler on hppa recently? I seem to recall you mentioning that that
> > > doesn't work sanely anymore? If so, perhaps we can just remove the !gcc
> > > variant?
> >
> > It still compiles, modulo some old and uninteresting warnings, but linking
> > the postgres executable fails with
> >
> > /usr/ccs/bin/ld: Unsatisfied symbols:
> >    pg_compiler_barrier_impl (code)
> > make[2]: *** [postgres] Error 1
>
> Ick, that one is my failure.

Actually. It looks like I only translated the logic from barrier.h 1:1
and it already was broken there. Hm, it looks like the current code
essentially is from 89779bf2c8f9aa480e0ceb8883f93e9d65c43a6e. That
introduced:
+#elif defined(__hppa) || defined(__hppa__) /* HP PA-RISC */
+
+/* HPPA doesn't do either read or write reordering */
+#define pg_memory_barrier() pg_compiler_barrier()

That works well enough for hppa on gcc because there's generic compiler
barrier for the latter. But for HPUX on own compiler no compiler barrier
is defined, because:

/*
 * If read or write barriers are undefined, we upgrade them to full memory
 * barriers.
 *
 * If a compiler barrier is unavailable, you probably don't want a full
 * memory barrier instead, so if you have a use case for a compiler barrier,
 * you'd better use #ifdef.
 */
#if !defined(pg_read_barrier)
#define pg_read_barrier()   pg_memory_barrier()
#endif
#if !defined(pg_write_barrier)
#define pg_write_barrier()  pg_memory_barrier()
#endif

Unless somebody protests I'm going to introduce a generic fallback
compiler barrier that's just an extern function.

Greetings,

Andres Freund

-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] BRIN range operator class

2015-01-10 Thread Andreas Karlsson

Hi,

I made a quick review for your patch, but I would like to see someone 
who was involved in the BRIN work comment on Emre's design issues. I 
will try to answer them as best as I can below.


I think minmax indexes on range types seem very useful, and inet/cidr 
too. I have no idea about geometric types. But we need to fix the issues 
with empty ranges and IPv4/IPv6 for these indexes to be useful.


= Review

The current code compiles but the brin test suite fails.

I tested the indexes a bit and they seem to work fine, except for cases 
where we know it to be broken like IPv4/IPv6.


The new code is generally clean and readable.

I think some things should be broken out in separate patches since they 
are unrelated to this patch.


- The addition of  and  on inet types.

- The fix in brin_minmax.c.

Your brin tests seems to forget  and  for inet types.

The tests should preferably be extended to support ipv6 and empty ranges 
once we have fixed support for those cases.


The "/* If the it is all nulls, it cannot possibly be consistent. */" 
comment is different from the equivalent comment in brin_minmax.c. I do 
not see why they should be different.


In brin_inclusion_union() the if (col_b->bv_allnulls) check is done after 
handling has_nulls, which is unlike what is done in brin_minmax_union(); 
which code is right? I am leaning towards the code in 
brin_inclusion_union() since you can have all_nulls without has_nulls.


On 12/14/2014 09:04 PM, Emre Hasegeli wrote:

> To support more operators I needed to change amstrategies and
> amsupport on the catalog.  It would be nice if amsupport can be set
> to 0 like amstrategies.
>
>
> I think it would be nicer to get the functions from the operators
> with using the strategy numbers instead of adding them directly as
> support functions.  I looked around a bit but couldn't find
> a sensible way to support it.  Is it possible without adding them
> to the RelationData struct?


Yes that would be nice, but I do not think the current solution is terrible.


> This problem remains.  There is also a similar problem with the
> range types, namely empty ranges.  There should be special cases
> for them on some of the strategies.  I tried to solve the problems
> in several different ways, but got a segfault one line or another.
> This makes me think that BRIN framework doesn't support to store
> different types than the indexed column in the values array.
> For example, brin_deform_tuple() iterates over the values array and
> copies them using the length of the attr on the index, not the length
> of the type defined by OpcInfo function.  If storing another types
> aren't supported, why is it required to return oid's on the OpcInfo
> function.  I am confused.


I leave this to someone more knowledgable about BRIN to answer.


> I didn't try to support other geometric types than box as I couldn't
> managed to store a different type on the values array, but it would
> be nice to get some feedback about the overall design.  I was
> thinking to add a STORAGE parameter to the index to support other
> geometric types.  I am not sure that adding the STORAGE parameter
> to be used by the opclass implementation is the right way.  It
> wouldn't be the actual thing that is stored by the index, it will be
> an element in the values array.  Maybe, data type specific opclasses
> is the way to go, not a generic one as I am trying.


I think a STORAGE parameter sounds like a good idea. Could it also be 
used to solve the issue with IPv4/IPv6 by setting the storage type to 
custom? Or is that the wrong way to fix things?


--
Andreas Karlsson




Re: Custom/Foreign-Join-APIs (Re: [HACKERS] [v9.5] Custom Plan API)

2015-01-10 Thread Jim Nasby

On 1/9/15, 8:51 PM, Kohei KaiGai wrote:
> 2015-01-10 9:56 GMT+09:00 Jim Nasby jim.na...@bluetreble.com:
> > On 1/9/15, 6:54 PM, Jim Nasby wrote:
> > > On 1/9/15, 6:44 PM, Petr Jelinek wrote:
> > > > > Yep, I had a same impression when I looked at the code first time,
> > > > > however, it is defined as below. Not a manner of custom-scan itself.
> > > > >
> > > > > /*
> > > > >  * ==
> > > > >  * Scan nodes
> > > > >  * ==
> > > > >  */
> > > > > typedef struct Scan
> > > > > {
> > > > >     Plan    plan;
> > > > >     Index   scanrelid;  /* relid is index into the range table */
> > > > > } Scan;
> > > >
> > > > Yeah there are actually several places in the code where relid means
> > > > index in range table and not oid of relation, it still manages to
> > > > confuse me. Nothing this patch can do about that.
> > >
> > > Well, since it's confused 3 of us now... should we change it (as a
> > > separate patch)? I'm willing to do that work but don't want to waste
> > > time if it'll just be rejected.
> > >
> > > Any other examples of this I should fix too?
> >
> > Sorry, to clarify... any other items besides Scan.scanrelid that I should
> > fix?
>
> This naming is a little bit confusing, however, I don't think it should be
> changed because this structure has been used for a long time, so reworking
> will prevent back-patching when we find bugs around scanrelid.


We can still backpatch; it just requires more work. And how many bugs do we 
actually expect to find around this anyway?

If folks think this just isn't worth fixing, fine, but I find the 
backpatching argument rather dubious.
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Parallel Seq Scan

2015-01-10 Thread Robert Haas
On Thu, Jan 8, 2015 at 6:42 AM, Amit Kapila amit.kapil...@gmail.com wrote:
> Are we sure that in such cases we will consume work_mem during
> execution?  In cases of parallel_workers we are sure to an extent
> that if we reserve the workers then we will use it during execution.
> Nonetheless, I have proceded and integrated the parallel_seq scan
> patch with v0.3 of parallel_mode patch posted by you at below link:
> http://www.postgresql.org/message-id/CA+TgmoYmp_=xcjehvjzt9p8drbgw-pdpjhxbhza79+m4o-c...@mail.gmail.com

That depends on the costing model.  It makes no sense to do a parallel
sequential scan on a small relation, because the user backend can scan
the whole thing itself faster than the workers can start up.  I
suspect it may also be true that the useful amount of parallelism
increases the larger the relation gets (but maybe not).

> 2. To enable two types of shared memory queue's (error queue and
> tuple queue), we need to ensure that we switch to appropriate queue
> during communication of various messages from parallel worker
> to master backend.  There are two ways to do it
>    a.  Save the information about error queue during startup of parallel
>    worker (ParallelMain()) and then during error, set the same (switch
>    to error queue in errstart() and switch back to tuple queue in
>    errfinish() and errstart() in case errstart() doesn't need to
>    propagate
>    error).
>    b.  Do something similar as (a) for tuple queue in printtup or other
>    place
>    if any for non-error messages.
> I think approach (a) is slightly better as compare to approach (b) as
> we need to switch many times for tuple queue (for each tuple) and
> there could be multiple places where we need to do the same.  For now,
> I have used approach (a) in Patch which needs some more work if we
> agree on the same.

I don't think you should be switching queues.  The tuples should be
sent to the tuple queue, and errors and notices to the error queue.

 3. As per current implementation of Parallel_seqscan, it needs to use
 some information from parallel.c which was not exposed, so I have
 exposed the same by moving it to parallel.h.  Information that is required
 is as follows:
 ParallelWorkerNumber, FixedParallelState and shm keys -
 This is used to decide the blocks that need to be scanned.
 We might change the way parallel scan/work distribution is done in the
 future, but I don't see any harm in exposing this information.

Hmm.  I can see why ParallelWorkerNumber might need to be exposed, but
the other stuff seems like it shouldn't be.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Parallel Seq Scan

2015-01-10 Thread Robert Haas
On Thu, Jan 8, 2015 at 2:46 PM, Stephen Frost sfr...@snowman.net wrote:
 Yeah, if we come up with a plan for X workers and end up not being able
 to spawn that many then I could see that being worth a warning or notice
 or something.  Not sure what EXPLAIN has to do anything with it..

That seems mighty odd to me.  If there are 8 background worker
processes available, and you allow each session to use at most 4, then
when there are 2 sessions trying to do parallelism at the same time,
they might not all get their workers.  Emitting a notice for that
seems like it would be awfully chatty.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Using 128-bit integers for sum, avg and statistics aggregates

2015-01-10 Thread Andreas Karlsson

On 01/11/2015 02:36 AM, Andres Freund wrote:

a) Afaics only __int128/unsigned __int128 is defined. See
https://gcc.gnu.org/onlinedocs/gcc/_005f_005fint128.html


Both GCC and Clang define both of them. Which one you use seems to just be 
a matter of preference.



b) I'm doubtful that AC_CHECK_TYPES is a sufficiently good test on all
platforms. IIRC gcc will generate calls to functions to do the actual
arithmetic, and I don't think they're guaranteed to be available on all
platforms. That's how it e.g. behaves for atomic operations. So my
guess is that you'll need to check that a program that does actual
arithmetic still links.


I too thought about this and decided to look at how other projects 
handled this. The projects I have looked at seem to trust that if 
__int128_t is defined it will also work.


https://github.com/python/cpython/blob/master/configure#L7778
http://cgit.freedesktop.org/cairo/tree/build/configure.ac.system#n88

But after some more searching I now notice that at least gstreamer does 
not trust this to be true.


https://github.com/ensonic/gstreamer/blob/master/configure.ac#L382

Should I fix it to actually compile some code which uses the 128-bit types?


c) Personally I don't see the point of testing __uint128_t. That's just
a pointless test that makes configure run for longer.


Ok, will remove it in the next version of the patch.


@@ -3030,6 +3139,18 @@ int8_avg_accum(PG_FUNCTION_ARGS)
  Datum
  int2_accum_inv(PG_FUNCTION_ARGS)
  {
+#ifdef HAVE_INT128
+   Int16AggState *state;
+
+   state = PG_ARGISNULL(0) ? NULL : (Int16AggState *) PG_GETARG_POINTER(0);
+
+   /* Should not get here with no state */
+   if (state == NULL)
+   elog(ERROR, "int2_accum_inv called with NULL state");
+
+   if (!PG_ARGISNULL(1))
+   do_int16_discard(state, (int128) PG_GETARG_INT16(1));
+#else
NumericAggState *state;

state = PG_ARGISNULL(0) ? NULL : (NumericAggState *) 
PG_GETARG_POINTER(0);
@@ -3049,6 +3170,7 @@ int2_accum_inv(PG_FUNCTION_ARGS)
if (!do_numeric_discard(state, newval))
elog(ERROR, "do_numeric_discard failed unexpectedly");
}


Hm. It might be nicer to move the if (!state) elog() outside the ifdef,
and add curly parens inside the ifdef.


The reason I did so was because the type of the state differs and I did 
not feel like having two ifdef blocks. I have no strong opinion about 
this though.



pg_config.h.win32 should be updated as well.


Is it possible to update it without running Windows?

--
Andreas Karlsson




Re: [HACKERS] Transactions involving multiple postgres foreign servers

2015-01-10 Thread Jim Nasby

On 1/10/15, 7:11 AM, Michael Paquier wrote:

If we had an independent transaction coordinator then I agree with you
Kevin. I think Robert is proposing that if we control one of the
nodes that's participating, as well as coordinating the overall transaction,
we can take some shortcuts. AIUI a PREPARE means you are completely
ready to commit. In essence you're just waiting to write and fsync the
commit message. That is in fact the state that a coordinating PG node would
be in by the time everyone else has done their prepare. So from that
standpoint we're OK.

Now, as soon as ANY of the nodes commit, our coordinating node MUST be able
to commit as well! That would require it to have created a real prepared
transaction of its own. However, as long as there is zero chance of any other
prepared transactions committing before our local transaction, that step
isn't actually needed. Our local transaction will either commit or abort,
and that will determine what needs to happen on all other nodes.

It is a property of 2PC to ensure that a prepared transaction will
commit. Now, once it is confirmed on the coordinator that all the
remote nodes have successfully PREPAREd, the coordinator issues COMMIT
PREPARED to each node. What do you do if some nodes report ABORT
PREPARED while other nodes report COMMIT PREPARED? Do you abort the
transaction on the coordinator, commit it, or FATAL? This leaves the cluster
in an inconsistent state, meaning that some consistent cluster-wide
recovery point is needed as well (Postgres-XC and XL have introduced
the concept of barriers for such problems, stuff created first by
Pavan Deolassee).


My understanding is that once you get a successful PREPARE that should mean 
that it's basically impossible for the transaction to fail to commit. If that's 
not the case, I fail to see how you can get any decent level of sanity out of 
this...
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Using 128-bit integers for sum, avg and statistics aggregates

2015-01-10 Thread Andres Freund
Hi,

 +# Check if platform support gcc style 128-bit integers.
 +AC_CHECK_TYPES([__int128_t, __uint128_t], [], [], [])

Hm, I'm not sure that's sufficient. Three things:

a) Afaics only __int128/unsigned __int128 is defined. See
   https://gcc.gnu.org/onlinedocs/gcc/_005f_005fint128.html

b) I'm doubtful that AC_CHECK_TYPES is a sufficiently good test on all
   platforms. IIRC gcc will generate calls to functions to do the actual
   arithmetic, and I don't think they're guaranteed to be available on all
   platforms. That's how it e.g. behaves for atomic operations. So my
   guess is that you'll need to check that a program that does actual
   arithmetic still links.

c) Personally I don't see the point of testing __uint128_t. That's just
   a pointless test that makes configure run for longer.

 +#ifdef HAVE_INT128
 +typedef struct Int16AggState
 +{
 + bool    calcSumX2;  /* if true, calculate sumX2 */
 + int64   N;  /* count of processed numbers */
 + int128  sumX;   /* sum of processed numbers */
 + int128  sumX2;  /* sum of squares of processed numbers */
 +} Int16AggState;

Not the biggest fan of the variable naming here, but that's not your
fault; you're just consistent, which is good.


 @@ -3030,6 +3139,18 @@ int8_avg_accum(PG_FUNCTION_ARGS)
  Datum
  int2_accum_inv(PG_FUNCTION_ARGS)
  {
 +#ifdef HAVE_INT128
 + Int16AggState *state;
 +
 + state = PG_ARGISNULL(0) ? NULL : (Int16AggState *) PG_GETARG_POINTER(0);
 +
 + /* Should not get here with no state */
 + if (state == NULL)
 + elog(ERROR, "int2_accum_inv called with NULL state");
 +
 + if (!PG_ARGISNULL(1))
 + do_int16_discard(state, (int128) PG_GETARG_INT16(1));
 +#else
   NumericAggState *state;
  
   state = PG_ARGISNULL(0) ? NULL : (NumericAggState *) 
 PG_GETARG_POINTER(0);
 @@ -3049,6 +3170,7 @@ int2_accum_inv(PG_FUNCTION_ARGS)
   if (!do_numeric_discard(state, newval))
   elog(ERROR, "do_numeric_discard failed unexpectedly");
   }

Hm. It might be nicer to move the if (!state) elog() outside the ifdef,
and add curly parens inside the ifdef.

 --- a/src/include/pg_config.h.in
 +++ b/src/include/pg_config.h.in
 @@ -678,6 +678,12 @@
  /* Define to 1 if your compiler understands __VA_ARGS__ in macros. */
  #undef HAVE__VA_ARGS
  
 +/* Define to 1 if the system has the type `__int128_t'. */
 +#undef HAVE___INT128_T
 +
 +/* Define to 1 if the system has the type `__uint128_t'. */
 +#undef HAVE___UINT128_T

pg_config.h.win32 should be updated as well.

Greetings,

Andres Freund




Re: [HACKERS] Parallel Seq Scan

2015-01-10 Thread Amit Kapila
On Sun, Jan 11, 2015 at 9:09 AM, Robert Haas robertmh...@gmail.com wrote:

 On Thu, Jan 8, 2015 at 6:42 AM, Amit Kapila amit.kapil...@gmail.com
wrote:
  2. To enable two types of shared memory queues (error queue and
  tuple queue), we need to ensure that we switch to appropriate queue
  during communication of various messages from parallel worker
  to master backend.  There are two ways to do it
 a.  Save the information about error queue during startup of parallel
  worker (ParallelMain()) and then during error, set the same
(switch
  to error queue in errstart() and switch back to tuple queue in
  errfinish() and errstart() in case errstart() doesn't need to
  propagate
  error).
 b.  Do something similar as (a) for tuple queue in printtup or other
  place
  if any for non-error messages.
   I think approach (a) is slightly better as compared to approach (b) as
  we need to switch many times for tuple queue (for each tuple) and
  there could be multiple places where we need to do the same.  For now,
  I have used approach (a) in Patch which needs some more work if we
  agree on the same.

 I don't think you should be switching queues.  The tuples should be
 sent to the tuple queue, and errors and notices to the error queue.


To achieve what you said (The tuples should be sent to the tuple
queue, and errors and notices to the error queue.), we need to
switch the queues.
The difficulty here is that once we set the queue (using
pq_redirect_to_shm_mq()) through which the communication has to
happen, it will keep using that queue unless we switch again
using pq_redirect_to_shm_mq().  For example, assume we have
initially set error queue (using pq_redirect_to_shm_mq()) then to
send tuples, we need to call pq_redirect_to_shm_mq() to
set the tuple queue as the queue that needs to be used for communication
and again if error happens then we need to do the same for error
queue.
Do you have any other idea to achieve the same?

  3. As per current implementation of Parallel_seqscan, it needs to use
  some information from parallel.c which was not exposed, so I have
  exposed the same by moving it to parallel.h.  Information that is
required
  is as follows:
  ParallelWorkerNumber, FixedParallelState and shm keys -
  This is used to decide the blocks that need to be scanned.
  We might change the way parallel scan/work distribution is done in the
  future, but I don't see any harm in exposing this information.

 Hmm.  I can see why ParallelWorkerNumber might need to be exposed, but
 the other stuff seems like it shouldn't be.

It depends upon how we decide to achieve the scan of blocks
by the backend workers.  In its current form, the patch needs to know
whether my worker is the last worker (and I have used workers_expected
to achieve the same, I know that is not the right thing but I need
something similar if we decide to do in the way I have proposed),
so that it can scan all the remaining blocks.


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] s_lock.h default definitions are rather confused

2015-01-10 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 On 2015-01-11 00:06:41 +0100, Andres Freund wrote:
 Ick, that one is my failure.

 Actually. It looks like I only translated the logic from barrier.h 1:1
 and it already was broken there. Hm, it looks like the current code
 essentially is from 89779bf2c8f9aa480e0ceb8883f93e9d65c43a6e.

There's a small difference, which is that the code actually worked as
of that commit.  I suspect this got broken by Robert's barrier-hacking
of a few months ago.  I don't think I've tried the non-gcc build since
committing 89779bf2c8f9aa48 :-(

 Unless somebody protests I'm going to introduce a generic fallback
 compiler barrier that's just an extern function.

Seems reasonable to me.  An empty external function should do the job
for any compiler that isn't doing cross-procedural analysis.

regards, tom lane




Re: Custom/Foreign-Join-APIs (Re: [HACKERS] [v9.5] Custom Plan API)

2015-01-10 Thread Kohei KaiGai
2015-01-11 10:40 GMT+09:00 Jim Nasby jim.na...@bluetreble.com:
 On 1/9/15, 8:51 PM, Kohei KaiGai wrote:

 2015-01-10 9:56 GMT+09:00 Jim Nasby jim.na...@bluetreble.com:

 On 1/9/15, 6:54 PM, Jim Nasby wrote:


 On 1/9/15, 6:44 PM, Petr Jelinek wrote:



 Yep, I had the same impression when I looked at the code the first time;
 however, it is defined as below. It is not a matter of custom-scan itself.

 /*
  * ==========
  * Scan nodes
  * ==========
  */
 typedef struct Scan
 {
   Plan    plan;
   Index   scanrelid;  /* relid is index into the range table */
 } Scan;


  Yeah, there are actually several places in the code where relid means an
  index into the range table and not the OID of a relation; it still
  manages to confuse me. Nothing this patch can do about that.



 Well, since it's confused 3 of us now... should we change it (as a
 separate patch)? I'm willing to do that work but don't want to waste
 time if
 it'll just be rejected.

 Any other examples of this I should fix too?



 Sorry, to clarify... any other items besides Scan.scanrelid that I should
 fix?

 This naming is a little bit confusing; however, I don't think it should be
 changed, because this structure has been used for a long time, and renaming
 it would complicate back-patching when we find bugs around scanrelid.


 We can still backpatch; it just requires more work. And how many bugs do we
 actually expect to find around this anyway?

 If folks think this just isn't worth fixing fine, but I find the
 backpatching argument rather dubious.

Even though there is no problem with the Scan structure itself, a bugfix may
use the variable name scanrelid. If we renamed it in v9.5, we would also need
a small adjustment to apply such a bugfix to prior versions.
That seems to me a waste of committers' time.
-- 
KaiGai Kohei kai...@kaigai.gr.jp

