Re: [HACKERS] Logical replication and multimaster

2015-12-11 Thread Craig Ringer
On 10 December 2015 at 03:19, Robert Haas  wrote:

> On Sun, Dec 6, 2015 at 10:24 PM, Craig Ringer 
> wrote:
>


> > * A way to securely make a libpq connection from a bgworker without
> > messing with passwords etc. Generate one-time cookies, something like
> > that.
>
> Why would you have the bgworker connect to the database via TCP
> instead of just doing whatever it wants to do directly?



pg_dump and pg_restore, mainly, for copying the initial database state.

PostgreSQL doesn't have SQL-level function equivalents such as
pg_get_tabledef() etc., and there's been strong opposition to adding
anything of the sort when it's been raised before. We could read a dump in
via pg_restore's text conversion and run the appropriate queries over the
SPI, doing the query splitting, COPY parsing and loading, etc ourselves in
a bgworker. It'd be ugly and duplicate a lot, but it'd work. However, it
wouldn't be possible to do restores in parallel that way, and that's
necessary to get good restore performance on big DBs. For that we'd also
basically rewrite pg_restore's parallel functionality using a bgworker
pool.

The alternative is a massive rewrite of pg_dump and pg_restore to allow
them to be used as libraries, and let them use either libpq or the SPI for
queries, presumably via some abstraction layer, as well as further
abstraction for pipelining parallel work. Not very practical, and IIRC
whenever library-ifying pg_dump and pg_restore has been discussed before
it's been pretty firmly rejected.

Also, parallelism at apply time. There are two ways to do apply work in
parallel - a pool of bgworkers that each use the SPI, or using regular
backends managing async libpq connections. At this point I think
Konstantin's approach, with a bgworker pool that processes a work queue, is
probably better for this, and want to explore making that a re-usable
extension for 9.5 and possibly a core part of 9.6 or 9.7.
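
To make that concrete, registering such a pool through the existing
dynamic bgworker API could look roughly like this (a minimal sketch
only; "my_mm_ext", "apply_worker_main" and launch_apply_pool() are
invented names for illustration, not existing code):

/* Hypothetical sketch: an extension launching a pool of apply workers. */
#include "postgres.h"
#include "miscadmin.h"
#include "postmaster/bgworker.h"

void
launch_apply_pool(int nworkers)
{
    int         i;

    for (i = 0; i < nworkers; i++)
    {
        BackgroundWorker worker;
        BackgroundWorkerHandle *handle;

        memset(&worker, 0, sizeof(worker));
        worker.bgw_flags = BGWORKER_SHMEM_ACCESS |
            BGWORKER_BACKEND_DATABASE_CONNECTION;
        worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
        worker.bgw_restart_time = BGW_NEVER_RESTART;
        snprintf(worker.bgw_name, BGW_MAXLEN, "apply worker %d", i);
        snprintf(worker.bgw_library_name, BGW_MAXLEN, "my_mm_ext");
        snprintf(worker.bgw_function_name, BGW_MAXLEN, "apply_worker_main");
        worker.bgw_main_arg = Int32GetDatum(i);     /* worker slot number */
        worker.bgw_notify_pid = MyProcPid;

        if (!RegisterDynamicBackgroundWorker(&worker, &handle))
            elog(WARNING, "could not register apply worker %d", i);
    }
}

Each worker's main loop would then pull items off the shared-memory work
queue Konstantin describes.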

So it's mainly for pg_restore.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] Error with index on unlogged table

2015-12-11 Thread Andres Freund
On 2015-12-11 15:43:24 +0900, Kyotaro HORIGUCHI wrote:
> What it is doing seems reasonable to me, but copying_initfork
> doesn't seem to be necessary. Gating both log_newpage() and
> smgrimmedsync() on use_wal, with use_wal taking INIT_FORKNUM into
> account, would be more descriptive (more readable, in other words).

The smgrimmedsync() has a different condition: it doesn't, and may not,
check XLogIsNeeded().

Andres


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: In-core regression tests for replication, cascading, archiving, PITR, etc.

2015-12-11 Thread Alvaro Herrera
Michael Paquier wrote:
> On Fri, Dec 11, 2015 at 5:35 AM, Alvaro Herrera
>  wrote:

> > Since we now have the node name in the log file name, perhaps we no
> > longer need the port number in there
> 
> There is no node name in the log file name as of now, they are built
> using the port number, and the information of a node is dumped into
> the central log file when created (see dump_info).

Yeah, I realized this after posting.  What I thought was the node name,
based on some of the files I had lying around, was actually the test
name.

> I guess that to complete your idea we could allow PostgresNode to get
> a custom name for its log file through an optional parameter like
> logfile => 'myname' or similar. And if nothing is defined, the process
> falls back to applname. So this would give the following:
> ${testname}_${logfile}.log

Sure. I don't think we should use the name only for the log file, though,
but also for things like the "## " informative messages we print here
and there.  That would make the log file simpler to follow.  Also, I'm
not sure about having it be optional.  (TBH I'm not sure about applname
either; why do we keep that one?)

> It seems that we had better keep the test name as a prefix of the log
> file name though, to avoid an overlap with any other test in the same
> series. Thoughts?

Yes, agreed on that.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Remaining 9.5 open items

2015-12-11 Thread Etsuro Fujita

On 2015/12/11 1:18, Robert Haas wrote:

On Wed, Dec 9, 2015 at 2:52 AM, Etsuro Fujita
 wrote:

Thank you for committing the patch!

Sorry, I overlooked a typo in the docs: s/more that one/more than one/.
Please find attached a patch.



Committed, thanks.


Thanks!

Best regards,
Etsuro Fujita




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: ResourceOwner optimization for tables with many partitions

2015-12-11 Thread Aleksander Alekseev
>> To be honest, I think this patch is really ugly. [...] I'm not sure
>> exactly what to do about that, but it seems like a problem.

I have an idea. There are actually two types of resources - int-like
(buffers, files) and void*-like (RelationRef, TupleDesc, ...). What if
I split ResourceArray into IntResourceArray and PointerResourceArray? It
would definitely solve the ugliness problem --- no more memcpy's, char[]
buffers, etc.
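
To sketch what I mean (hypothetical types, purely to illustrate the
split; nothing like this exists in the tree yet):

#include "postgres.h"

typedef struct IntResourceArray
{
    int        *itemsarr;      /* buffer IDs, file descriptors, ... */
    int         invalidval;    /* value marking an unused slot */
    uint32      capacity;      /* allocated length of itemsarr */
    uint32      nitems;        /* number of valid entries */
} IntResourceArray;

typedef struct PointerResourceArray
{
    void      **itemsarr;      /* RelationRef, TupleDesc, ...; NULL = unused */
    uint32      capacity;
    uint32      nitems;
} PointerResourceArray;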

>> It would be advisable for example that hash_any not suddenly become
>> covered by the "must not fail" requirement.

Frankly I can't think of any case where hash_any could or should fail.
Maybe we should just add a "must not fail" constraint to hash_any's
description?

Also, I could use some other hash implementation. That may be reasonable
in this case since the size of the data I would like to hash is small and
known in advance.

>> BTW, I do not think you can get away with the requirement that
>> all-zeroes isn't a valid resource representation. It might be okay
>> today but it's hardly future-proof.

Agreed. I could store a value that should be considered "zero" in
ResourceArray. It would be InvalidBuffer for buffers, -1 for files and
NULL for all void*-types. Does such a solution sound OK?

Best regards,
Aleksander


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postgres_fdw join pushdown (was Re: Custom/Foreign-Join-APIs)

2015-12-11 Thread Etsuro Fujita

On 2015/12/11 14:16, Ashutosh Bapat wrote:

On Thu, Dec 10, 2015 at 11:20 PM, Robert Haas wrote:

On Tue, Dec 8, 2015 at 6:40 AM, Etsuro Fujita wrote:
> IMO I want to see the EvalPlanQual fix in the first version for 9.6.

+1.



I think there is still a lot of functionality that can be offered without
the EvalPlanQual fix. As long as we do not push down joins when there are
RowMarks involved, implementing that hook is not required. We won't be
able to push down joins for DML or when there are FOR SHARE/UPDATE
clauses in the query, but there are a huge number of queries which will
benefit from the pushdown even without that support. There's nothing in
this patch which gets in the way of implementing the EvalPlanQual fix; it
can easily be added after committing the first version. On the other
hand, getting minimal (it's not really minimal, it's much more than that)
postgres_fdw join pushdown support committed opens up the possibility of
working on multiple items (as listed in my mail) in parallel.



I am not saying that we do not need the EvalPlanQual fix in 9.6, but it's
not needed in the first cut. If we get the first cut in during the first
couple of months of 2016, there's plenty of room for the fix to go into
9.6. It would be a really bad situation if we could not get postgres_fdw
join pushdown supported in 9.6 because the EvalPlanQual hook could not be
committed while the rest of the code was ready. The EvalPlanQual fix in
core had been under discussion since April 2015; it took 8 months to get
that fixed. Hopefully we won't need that long to implement the hook in
postgres_fdw, but that number says something about the complexity of the
feature.


ISTM that further enhancements are of secondary importance. Let's do the 
EvalPlanQual fix first. I'll add the RecheckForeignScan callback routine 
to your version of the postgres_fdw patch as soon as possible.


Best regards,
Etsuro Fujita




--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: ResourceOwner optimization for tables with many partitions

2015-12-11 Thread Aleksander Alekseev
> It would be InvalidBuffer for buffers, -1 for files and NULL for all
> void*-types. Does such a solution sound OK?

On second thought I believe there is no reason for storing anything for
void*-like types. I could just hardcode NULL in PointerResourceArray.

Best regards,
Aleksander



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] use_remote_estimate usage for join pushdown in postgres_fdw

2015-12-11 Thread Ashutosh Bapat
Hi All,
The postgres_fdw documentation says the following about use_remote_estimate (
http://www.postgresql.org/docs/devel/static/postgres-fdw.html)
--
use_remote_estimate
This option, which can be specified for a foreign table or a foreign
server, controls whether postgres_fdw issues remote EXPLAIN commands to
obtain cost estimates. A setting for a foreign table overrides any setting
for its server, but only for that table. The default is false.
--

I am trying to work out how we should use this option in the context of
join pushdown, and for that matter any pushdown involving more than one
table.

I came up with the following options:
1. Foreign base relations derive their use_remote_estimate setting either
from the server setting or the per-table setting. A join between two
foreign relations should derive its use_remote_estimate setting from the
joining relations (recursively). This means that we will use EXPLAIN to
estimate the cost of a join if "all" the involved base foreign relations
have use_remote_estimate true (whether they derive it from the
server-level setting or the table-level setting).

2. Similar to 1, but use EXPLAIN to estimate costs if "any" of the
involved base foreign relations has use_remote_estimate true.

3. Since a join between two foreign relations is not a table-level
phenomenon but a server-level one, we should use the server-level
setting. This means that we will use EXPLAIN output to estimate the cost
of a join if the foreign server has use_remote_estimate true,
irrespective of the settings of the individual foreign relations involved
in that join.
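
In code terms, options 1 and 2 differ only in the operator used to
combine the flags of the joining relations. A sketch, with invented
structures rather than the actual postgres_fdw code:

#include "postgres.h"

/* Hypothetical stand-in for whatever per-relation info the join
 * pushdown patch keeps; not a real postgres_fdw structure. */
typedef struct JoinRelInfo
{
    bool        is_base_rel;
    bool        use_remote_estimate;   /* meaningful for base rels */
    struct JoinRelInfo *outer;
    struct JoinRelInfo *inner;
} JoinRelInfo;

static bool
derive_use_remote_estimate(JoinRelInfo *rel, bool require_all)
{
    if (rel->is_base_rel)
        return rel->use_remote_estimate;

    if (require_all)        /* option 1: "all" base rels must have it */
        return derive_use_remote_estimate(rel->outer, true) &&
            derive_use_remote_estimate(rel->inner, true);
    else                    /* option 2: "any" base rel suffices */
        return derive_use_remote_estimate(rel->outer, false) ||
            derive_use_remote_estimate(rel->inner, false);
}

Option 3 would ignore the per-relation flags entirely and consult only
the server-level option.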

Unfortunately the documentation and the comments in the code do not say
much about the intention of this setting (i.e. why and how it is expected
to be used) in the context of a server.

The intention behind the server-level setting is more confusing. It does
not override the table-level setting, so it is not intended for a
prohibitive reason, e.g. a server that doesn't support EXPLAIN the way it
will be interpreted locally. It seems to act more like a default in case
the table-level setting is absent. A user may set table-level
use_remote_estimate to true if the cost of EXPLAIN is very small compared
to that of a table scan (with or without indexes), or if adding
conditional clauses to the query alters the costs so heavily that the
cost of the EXPLAIN itself is justified. But I could be wrong about these
intentions.

If we go by the above intention behind the table-level setting, the
second option makes more sense: a table for which use_remote_estimate is
true can change the cost of the join heavily because of the clauses in
the join, and it's better to get that cost from the foreign server than
to guess it locally.

Comments/suggestions are welcome.
-- 
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company


Re: [HACKERS] Logical replication and multimaster

2015-12-11 Thread Andres Freund
On 2015-12-11 18:12:55 +0800, Craig Ringer wrote:
> On 10 December 2015 at 03:19, Robert Haas  wrote:
> > On Sun, Dec 6, 2015 at 10:24 PM, Craig Ringer 
> > wrote:
> > > * A way to securely make a libpq connection from a bgworker without
> > > messing with passwords etc. Generate one-time cookies, something
> > > like that.
> >
> > Why would you have the bgworker connect to the database via TCP
> > instead of just doing whatever it wants to do directly?

> pg_dump and pg_restore, mainly, for copying the initial database state.

Well, you don't necessarily want to connect directly from the bgworker,
but from processes started by a bgworker. I guess that's where a good
bit of Robert's confusion originated.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Making tab-complete.c easier to maintain

2015-12-11 Thread Greg Stark
On Fri, Dec 11, 2015 at 8:12 AM, Michael Paquier
 wrote:
> Also, if
> we prioritize a dynamically generated tab completion using gram.y, so
> be it and let's reject both patches then...


FWIW I poked at the bison output to see if it would be possible to do.
I think it's theoretically possible, but it's a huge project and would
create dependencies on bison internals that we would be unlikely to
accept (unless we can get new API methods added to bison, which is not
entirely unreasonable). And bison is only a small part of the problem.

You would need

a) A new protocol message to send the partial query to the server and
get back a list of completions
b) Machinery in bison to return both all terminals that could come
next as well as all possible terminals it could reduce to
c) Some kind of reverse lexer to determine for each terminal what the
current partial input would have to match to be accepted
d) Some way to deal with the partially parsed query to find out what
schemas, tables, column aliases, etc should be considered for possible
completion

The machinery to do (b) is actually there in bison for the error
reporting. It's currently hard-coded to limit the output to 5 and
there's no API for it, just a function that returns an error string.
But it might be possible to get bison to add an API method for it.
That's as far as I got; I have no idea what (c) and (d) would look like.

So I don't think it makes sense to hold up improvements today hoping
for something like this. What might be more realistic is making sure
to design the minilanguage to be easily generated by Perl scripts or
the like, and then writing something that picks up easy patterns in
gram.y, or possibly pokes through the bison tables, to generate a table
of minilanguage matchers. My instinct is that would be easier to do with
a real minilanguage than with a regular expression system.


-- 
greg


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] postgres_fdw join pushdown (was Re: Custom/Foreign-Join-APIs)

2015-12-11 Thread Robert Haas
On Fri, Dec 11, 2015 at 12:16 AM, Ashutosh Bapat
 wrote:
>> +1.
>>
> I think there is still a lot functionality that is offered without
> EvalPlanQual fix.

Sure.  But I think that the EvalPlanQual-related fixes might have some
impact on the overall design, and I don't want to commit this with one
design and then have to revise it because we didn't examine the
EvalPlanQual requirements carefully enough.  We've already been down
that path once, and I don't want to go back.  It's not always possible
to get the design right the first time, but it's definitely nicer when
you do.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Patch: fix lock contention for HASHHDR.mutex

2015-12-11 Thread Aleksander Alekseev
Hello all,

Consider following stacktrace:

(gdb) bt
#0  0x7f77c81fae87 in semop () syscall-template.S:81
#1  0x0063b721 in PGSemaphoreLock pg_sema.c:387
#2  0x00692e1b in LWLockAcquire lwlock.c:1026
#3  0x0068d4c5 in LockAcquireExtended lock.c:881
#4  0x0068dcc1 in LockAcquire lock.c:672
#5  0x0068b7a8 in LockRelationOid lmgr.c:112
#6  0x00501d18 in find_inheritance_children pg_inherits.c:120
#7  0x00501d80 in find_all_inheritors pg_inherits.c:182
#8  0x0062db8d in expand_inherited_rtentry prepunion.c:1285
#9  expand_inherited_tables prepunion.c:1212
#10 0x00622705 in subquery_planner planner.c:501
#11 0x00622d31 in standard_planner planner.c:285
#12 0x0069ef0c in pg_plan_query postgres.c:809
#13 0x0069f004 in pg_plan_queries postgres.c:868
#14 0x006a0fc2 in exec_simple_query postgres.c:1033
#15 PostgresMain postgres.c:4032
#16 0x00467479 in BackendRun postmaster.c:4237
#17 BackendStartup postmaster.c:3913
#18 ServerLoop () postmaster.c:1684
#19 0x0064c828 in PostmasterMain postmaster.c:1292
#20 0x00467f3e in main main.c:223

It turns out PostgreSQL can spend a lot of time waiting for a lock in
this particular place, especially if you are running PostgreSQL on a
60-core server. Which obviously is a pretty bad sign.

You can easily reproduce this issue on a regular Core i7 server. Use the
attached schema.sql file to create a database and run:

pgbench -j 8 -c 8 -f pgbench.sql -T 300 my_database 2>/dev/null &

While this example is running, connect to some PostgreSQL process with
gdb and run bt/c from time to time. You will see that PostgreSQL waits
on this particular lock quite often.

The problem is that the code between LWLockAcquire (lock.c:881) and
LWLockRelease (lock.c:1020) can _sometimes_ take up to 3-5 ms to run.
Using the good old gettimeofday-and-logging method I found the bottleneck:

-- proclock = SetupLockInTable [lock.c:892]
 `-- proclock = (PROCLOCK *) hash_search_with_hash_value [lock.c:1105]
   `-- currBucket = get_hash_entry(hashp) [dynahash.c:985]
     `-- SpinLockAcquire(&hctlv->mutex) [dynahash.c:1187]

If my measurements are not wrong (I didn't place gettimeofday between
SpinLockAcquire/SpinLockRelease, etc.) we sometimes spend about 3 ms here
waiting for a spinlock, which doesn't seem right.

I managed to fix this behaviour by modifying the choose_nelem_alloc
procedure in dynahash.c (see attached patch). The idea is to double the
number of items we allocate when there are no free items left in the hash
table. That halves the number of allocations, which reduces contention.

This patch doesn't cause any performance degradation according to
pgbench, `make check` passes, etc.

Best regards,
Aleksander

diff --git a/src/backend/utils/hash/dynahash.c b/src/backend/utils/hash/dynahash.c
index eacffc4..48def5e 100644
--- a/src/backend/utils/hash/dynahash.c
+++ b/src/backend/utils/hash/dynahash.c
@@ -544,19 +544,19 @@ choose_nelem_alloc(Size entrysize)
 	elementSize = MAXALIGN(sizeof(HASHELEMENT)) + MAXALIGN(entrysize);
 
 	/*
-	 * The idea here is to choose nelem_alloc at least 32, but round up so
+	 * The idea here is to choose nelem_alloc at least 64, but round up so
 	 * that the allocation request will be a power of 2 or just less. This
 	 * makes little difference for hash tables in shared memory, but for hash
 	 * tables managed by palloc, the allocation request will be rounded up to
 	 * a power of 2 anyway.  If we fail to take this into account, we'll waste
 	 * as much as half the allocated space.
 	 */
-	allocSize = 32 * 4;			/* assume elementSize at least 8 */
+	allocSize = 64 * 4;			/* assume elementSize at least 8 */
 	do
 	{
 		allocSize <<= 1;
 		nelem_alloc = allocSize / elementSize;
-	} while (nelem_alloc < 32);
+	} while (nelem_alloc < 64);
 
 	return nelem_alloc;
 }


schema.sql
Description: application/sql


pgbench.sql
Description: application/sql

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: In-core regression tests for replication, cascading, archiving, PITR, etc.

2015-12-11 Thread Michael Paquier
On Fri, Dec 11, 2015 at 8:48 PM, Alvaro Herrera
 wrote:
> Michael Paquier wrote:
>> On Fri, Dec 11, 2015 at 5:35 AM, Alvaro Herrera
>>  wrote:
>> I guess that to complete your idea we could allow PostgresNode to get
>> a custom name for its log file through an optional parameter like
>> logfile => 'myname' or similar. And if nothing is defined, the process
>> falls back to applname. So this would give the following:
>> ${testname}_${logfile}.log
>
> Sure. I don't think we should use the name only for the log file, though,
> but also for things like the "## " informative messages we print here
> and there.  That would make the log file simpler to follow.  Also, I'm
> not sure about having it be optional.  (TBH I'm not sure about applname
> either; why do we keep that one?)

OK, so let's do this: the node name becomes a mandatory argument of
get_new_node, which is passed to "new PostgresNode" like the port and
the host, and it is then used in the log file name as well as in the
informative messages you are mentioning. That's a simple enough patch.
Are you fine with this approach?

Regarding the application name, I still think it is useful to have,
though. pg_rewind should actually use it, and the other patch adding
the recovery routines will use it.
-- 
Michael


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Move PinBuffer and UnpinBuffer to atomics

2015-12-11 Thread Andres Freund
On 2015-12-11 15:56:46 +0300, Alexander Korotkov wrote:
> On Thu, Dec 10, 2015 at 9:26 AM, Amit Kapila 
> wrote:
>  We did see this on a big Intel machine in practice. pgbench -S takes
>  the shared ProcArrayLock very frequently. Once some number of
>  connections is reached, new connections hang on getting the exclusive
>  ProcArrayLock. I think we could do some workaround for this problem.
>  For instance, when an exclusive lock waiter hits some timeout it could
>  set some special bit which prevents others from taking new shared
>  locks.

> > Yes, that's right, but I think in general the solution to this
> > problem should be: don't let any exclusive locker starve while still
> > allowing as many shared lockers as possible.  I think here it is
> > important how we define starving; should it be based on time or
> > something else?  I find a timer-based solution somewhat less
> > suitable, but maybe it is okay if there is no better way.
> >
> 
> Yes, we probably should find something better.

> >>> Another way could be to check if the exclusive locker needs to go
> >>> through repeated waits a couple of times; then we can set such a bit.
> >>>
> >>
> >> I'm not sure what you mean by repeated wait. Do you mean the
> >> exclusive locker was woken up twice by timeout?
> >>
> >
> > I mean to say that once the exclusive locker is woken up, it again
> > retries to acquire the lock as it does today, but if it finds that the
> > number of retries is greater than a certain threshold (let us say 10),
> > then we set the bit.
> >
> 
> Yes, there is a retry cycle in the LWLockAcquire function. The retry
> case is when a waiter is woken up, but someone else steals the lock
> before it. A lock waiter is woken up by the lock releaser only when the
> lock becomes free. But in the case of high concurrency for the shared
> lock, the lock almost never becomes free, so the exclusive locker would
> never be woken up. I'm pretty sure this happens on the big Intel
> machine while we run the benchmark, so relying on the number of retries
> wouldn't work in this case. I'll run tests to verify whether retries
> happen in our case.

I seriously doubt that making lwlocks fairer is the right way to go
here. In my testing the "unfairness" is essential to performance - the
number of context switches otherwise increases massively.

I think in this case it's better to work on making the lock less
contended, rather than making micro-optimizations around the locking
behaviour.


Andres


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Support for N synchronous standby servers - take 2

2015-12-11 Thread Masahiko Sawada
On Wed, Dec 9, 2015 at 8:59 PM, Masahiko Sawada  wrote:
> On Wed, Nov 18, 2015 at 2:06 PM, Masahiko Sawada  
> wrote:
>> On Tue, Nov 17, 2015 at 7:52 PM, Kyotaro HORIGUCHI
>>  wrote:
>>> Oops.
>>>
>>> At Tue, 17 Nov 2015 19:40:10 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI 
>>>  wrote in 
>>> <20151117.194010.17198448.horiguchi.kyot...@lab.ntt.co.jp>
 Hello,

 At Tue, 17 Nov 2015 18:13:11 +0900, Masahiko Sawada 
  wrote in 
 

Re: [HACKERS] Parallel Aggregate

2015-12-11 Thread Robert Haas
On Fri, Dec 11, 2015 at 1:42 AM, Haribabu Kommi
 wrote:
> Here I have attached a POC patch of parallel aggregate based on the
> combine aggregate patch; it contains the combine aggregate changes
> also. This patch generates and executes the parallel aggregate plan
> as discussed in earlier threads.

Pretty cool.  I'm pretty sure there's some stuff in this patch that's
not right in detail, but I think this is an awfully exciting
direction.

I'd like to commit David Rowley's patch from the other thread first,
and then deal with this one afterwards.  The only thing I feel
strongly needs to be changed in that patch is CFUNC -> COMBINEFUNC,
for clarity.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: fix lock contention for HASHHDR.mutex

2015-12-11 Thread Tom Lane
Aleksander Alekseev  writes:
> It turns out PostgreSQL can spend a lot of time waiting for a lock in
> this particular place, especially if you are running PostgreSQL on a
> 60-core server. Which obviously is a pretty bad sign.
> ...
> I managed to fix this behaviour by modifying choose_nelem_alloc
> procedure in dynahash.c (see attached patch).

TBH, this is just voodoo.  I don't know why this change would have made
any impact on lock acquisition performance, and neither do you, and the
odds are good that it's pure chance that it changed anything.  One likely
theory is that you managed to shift around memory allocations so that
something aligned on a cacheline boundary when it hadn't before.  But, of
course, the next patch that changes allocations anywhere in shared memory
could change that back.  There are lots of effects like this that appear
or disappear based on seemingly unrelated code changes when you're
measuring edge-case performance.

The patch is not necessarily bad in itself.  As time goes by and machines
get bigger, it can make sense to allocate more memory at a time to reduce
memory management overhead.  But arguing for it on the basis that it fixes
lock allocation behavior with 60 cores is just horsepucky.  What you were
measuring there was steady-state hash table behavior, not the speed of the
allocate-some-more-memory code path.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [sqlsmith] Failed to generate plan on lateral subqueries

2015-12-11 Thread Andreas Seltenreich
Tom Lane writes:
> [2. transitive-lateral-fixes-2.patch ]
> [2. remove-lateraljoininfo-2.patch ]

They seem to have fixed the issue for good now.  No errors have been
logged for 2e8 queries since applying the first patch.  (The second one
was applied later and didn't get as much exposure.)  I guess that means
I have to go back to extending the grammar again :-).

regards
Andreas


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: fix lock contention for HASHHDR.mutex

2015-12-11 Thread Aleksander Alekseev
Hello, Tom

I see your point, but I would like to clarify a few things.

1. Do we consider the described measurement method good enough to
conclude that PostgreSQL sometimes really spends 3 ms in a spinlock
(about the RTT between two Internet hosts in the same city)? If not,
what method should be used to prove or disprove this?

2. If we agree that PostgreSQL does sometimes spend 3 ms in a spinlock,
do we consider this a problem?

3. If we consider this a problem, what method is appropriate for finding
the real reason for such behaviour, so we can fix it?

Best regards,
Aleksander


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Should TIDs be typbyval = FLOAT8PASSBYVAL to speed up CREATE INDEX CONCURRENTLY?

2015-12-11 Thread Robert Haas
On Wed, Dec 9, 2015 at 8:16 PM, Peter Geoghegan  wrote:
> On Tue, Nov 17, 2015 at 7:33 PM, Corey Huinker  
> wrote:
>> I'm willing, but I'm too new to the codebase to be an effective reviewer
>> (without guidance). The one thing I can offer in the mean time is this: my
>> company/client nearly always has a few spare AWS machines on the largish
>> side where I can compile uncommitted patches and benchmark stuff for y'all.
>
> I think that this particular patch is close to being a slam-dunk, so I
> don't think it's particularly needed here. But thanks.

It never hurts to have a few extra performance test results - I'm all
in favor of Corey doing some testing.

Also, I'd be in favor of you updating the patch to reflect the
comments from Tom and Simon on November 17th.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [sqlsmith] Failed to generate plan on lateral subqueries

2015-12-11 Thread Andreas Seltenreich
Peter Geoghegan writes:

> On Sun, Dec 6, 2015 at 9:52 AM, Andreas Seltenreich  
> wrote:
>> I've added new grammar rules to sqlsmith and improved some older ones.
>
> Could you possibly teach sqlsmith about INSERT ... ON CONFLICT DO
> UPDATE/IGNORE? I think that that could be very helpful, especially if
> it could be done in advance of any stable release of 9.5.

In summary, it can't be added ad-hoc, but might still happen in advance
of the release of 9.5.

Adding upsert needs significant effort because, historically,
non-boolean value expression productions yield a random type.  This is
not a problem for generating queries, but it is for inserts.  Also,
sqlsmith can at the moment only generate sensible value expressions from
column references.  Generating a proper upsert would require supporting
type-constraining of productions, as well as adding productions for
pulling values out of thin air (e.g., generating atomic value subselects
or calling argumentless functions).

regards,
Andreas


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Tab-comletion for RLS

2015-12-11 Thread Masahiko Sawada
On Thu, Dec 10, 2015 at 11:07 PM, Robert Haas  wrote:
> On Tue, Dec 8, 2015 at 8:32 AM, Masahiko Sawada  wrote:
>> I found some gaps in tab-completion for RLS in 9.5.
>>
>> * ALTER POLICY [TAB]
>> I expected the list of policy names to appear, but nothing appears.
>>
>> * ALTER POLICY hoge_policy ON [TAB]
>> I expected the list of table names related to the specified policy to
>> appear, but all table names appear.
>>
>> * ALTER POLICY ... ON ... TO [TAB]
>> I expected { role_name | PUBLIC | CURRENT_USER | SESSION_USER } to
>> appear, but only role_name and PUBLIC appear.
>> The same problem exists in
>> "
>> CREATE POLICY ... ON ... TO [TAB]
>> "
>> .
>>
>> Problems #1 and #2 exist in 9.5 and later, but #3 exists only in 9.5
>> because it was unintentionally fixed by commit
>> 2f8880704a697312d8d10ab3a2ad7ffe4b5e3dfd.
>> I think we should apply the necessary part of this commit to 9.5 as
>> well, though?
>>
>> Attached patches are:
>> * 000_fix_tab_completion_rls.patch
>>   fixes #1, #2 problem, and is for master branch and REL9_5_STABLE.
>> * 001_fix_tab_completion_rls_for_95.patch
>>   fixes #3 problem, and is for only REL9_5_STABLE.
>
> I've committed 000 and back-patched it to 9.5.  I'm not quite sure
> what to do about 001; maybe it's better to back-port the whole commit
> rather than just bits of it.
>

Yes, I agree with back-porting the whole commit.

Regards,

--
Masahiko Sawada


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PROPOSAL] VACUUM Progress Checker.

2015-12-11 Thread Robert Haas
On Fri, Dec 11, 2015 at 1:25 AM, Michael Paquier
 wrote:
>> For another thing, there are definitely going to be
>> some people that want the detailed information - and I can practically
>> guarantee that if we don't make it available, at least one person will
>> write a tool that tries to reverse-engineer the detailed progress
>> information from whatever we do report.
>
> OK, so this justifies having detailed information, but does it justify
> having precise and accurate data? ISTM that having detailed information
> and having precise information are two different things. The level of
> detail is defined by how verbose we want the information to be, and the
> list you are giving would fulfill this requirement nicely for VACUUM.
> The level of precision/accuracy at which this information is provided,
> though, depends on the frequency at which we want to send it. For a
> long-running VACUUM it does not seem necessary to update the fields of
> the progress tracker each time a counter needs to be incremented.
> CLUSTER has been mentioned as well as a potential target for the
> progress facility, but it seems to fall into the same category of
> things that would need to be reported at a slower pace than
> "each-time-a-counter-is-incremented".
>
> My impression is just based on the needs of VACUUM and CLUSTER.
> Perhaps I am lacking imagination regarding the potential use cases of
> the progress facility, though, in cases where we'd want to provide
> extremely precise progress information :)
> It just seems to me that this is not a requirement for VACUUM or
> CLUSTER. That's all.

It's not a hard requirement, but it should be quite easy to do without
adding any significant overhead.  All you need to do is something
like:

foo->changecount++;        /* now odd: readers know an update is in flight */
pg_write_barrier();        /* publish the bump before touching the data */
foo->count_of_blocks++;
pg_write_barrier();        /* publish the data before the final bump */
foo->changecount++;        /* back to even: the snapshot is consistent */

I suspect that's plenty cheap enough to do for every block.
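
For completeness, a reader would pair that with a retry loop that only
trusts a snapshot taken while the changecount was even and unchanged -
a sketch of the usual st_changecount-style protocol, not code from the
patch:

int64       blocks;

for (;;)
{
    uint32      before = foo->changecount;

    pg_read_barrier();          /* read the counter after the changecount */
    blocks = foo->count_of_blocks;
    pg_read_barrier();          /* and re-check the changecount after */

    if ((before & 1) == 0 && before == foo->changecount)
        break;                  /* even and unchanged => consistent */
}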

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [sqlsmith] Failed to generate plan on lateral subqueries

2015-12-11 Thread Tom Lane
Andreas Seltenreich  writes:
> Tom Lane writes:
>> [2. transitive-lateral-fixes-2.patch ]
>> [2. remove-lateraljoininfo-2.patch ]

> They seem to have fixed the issue for good now.  No errors have been
> logged for 2e8 queries since applying the first patch.  (The second one
> was applied later and didn't get as much exposure.)

Thanks.  It's good that you tested both states of the code, since I intend
to back-patch transitive-lateral-fixes into 9.4 and 9.3, but not the
second patch.

> I guess that means I have to go back to extending the grammar again :-).

I await the results with interest.  Did you note the suggestion about
trying to stress the ON CONFLICT code with this?  You'd need it to
issue non-SELECT queries, which might create some reproducibility
issues...

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] More on the libxml2 update situation

2015-12-11 Thread Tom Lane
So I did a routine software update on my RHEL6 workstation, and noticed
a security update for libxml2 go by.  And guess what: now an XML-enabled
build of Postgres fails regression tests for me, just as previously
discussed in
http://www.postgresql.org/message-id/flat/cafj8pra4xjqfgnqcqmcygx-umgmr3stt3xfeuw7kbsoiovg...@mail.gmail.com

A little bit of digging shows that the behavior we're unhappy about was
introduced as part of the official patch for CVE-2015-7499.  This means
that, whether or not we can persuade Veillard that it was a bad idea and
he should undo it, the bogus behavior is likely to spread into mainstream
distributions a lot faster than any followup fix will :-(.  Bugfix updates
just don't get accepted as quickly as security updates.

I'm starting to think that maybe we'd better knuckle under and provide
a variant expected file that matches this behavior.  We're likely to be
seeing it in the wild for some time to come.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Rework the way multixact truncations work

2015-12-11 Thread Robert Haas
On Thu, Dec 10, 2015 at 9:32 AM, Robert Haas  wrote:
> On Thu, Dec 10, 2015 at 9:04 AM, Andres Freund  wrote:
>> On 2015-12-10 08:55:54 -0500, Robert Haas wrote:
>>> Maybe.  But I think we could use a little more vigorous discussion of
>>> that issue, since Andres doesn't seem to be very convinced by your
>>> analysis, and I don't currently understand what you've fixed because I
>>> can't, as mentioned several times, follow your patch stack.
>>
>> The issue at hand is that the following block
>>     oldestOffsetKnown =
>>         find_multixact_start(oldestMultiXactId, &oldestOffset);
>> ...
>>     else if (prevOldestOffsetKnown)
>>     {
>>         /*
>>          * If we failed to get the oldest offset this time, but we
>>          * have a value from a previous pass through this function,
>>          * use the old value rather than automatically forcing it.
>>          */
>>         oldestOffset = prevOldestOffset;
>>         oldestOffsetKnown = true;
>>     }
>> in SetOffsetVacuumLimit() fails to restore offsetStopLimit, which then
>> is set in shared memory:
>>     /* Install the computed values */
>>     LWLockAcquire(MultiXactGenLock, LW_EXCLUSIVE);
>>     MultiXactState->oldestOffset = oldestOffset;
>>     MultiXactState->oldestOffsetKnown = oldestOffsetKnown;
>>     MultiXactState->offsetStopLimit = offsetStopLimit;
>>     LWLockRelease(MultiXactGenLock);
>>
>> so, if find_multixact_start() failed - a "should never happen"
>> occurrence - we install a wrong stop limit. It does get 'repaired' upon
>> the next succeeding find_multixact_start() in SetOffsetVacuumLimit() or
>> a restart, though.
>>
>> Adding a 'prevOffsetStopLimit' and using it seems like a ~5 line patch.
>
> So let's do that, then.

Who is going to take care of this?
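
For reference, the fix would presumably just mirror the existing
fallback - a hypothetical sketch, not committed code:

else if (prevOldestOffsetKnown)
{
    oldestOffset = prevOldestOffset;
    oldestOffsetKnown = true;
    /* restore the previous stop limit too, instead of leaving it unset */
    offsetStopLimit = prevOffsetStopLimit;
}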

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Making tab-complete.c easier to maintain

2015-12-11 Thread Tom Lane
Greg Stark  writes:
> FWIW I poked at the bison output to see if it would be possible to do.
> I think it's theoretically possible, but it's a huge project and would
> create dependencies on bison internals that we would be unlikely to
> accept.

That's the impression I got when I looked at it briefly, too.  Without
some new APIs from bison it seems like it'd be way too messy/fragile.

> You would need
> a) A new protocol message to send the partial query to the server and
> get back a list of completions

As far as that goes, I'd imagined the functionality continuing to be
on the psql side.  If we make it wait for a protocol upgrade, that
makes it even more improbable that it will ever happen.  psql already
has its own copy of the lexer, so making it have something derived
from the grammar doesn't seem like a maintainability problem.

> b) Machinery in bison to return both all terminals that could come
> next as well as all possible terminals it could reduce to

Yeah, this is the hard part.

> d) Some way to deal with the partially parsed query to find out what
> schemas, tables, column aliases, etc should be considered for possible
> completion

I was imagining that some of that knowledge could be pushed back into the
grammar.  That is, instead of just using generic nonterminals like ColId,
we'd need to have TableId, SchemaId, etc and be careful to use the
appropriate one(s) in each production of the grammar.  Then, psql would
know which completion query to issue by noting which of these particular
nonterminals is a candidate for the next token right now.  However, that
moves the goalposts in terms of what we'd have to be able to get back from
the alternate bison machinery.

Also, it's not just a SMOP to modify the grammar like that: it's not
at all unlikely that attempting to introduce such a finer categorization
would lead to a broken grammar, ie shift/reduce or reduce/reduce
conflicts.  We couldn't be sure it would work till we've tried it.

> So I don't think it makes sense to hold up improvements today hoping
> for something like this.

Yeah, it's certainly a wishlist item rather than something that should
block shorter-term improvements.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bootstrap DATA is a pita

2015-12-11 Thread Mark Dilger

> On Dec 11, 2015, at 1:46 PM, Tom Lane  wrote:
> 
> Alvaro Herrera  writes:
>> Crazy idea: we could just have a CSV file which can be loaded into a
>> table for mass changes using regular DDL commands, then dumped back from
>> there into the file.  We already know how to do these things, using
>> \copy etc.  Since CSV uses one line per entry, there would be no merge
>> problems either (or rather: all merge problems would become conflicts,
>> which is what we want.)
> 
> That's an interesting proposal.  It would mean that the catalog files
> stay at more or less their current semantic level (direct representations
> of bootstrap catalog contents), but it does sound like a more attractive
> way to perform complex edits than writing Emacs macros ;-).

I would be happy to work on this, if there is much chance of the community
accepting a patch.  Do you think replacing the numeric Oids for functions,
operators, opclasses and such in the source files with their names would
be ok, with the SQL converting those to Oids in the output?  My eyes have
gotten tired more than once trying to read header files in src/include/catalog
looking for mistakes in what largely amounts to a big table of numbers.

For example, in pg_amop.h:

/* default operators int2 */
DATA(insert (   1976   21 21 1 s  95 403 0 ));
DATA(insert (   1976   21 21 2 s 522 403 0 ));
DATA(insert (   1976   21 21 3 s  94 403 0 ));
DATA(insert (   1976   21 21 4 s 524 403 0 ));
DATA(insert (   1976   21 21 5 s 520 403 0 ));

Would become something like:

amopfamily   amoplefttype amoprighttype amopstrategy amoppurpose amopopr amopmethod amopsortfamily
integer_ops  int2         int2          1            search      "<"     btree      0
integer_ops  int2         int2          2            search      "<="    btree      0
integer_ops  int2         int2          3            search      "="     btree      0
integer_ops  int2         int2          4            search      ">="    btree      0
integer_ops  int2         int2          5            search      ">"     btree      0

Note that I prefer to use tabs and a header line, as the tabstop can be
set to line them up nicely, and the header line lets you see which column
is which and what it is for. CSV is always harder for me to use that way,
though maybe that is just a matter of which editor I use (vim).

And yes, I'd need to allow the HEADER option for copying tab-delimited
files, since it is currently only allowed for CSV, I believe.

mark






-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Should TIDs be typbyval = FLOAT8PASSBYVAL to speed up CREATE INDEX CONCURRENTLY?

2015-12-11 Thread Peter Geoghegan
On Fri, Dec 11, 2015 at 2:26 PM, Corey Huinker  wrote:
> Sure, the machine we called "ninefivealpha", which incidentally, failed to
> find a single bug in alpha2 thru beta2, is currently idle, and concurrent
> index creation times are a bugbear around these parts. Can somebody, either
> in this thread or privately, outline what sort of a test they'd like to see?

Any kind of CREATE INDEX CONCURRENTLY test, before and after.

I looked at a simple, random int4 column. That seems like a good case
to focus on, since there isn't too much other overhead.  I think I
performed my test on an unlogged table, to make sure other overhead
was even further minimized.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bootstrap DATA is a pita

2015-12-11 Thread Tom Lane
Mark Dilger  writes:
>> On Dec 11, 2015, at 1:46 PM, Tom Lane  wrote:
>> That's an interesting proposal.  It would mean that the catalog files
>> stay at more or less their current semantic level (direct representations
>> of bootstrap catalog contents), but it does sound like a more attractive
>> way to perform complex edits than writing Emacs macros ;-).

> I would be happy to work on this, if there is much chance of the community
> accepting a patch.  Do you think replacing the numeric Oids for functions,
> operators, opclasses and such in the source files with their names would
> be ok, with the SQL converting those to Oids in the output?

Huh?  Those files are the definition of that mapping, no?  Isn't what
you're proposing circular?

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bootstrap DATA is a pita

2015-12-11 Thread Caleb Welton
I took a look at a few of the most recent bulk edit cases for pg_proc.h:

There were two this year:
* The addition of proparallel  [1]
* The addition of protransform [2]

And prior to that the most recent seems to be from 2012:
* The addition of proleakproof [3]

Quick TL;DR: these changes are super simple to reflect when generating
SQL for CREATE FUNCTION statements.

Attached is the SQL that would generate function definitions prior to
proleakproof and the diffs that would be required after adding support for
proleakproof, protransform and proparallel.

Each of the diffs indicates the changes that would be needed after the
new column is added. The question of how to populate default values for
the new columns is beyond what can easily be expressed in general terms,
and depends entirely on the nature of the new column.

Note: currently I have focused on the 'pure' functions, i.e. not the
drivers of type serialization, language validation, operators, or other
object types.  I would want to deal with each of those while handling the
conversion for each object type in turn.  Additional modifications would
likely be needed for other types of functions.


[1]
https://github.com/postgres/postgres/commit/7aea8e4f2daa4b39ca9d1309a0c4aadb0f7ed81b
[2]
https://github.com/postgres/postgres/commit/8f9fe6edce358f7904e0db119416b4d1080a83aa
[3]
https://github.com/postgres/postgres/commit/cd30728fb2ed7c367d545fc14ab850b5fa2a4850


On Fri, Dec 11, 2015 at 12:55 PM, Caleb Welton  wrote:

> Makes sense.
>
> During my own prototyping what I did was generate the SQL statements by
> querying the existing catalog with SQL.  Way easier than hand-writing
> 1000+ function definitions and not difficult to modify for future
> changes.  This affirmed that it was very easy to adapt my existing SQL
> to account for some of the newer features in master.
>
> The biggest challenge was establishing a sort order that ensures both a
> unique ordering and that the dependencies needed for SQL functions have
> been processed before trying to define them.  That affects about 4 of
> 1000 functions under a natural OID ordering.
>
> > On Dec 11, 2015, at 11:43 AM, Alvaro Herrera 
> wrote:
> >
> > Caleb Welton wrote:
> >> I'm happy working these ideas forward if there is interest.
> >>
> >> Basic design proposal is:
> >>  - keep a minimal amount of bootstrap to avoid intrusive changes to core
> >> components
> >>  - Add capabilities of creating objects with specific OIDs via DDL
> during
> >> initdb
> >>  - Update the caching/resolution mechanism for builtin functions to be
> >> more dynamic.
> >>  - Move as much of bootstrap as possible into SQL files and create
> catalog
> >> via DDL
> >
> > I think the point we got stuck last time at was deciding on a good
> > format for the data coming from the DATA lines.  One of the objections
> > raised for formats such as JSON is that it's trivial for "git merge" (or
> > similar tools) to make a mistake because object-end/object-start lines
> > are all identical.  And as for the SQL-format version, the objection was
> > that it's hard to modify the lines en-masse when modifying the catalog
> > definition (new column, etc).  Ideally we would like a format that can
> > be bulk-edited without too much trouble.
> >
> > A SQL file would presumably not have the merge issue, but mass-editing
> > would be a pain.
> >
> > Crazy idea: we could just have a CSV file which can be loaded into a
> > table for mass changes using regular DDL commands, then dumped back from
> > there into the file.  We already know how to do these things, using
> > \copy etc.  Since CSV uses one line per entry, there would be no merge
> > problems either (or rather: all merge problems would become conflicts,
> > which is what we want.)
> >
> > --
> > Álvaro Herrerahttp://www.2ndQuadrant.com/
> > PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
*** gen_protransform.sql2015-12-11 14:36:25.0 -0800
--- gen_proparallel.sql 2015-12-11 14:35:41.0 -0800
***
*** 14,19 
--- 14,23 
ELSE '' END 
|| CASE WHEN proisstrict THEN ' STRICT' ELSE '' END 
|| CASE WHEN proleakproof THEN ' LEAKPROOF' ELSE '' END
+   || CASE proparallel WHEN 's' THEN ' PARALLEL SAFE'
+   WHEN 'r' THEN ' PARALLEL RESTRICTED'
+   WHEN 'u' THEN '' -- PARALLEL UNSAFE is DEFAULT
+   ELSE '' END
|| CASE WHEN (procost != 1   and lanname = 'internal') OR
 (procost != 100 and lanname = 'sql')
THEN ' COST ' 


gen_start.sql
Description: Binary data
*** gen_leakproof.sql   2015-12-11 14:36:09.0 -0800
--- gen_protransform.sql2015-12-11 14:36:25.0 -0800
***
*** 72,77 
  AND prorettype != 'anyenum'::regtype   /* Enum is special */
  AND 'anyenum'::regtype 

Re: [HACKERS] Bootstrap DATA is a pita

2015-12-11 Thread Caleb Welton
The current semantic level is pretty low, somewhat cumbersome, and
requires filling in values for which, most of the time, the system
already has a pretty good idea of the defaults.
Compare:

CREATE FUNCTION lo_export(oid, text) RETURNS integer LANGUAGE internal
STRICT AS 'lo_export' WITH (OID=765);

DATA(insert OID = 765 (  lo_export   PGNSP PGUID 12 1 0 0 0 f f f f t f v u
2 0 23 "26 25" _null_ _null_ _null_ _null_ _null_ lo_export _null_ _null_
_null_ ));


In the first one someone has indicated:
   1. a function name,
   2. two parameter type names
   3. a return type
   4. a language
   5. null handling
   6. a symbol
   7. an oid

In the second case 30 separate items have been indicated, and yet both of
them will generate identical end results within the catalog.

The former is more immune to even needing modification in the event that
the catalog structure changes.
  - adding proleakproof?  No change needed, default value is correct
  - adding protransform?  No change needed, not relevant
  - adding proparallel?  No change needed, default value is correct
  - adding procost? No change needed, default value is correct




On Fri, Dec 11, 2015 at 1:46 PM, Tom Lane  wrote:

> Alvaro Herrera  writes:
> > Crazy idea: we could just have a CSV file which can be loaded into a
> > table for mass changes using regular DDL commands, then dumped back from
> > there into the file.  We already know how to do these things, using
> > \copy etc.  Since CSV uses one line per entry, there would be no merge
> > problems either (or rather: all merge problems would become conflicts,
> > which is what we want.)
>
> That's an interesting proposal.  It would mean that the catalog files
> stay at more or less their current semantic level (direct representations
> of bootstrap catalog contents), but it does sound like a more attractive
> way to perform complex edits than writing Emacs macros ;-).
>
> You could actually do that the hard way right now, with a bit of script
> to convert between DATA lines and CSV format.  But if we anticipate that
> becoming the standard approach, it would definitely make sense to migrate
> the master copies into CSV or traditional COPY format, and teach BKI mode
> to read that (or, perhaps, leave bootstrap.c alone and modify the code
> that produces the .bki file).
>
> This is somewhat orthogonal to the question of whether we want to do
> things like converting noncritical operator-class definitions into
> regular CREATE OPERATOR CLASS syntax.  There's almost certainly going
> to be some hard core of catalog entries that aren't amenable to that,
> and will still need to be loaded from data files of some sort.
>
> regards, tom lane
>


Re: [HACKERS] Using quicksort for every external sort run

2015-12-11 Thread Greg Stark
On Fri, Dec 11, 2015 at 10:41 PM, Greg Stark  wrote:
>
> Interestingly it looks like we could raise the threshold for switching
> to insertion sort. At least on my machine the insertion sort is faster
> in real time as well as fewer comparisons up to 9 elements. It's
> actually faster up to 16 elements despite doing more comparisons than
> quicksort.
>
> Note also how our quicksort does more comparisons than the libc
> quicksort (which is actually a merge sort in glibc, I hear), which is
> probably due to the "presorted" check.


Heh. And if I comment out the presorted check, the breakeven point is
*exactly* where the threshold is today at 7 elements -- presumably
because Hoare chose it on purpose.

7
using insertion sort: 145.517ns per sort of 7 24-byte items, 14.9 compares/sort, 10.5 swaps/sort
using sort networks:  146.764ns per sort of 7 24-byte items, 16.0 compares/sort, 7.3 swaps/sort
using libc quicksort: 282.659ns per sort of 7 24-byte items, 12.7 compares/sort
using qsort_ssup:     141.817ns per sort of 7 24-byte items, 14.3 compares/sort
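
For reference, the insertion-sort fallback being measured is just the
textbook algorithm - a generic sketch over ints rather than our
SortTuples, not the actual qsort-template code:

static void
insertion_sort(int *a, int n)
{
    int         i;

    for (i = 1; i < n; i++)
    {
        int         key = a[i];
        int         j = i - 1;

        /* shift larger elements right, then drop key into place */
        while (j >= 0 && a[j] > key)
        {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}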


-- 
greg


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bootstrap DATA is a pita

2015-12-11 Thread Mark Dilger

> On Dec 11, 2015, at 2:40 PM, Tom Lane  wrote:
> 
> Mark Dilger  writes:
>>> On Dec 11, 2015, at 1:46 PM, Tom Lane  wrote:
>>> That's an interesting proposal.  It would mean that the catalog files
>>> stay at more or less their current semantic level (direct representations
>>> of bootstrap catalog contents), but it does sound like a more attractive
>>> way to perform complex edits than writing Emacs macros ;-).
> 
>> I would be happy to work on this, if there is much chance of the community
>> accepting a patch.  Do you think replacing the numeric Oids for functions,
>> operators, opclasses and such in the source files with their names would
>> be ok, with the SQL converting those to Oids in the output?
> 
> Huh?  Those files are the definition of that mapping, no?  Isn't what
> you're proposing circular?

No, there are far more references to Oids than there are definitions of them.

For example, the line in pg_operator.h:

DATA(insert OID = 15 ( "=" PGNSP PGUID b t t 23 20 16 416 36 int48eq eqsel eqjoinsel ));

defines 15 as the Oid for the equals operator taking (int4, int8) and
returning bool, but the fact that 23 is the Oid for int4, 20 is the Oid
for int8, and 16 is the Oid for bool is already defined elsewhere (in
pg_type.h) and need not be duplicated here.

I'm just proposing that we don't keep specifying things by number everywhere.
Once you've established the Oid for something (operator, type, function) you
should use the name everywhere else.

mark

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bootstrap DATA is a pita

2015-12-11 Thread Caleb Welton
Yes, that alone without any other changes would be a marked improvement and
could be implemented in many places; pg_operator is a good example.

... but there is some circularity especially with respect to type
definitions and the functions that define those types.  If you changed the
definition of prorettype into a regtype then bootstrap would try to look up
the type before the pg_type entry exists and throw a fit.  That's handled
in SQL via shell types.  If we wanted bootstrap to be able to handle this
then we'd have to make two passes of pg_type, the first to create the
shells and the second to handle populating the serialization functions.

Unfortunately types and functions tend to be the more volatile areas of the
catalog so this particular circularity is particularly vexing.

On Fri, Dec 11, 2015 at 2:53 PM, Mark Dilger 
wrote:

>
> > On Dec 11, 2015, at 2:40 PM, Tom Lane  wrote:
> >
> > Mark Dilger  writes:
> >>> On Dec 11, 2015, at 1:46 PM, Tom Lane  wrote:
> >>> That's an interesting proposal.  It would mean that the catalog files
> >>> stay at more or less their current semantic level (direct
> representations
> >>> of bootstrap catalog contents), but it does sound like a more
> attractive
> >>> way to perform complex edits than writing Emacs macros ;-).
> >
> >> I would be happy to work on this, if there is much chance of the
> community
> >> accepting a patch.  Do you think replacing the numeric Oids for
> functions,
> >> operators, opclasses and such in the source files with their names would
> >> be ok, with the SQL converting those to Oids in the output?
> >
> > Huh?  Those files are the definition of that mapping, no?  Isn't what
> > you're proposing circular?
>
> No, there are far more references to Oids than there are definitions of
> them.
>
> For example, the line in pg_operator.h:
>
> DATA(insert OID = 15 ( "=" PGNSP PGUID b t t 23 20 16 416 36
> int48eq eqsel eqjoinsel ));
>
> defines 15 as the oid for the equals operator for (int4,int8) returning
> bool, but the fact that 23 is the Oid for int4, 20 is the Oid for int8,
> and 16 is the Oid for bool is already defined elsewhere (in pg_type.h)
> and need not be duplicated here.
>
> I'm just proposing that we don't keep specifying things by number
> everywhere.
> Once you've established the Oid for something (operator, type, function)
> you
> should use the name everywhere else.
>
> mark


Re: [HACKERS] Bootstrap DATA is a pita

2015-12-11 Thread Tom Lane
Mark Dilger  writes:
>> On Dec 11, 2015, at 2:40 PM, Tom Lane  wrote:
>> Huh?  Those files are the definition of that mapping, no?  Isn't what
>> you're proposing circular?

> No, there are far more references to Oids than there are definitions of them.

Well, you're still not being very clear, but I *think* what you're
proposing is to put a lot more smarts into the script that converts
the master source files into .bki format.  That is, we might have
"=(int8,int4)" in an entry in the master source file for pg_amop, but
the script would look up that entry using the source data for pg_type
and pg_operator, and then emit a simple numeric OID into the .bki file.
(Presumably, it would know to do this because we'd redefine the
pg_amop.amopopr column as of regoperator type not plain OID.)

Yeah, that could work, though I'd be a bit concerned about the complexity
and speed of the script.  Still, one doesn't usually rebuild postgres.bki
many times a day, so speed might not be a big problem.

This seems more or less orthogonal to the question of whether to get rid
of the DATA() lines in favor of a COPY-friendly data format.  I'd suggest
treating those as separate patches.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Using quicksort for every external sort run

2015-12-11 Thread Peter Geoghegan
On Fri, Dec 11, 2015 at 2:41 PM, Greg Stark  wrote:
> However the number of comparisons is significantly higher. And in the
> non-"abbreviated keys" case where the compare is going to be a
> function pointer call the number of comparisons is probably more
> important than the actual time spent when benchmarking comparing
> int64s. In that case insertion sort does seem to be better than using
> the sort networks.

Back when I wrote a prototype of Timsort, pre-abbreviated keys, it
required significantly fewer text comparisons [1] in fair and
representative cases (i.e. not particularly tickling our quicksort's
precheck thing), and yet was significantly slower.

[1] 
http://www.postgresql.org/message-id/caeylb_w++uhrcwprzg9tybvf7sn-c1s9olbabvavpgdep2d...@mail.gmail.com
-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Parallel Aggregate

2015-12-11 Thread David Rowley
On 12 December 2015 at 04:00, Robert Haas  wrote:
>
> I'd like to commit David Rowley's patch from the other thread first,
> and then deal with this one afterwards.  The only thing I feel
> strongly needs to be changed in that patch is CFUNC -> COMBINEFUNC,
> for clarity.


I have addressed that in my local copy. I'm now just working on adding some
test code which uses the new infrastructure. Perhaps I'll just experiment
with the parallel aggregate stuff instead now.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] Bootstrap DATA is a pita

2015-12-11 Thread Tom Lane
Alvaro Herrera  writes:
> Crazy idea: we could just have a CSV file which can be loaded into a
> table for mass changes using regular DDL commands, then dumped back from
> there into the file.  We already know how to do these things, using
> \copy etc.  Since CSV uses one line per entry, there would be no merge
> problems either (or rather: all merge problems would become conflicts,
> which is what we want.)

That's an interesting proposal.  It would mean that the catalog files
stay at more or less their current semantic level (direct representations
of bootstrap catalog contents), but it does sound like a more attractive
way to perform complex edits than writing Emacs macros ;-).

You could actually do that the hard way right now, with a bit of script
to convert between DATA lines and CSV format.  But if we anticipate that
becoming the standard approach, it would definitely make sense to migrate
the master copies into CSV or traditional COPY format, and teach BKI mode
to read that (or, perhaps, leave bootstrap.c alone and modify the code
that produces the .bki file).

This is somewhat orthogonal to the question of whether we want to do
things like converting noncritical operator-class definitions into
regular CREATE OPERATOR CLASS syntax.  There's almost certainly going
to be some hard core of catalog entries that aren't amenable to that,
and will still need to be loaded from data files of some sort.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Add IS (NOT) DISTINCT to subquery_Op

2015-12-11 Thread Jim Nasby

On 12/10/15 7:03 PM, Tom Lane wrote:

Jim Nasby  writes:

Is there any reason we couldn't/shouldn't support IS DISTINCT in
subquery_Op? (Or really, just add support to ANY()/ALL()/(SELECT ...)?)


It's not an operator (in the sense of something with a pg_operator OID),
which means this would be quite a bit less than trivial as far as internal
representation/implementation goes.  I'm not sure if there would be
grammar issues, either.


make_distinct_op() simply calls make_op() and then changes the tag of 
the result node to T_DistinctExpr. So I was hoping something similar 
could be done for ANY/ALL?
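
For what it's worth, here's a runnable approximation of what I'm after;
the quantified spelling in the comment is hypothetical (a syntax error
today), and only the EXISTS rewrite below actually runs:

-- Hypothetical: SELECT x FROM t WHERE x IS DISTINCT FROM ANY (SELECT y FROM s);
-- Equivalent rewrite that works today, self-contained via VALUES:
SELECT x
FROM (VALUES (1), (NULL::int)) AS t(x)
WHERE EXISTS (SELECT 1
              FROM (VALUES (1), (2)) AS s(y)
              WHERE x IS DISTINCT FROM y);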

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Using quicksort for every external sort run

2015-12-11 Thread Greg Stark
On Wed, Dec 9, 2015 at 2:44 AM, Peter Geoghegan  wrote:
> On Tue, Dec 8, 2015 at 6:39 PM, Greg Stark  wrote:
>
> I guess you mean insertion sort. What's the theoretical justification
> for the change?

Well my thinking was that hard-coding a series of comparisons would be
faster than a loop doing an O(n^2) algorithm even for small constants.
And sort networks are perfect for hard coded sorts because they do the
same comparisons regardless of the results of previous comparisons so
there are no branches. And even better the comparisons are as much as
possible independent of each other -- sort networks are typically
measured by the depth which assumes any comparisons between disjoint
pairs can be done in parallel. Even if it's implemented in serial the
processor is probably parallelizing some of the work.

So I implemented a quick benchmark outside Postgres based on sorting
actual SortTuples with datum1 defined to be random 64-bit integers (no
nulls). Indeed the sort networks perform faster on average despite
doing more comparisons. That makes me think the cpu is indeed doing
some of the work in parallel.

However the number of comparisons is significantly higher. And in the
non-"abbreviated keys" case where the compare is going to be a
function pointer call the number of comparisons is probably more
important than the actual time spent when benchmarking comparing
int64s. In that case insertion sort does seem to be better than using
the sort networks.

Interestingly it looks like we could raise the threshold for switching
to insertion sort. At least on my machine the insertion sort is faster
in real time, and does fewer comparisons, up to 9 elements. It's
actually faster up to 16 elements despite doing more comparisons than
quicksort.

Note also how our quicksort does more comparisons than the libc
quicksort (which is actually merge sort in glibc I hear) which is
probably due to the "presorted" check.


$ for i in `seq 2 32` ; do echo ; echo $i ; ./a.out $i ; done

2
using bitonic sort 32.781ns per sort of 2 24-byte items 1.0
compares/sort 0.5 swaps/sort
using insertion sort 29.805ns per sort of 2 24-byte items 1.0
compares/sort 0.5 swaps/sort
using sort networks sort 26.392ns per sort of 2 24-byte items 1.0
compares/sort 0.5 swaps/sort
using libc quicksort sort 54.250ns per sort of 2 24-byte items 1.0 compares/sort
using qsort_ssup sort 46.666ns per sort of 2 24-byte items 1.0 compares/sort

3
using insertion sort 42.090ns per sort of 3 24-byte items 2.7
compares/sort 1.5 swaps/sort
using sort networks sort 38.442ns per sort of 3 24-byte items 3.0
compares/sort 1.5 swaps/sort
using libc quicksort sort 86.759ns per sort of 3 24-byte items 2.7 compares/sort
using qsort_ssup sort 41.238ns per sort of 3 24-byte items 2.7 compares/sort

4
using bitonic sort 73.420ns per sort of 4 24-byte items 6.0
compares/sort 3.0 swaps/sort
using insertion sort 61.087ns per sort of 4 24-byte items 4.9
compares/sort 3.0 swaps/sort
using sort networks sort 58.930ns per sort of 4 24-byte items 5.0
compares/sort 2.7 swaps/sort
using libc quicksort sort 135.930ns per sort of 4 24-byte items 4.7
compares/sort
using qsort_ssup sort 59.669ns per sort of 4 24-byte items 4.9 compares/sort

5
using insertion sort 88.345ns per sort of 5 24-byte items 7.7
compares/sort 5.0 swaps/sort
using sort networks sort 90.034ns per sort of 5 24-byte items 9.0
compares/sort 4.4 swaps/sort
using libc quicksort sort 180.367ns per sort of 5 24-byte items 7.2
compares/sort
using qsort_ssup sort 85.603ns per sort of 5 24-byte items 7.7 compares/sort

6
using insertion sort 119.697ns per sort of 6 24-byte items 11.0
compares/sort 7.5 swaps/sort
using sort networks sort 122.071ns per sort of 6 24-byte items 12.0
compares/sort 5.4 swaps/sort
using libc quicksort sort 234.436ns per sort of 6 24-byte items 9.8
compares/sort
using qsort_ssup sort 115.407ns per sort of 6 24-byte items 11.0 compares/sort

7
using insertion sort 152.639ns per sort of 7 24-byte items 14.9
compares/sort 10.5 swaps/sort
using sort networks sort 155.357ns per sort of 7 24-byte items 16.0
compares/sort 7.3 swaps/sort
using libc quicksort sort 303.738ns per sort of 7 24-byte items 12.7
compares/sort
using qsort_ssup sort 166.174ns per sort of 7 24-byte items 16.0 compares/sort

8
using bitonic sort 248.527ns per sort of 8 24-byte items 24.0
compares/sort 12.0 swaps/sort
using insertion sort 193.057ns per sort of 8 24-byte items 19.3
compares/sort 14.0 swaps/sort
using sort networks sort 230.738ns per sort of 8 24-byte items 24.0
compares/sort 12.0 swaps/sort
using libc quicksort sort 360.852ns per sort of 8 24-byte items 15.7
compares/sort
using qsort_ssup sort 211.729ns per sort of 8 24-byte items 20.6 compares/sort

9
using insertion sort 222.475ns per sort of 9 24-byte items 24.2
compares/sort 18.0 swaps/sort
using libc quicksort sort 427.760ns per sort of 9 24-byte items 19.2
compares/sort
using qsort_ssup sort 249.668ns per sort of 9 24-byte items 24.6 compares/sort

Re: [HACKERS] Bootstrap DATA is a pita

2015-12-11 Thread Andres Freund
On 2015-12-11 18:12:16 -0500, Tom Lane wrote:
> I think what Mark is proposing is to do the lookups while preparing the
> .bki file, which would eliminate the circularity ... at the cost of having
> to, essentially, reimplement regprocedure_in and friends in Perl.

FWIW, I did that when this came up last. Rather interesting, because it
leads to rather noticeable speedups - currently initdb spends a
significant amount of its time doing regproc lookups. Especially
interesting because at that stage we're largely not using indexes yet, IIRC.


Andres


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Remaining 9.5 open items

2015-12-11 Thread Stephen Frost
* Robert Haas (robertmh...@gmail.com) wrote:
> On Fri, Dec 4, 2015 at 3:22 PM, Stephen Frost  wrote:
> > * Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:
> >> Stephen Frost wrote:
> >> > Still, I'll get a patch worked up for it and then we can discuss the
> >> > merits of that patch going in to 9.5 now versus just into HEAD.
> >>
> >> Cool.
> >
> > While working on the DROP OWNED BY patch, and part of what took me a bit
> > longer with it, I came to the realization that ALTER POLICY wasn't
> > handling dependencies quite right.  All of the policy's dependencies
> > would be dropped, but then only those objects referred to in the ALTER
> > POLICY command would have dependencies recreated for them.
> >
> > The attached patch fixes that (using the same approach that I used in
> > the DROP OWNED BY patch).
> >
> > Comments welcome, as always.
> >
> > I'll plan to apply these two patches in a couple of days.
> 
> It's been a week?

I've pushed these now.

Thanks!

Stephen




Re: [HACKERS] REASSIGN OWNED doesn't know how to deal with USER MAPPINGs

2015-12-11 Thread Alvaro Herrera
Jaime Casanova wrote:
> On 10 December 2015 at 13:04, Jaime Casanova
>  wrote:
> > Hi,
> >
> > We just notice $SUBJECT. Attached patch fixes it by ignoring USER
> > MAPPINGs in shdepReassignOwned() just like it happens with default
> > ACLs.

Yep, I had already posted this patch elsewhere, and I just pushed it.

> BTW, shouldn't we at least give a warning on those cases instead of
> asuming that the user will know that some objects were ignored?

I don't think so.  This kind of thing is supposed to be silent when nothing
is to be done.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Using quicksort for every external sort run

2015-12-11 Thread Peter Geoghegan
On Fri, Dec 11, 2015 at 2:52 PM, Greg Stark  wrote:
> Heh. And if I comment out the presorted check the breakeven point is
> *exactly* where the threshold is today at 7 elements -- presumably
> because Hoare chose it on purpose.

I think it was Sedgewick, but yes. I'd be very hesitant to mess with
the number of elements that we fallback to insertion sort on. I've
heard of people removing that optimization on the theory that it no
longer applies, but I think they were wrong to.

-- 
Peter Geoghegan


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bootstrap DATA is a pita

2015-12-11 Thread Tom Lane
Caleb Welton  writes:
> ... but there is some circularity especially with respect to type
> definitions and the functions that define those types.  If you changed the
> definition of prorettype into a regtype then bootstrap would try to look up
> the type before the pg_type entry exists and throw a fit.  That's handled
> in SQL via shell types.  If we wanted bootstrap to be able to handle this
> then we'd have to make two passes of pg_type, the first to create the
> shells and the second to handle populating the serialization functions.

I think what Mark is proposing is to do the lookups while preparing the
.bki file, which would eliminate the circularity ... at the cost of having
to, essentially, reimplement regprocedure_in and friends in Perl.

If we push hard on doing the other thing that you're proposing, which is
to take as much as possible out of the pure bootstrap-data phase, then
maybe it wouldn't be worth the work to do that.  Not sure.

On the other hand, I'm not very much in love with the thought of having
two different notations for "core" and "not so core" built-in function
creation.  There's something to be said for keeping all the data in one
format.  If we push on making the .bki creation script smarter, then in
addition to the name lookup facilities Mark envisions, we could have
things like default column values.  That would take us a long way toward
the same ease-of-use as full SQL definitions.  We'd still be lacking
some error checks that the SQL commands could perform; but we've
traditionally used sanity checks in the regression tests to do
cross-checking that covers more or less those same bases.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bootstrap DATA is a pita

2015-12-11 Thread Mark Dilger

> On Dec 11, 2015, at 3:02 PM, Tom Lane  wrote:
> 
> Mark Dilger  writes:
>>> On Dec 11, 2015, at 2:40 PM, Tom Lane  wrote:
>>> Huh?  Those files are the definition of that mapping, no?  Isn't what
>>> you're proposing circular?
> 
>> No, there are far more references to Oids than there are definitions of them.
> 
> Well, you're still not being very clear, but I *think* what you're
> proposing is to put a lot more smarts into the script that converts
> the master source files into .bki format.  That is, we might have
> "=(int8,int4)" in an entry in the master source file for pg_amop, but
> the script would look up that entry using the source data for pg_type
> and pg_operator, and then emit a simple numeric OID into the .bki file.
> (Presumably, it would know to do this because we'd redefine the
> pg_amop.amopopr column as of regoperator type not plain OID.)
> 
> Yeah, that could work, though I'd be a bit concerned about the complexity
> and speed of the script.  Still, one doesn't usually rebuild postgres.bki
> many times a day, so speed might not be a big problem.

I am proposing that each of the catalog headers that currently has DATA
lines instead have a COPY loadable file that contains the same information.
So, for pg_type.h, there would be a pg_type.dat file.  All the DATA lines
would be pulled out of pg_type.h and a corresponding tab delimited row
would be written to pg_type.dat.  Henceforth, if you cloned the git repository,
you'd find no DATA lines in pg_type.h, but would find a pg_type.dat file
in the src/include/catalog directory.  Likewise for the other header files.

There would be some script, SQL or perl or whatever, that would convert
these .dat files into the .bki file.

Now, if we know that pg_type.dat will be processed before pg_proc.dat,
we can replace all the Oids representing datatypes in pg_proc.dat with the
names for those types, given that we already have a name <=> oid
mapping for types.

Likewise, if we know that pg_proc.dat will be processed before pg_operator.dat,
we can specify both functions and datatypes by name rather than by Oid
in that file, making it much easier to read.  By the time pg_operator.dat is
read, pg_type.dat and pg_proc.dat will already have been read and processed,
so there shouldn't be ambiguity. 

By the time pg_amop.dat is processed, the operators, procs, datatypes,
opfamilies and so forth would already be know.  The example I gave up
thread would be easy to parse:

amopfamily   amoplefttype  amoprighttype  amopstrategy  amoppurpose  amopopr  amopmethod  amopsortfamily
integer_ops  int2          int2           1             search       "<"      btree       0
integer_ops  int2          int2           2             search       "<="     btree       0
integer_ops  int2          int2           3             search       "="      btree       0
integer_ops  int2          int2           4             search       ">="     btree       0
integer_ops  int2          int2           5             search       ">"      btree       0

And if I came along and defined a new datatype, int384, I could add rows to
this file much more easily, as:

amopfamily   amoplefttype  amoprighttype  amopstrategy  amoppurpose  amopopr  amopmethod  amopsortfamily
integer_ops  int384        int384         1             search       "<"      btree       0
integer_ops  int384        int384         2             search       "<="     btree       0
integer_ops  int384        int384         3             search       "="      btree       0
integer_ops  int384        int384         4             search       ">="     btree       0
integer_ops  int384        int384         5             search       ">"      btree       0

I don't see how this creates all that much complication, and I clearly see
how it makes files like pg_operator.{h,dat} and pg_amop.{h,dat} easier to read.


mark



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Should TIDs be typbyval = FLOAT8PASSBYVAL to speed up CREATE INDEX CONCURRENTLY?

2015-12-11 Thread Corey Huinker
On Fri, Dec 11, 2015 at 12:13 PM, Robert Haas  wrote:

> On Wed, Dec 9, 2015 at 8:16 PM, Peter Geoghegan  wrote:
> > On Tue, Nov 17, 2015 at 7:33 PM, Corey Huinker 
> wrote:
> >> I'm willing, but I'm too new to the codebase to be an effective reviewer
> >> (without guidance). The one thing I can offer in the mean time is this:
> my
> >> company/client nearly always has a few spare AWS machines on the largish
> >> side where I can compile uncommitted patches and benchmark stuff for
> y'all.
> >
> > I think that this particular patch is close to being a slam-dunk, so I
> > don't think it's particularly needed here. But thanks.
>
> It never hurts to have a few extra performance test results - I'm all
> in favor of Corey doing some testing.
>
> Also, I'd be in favor of you updating the patch to reflect the
> comments from Tom and Simon on November 17th.
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>

Sure, the machine we called "ninefivealpha", which, incidentally, failed to
find a single bug in alpha2 thru beta2, is currently idle, and concurrent
index creation times are a bugbear around these parts. Can somebody, either
in this thread or privately, outline what sort of a test they'd like to see?


Re: [HACKERS] More on the libxml2 update situation

2015-12-11 Thread Noah Misch
On Fri, Dec 11, 2015 at 10:55:40AM -0500, Tom Lane wrote:
> So I did a routine software update on my RHEL6 workstation, and noticed
> a security update for libxml2 go by.  And guess what: now an XML-enabled
> build of Postgres fails regression tests for me, just as previously
> discussed in
> http://www.postgresql.org/message-id/flat/cafj8pra4xjqfgnqcqmcygx-umgmr3stt3xfeuw7kbsoiovg...@mail.gmail.com
> 
> A little bit of digging shows that the behavior we're unhappy about was
> introduced as part of the official patch for CVE-2015-7499.  This means
> that, whether or not we can persuade Veillard that it was a bad idea and
> he should undo it, the bogus behavior is likely to spread into mainstream
> distributions a lot faster than any followup fix will :-(.  Bugfix updates
> just don't get accepted as quickly as security updates.

That settles PostgreSQL's need to accept this variation.

> I'm starting to think that maybe we'd better knuckle under and provide
> a variant expected file that matches this behavior.  We're likely to be
> seeing it in the wild for some time to come.

I would look at handling this by suppressing the exact error message from the
output.  Route affected tests through a wrapper function:

SELECT expect_errdetail($$INSERT INTO xmltest VALUES (3, '

[HACKERS] strange CREATE INDEX tab completion cases

2015-12-11 Thread Peter Eisentraut
These two tab completion pieces look strange to me:

/* If we have CREATE|UNIQUE INDEX <name> CONCURRENTLY, then add "ON" */
else if ((pg_strcasecmp(prev3_wd, "INDEX") == 0 ||
          pg_strcasecmp(prev2_wd, "INDEX") == 0) &&
         pg_strcasecmp(prev_wd, "CONCURRENTLY") == 0)
    COMPLETE_WITH_CONST("ON");
/* If we have CREATE|UNIQUE INDEX <sth>, then add "ON" or "CONCURRENTLY" */
else if ((pg_strcasecmp(prev3_wd, "CREATE") == 0 ||
          pg_strcasecmp(prev3_wd, "UNIQUE") == 0) &&
         pg_strcasecmp(prev2_wd, "INDEX") == 0)
{
    static const char *const list_CREATE_INDEX[] =
    {"CONCURRENTLY", "ON", NULL};

    COMPLETE_WITH_LIST(list_CREATE_INDEX);
}

They appear to support a syntax along the lines of

CREATE INDEX name CONCURRENTLY

which is not the actual syntax.
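
For reference, the real word order puts CONCURRENTLY before the index
name; a valid statement (object names here are just illustrative) is:

CREATE UNIQUE INDEX CONCURRENTLY some_idx ON some_table (some_col);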


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Fwd: [GENERAL] pgxs/config/missing is... missing

2015-12-11 Thread Jim Nasby

On 12/11/15 6:25 PM, Jim Nasby wrote:

On 12/10/15 7:09 PM, Tom Lane wrote:

Jim Nasby  writes:

AFAICT the problem is that missing wasn't included in install or
uninstall in config/Makefile. Attached patch fixes that, and results in
missing being properly installed in lib/pgxs/config.


I thought we'd more or less rejected that approach in the previous
thread.


David Wheeler and I worked on a way to work around this in the pgTap
extension, but AFAICT there's a bug here. The FreeBSD package seems to
be built without having PERL on the system, so if you try to use it
with PGXS to set PERL, you end up with

PERL = /bin/sh
/usr/local/lib/postgresql/pgxs/src/makefiles/../../config/missing perl

which is coming out of the PGXS makefile. And that would work fine, if
we were actually installing config/missing.

If instead of installing config/missing we want to just drop that file
completely we can do that, but then we should remove it from source and
from the makefiles.


Grr, right after sending this I found the thread you were talking about.

I'm not really sure our missing is better than just letting the error 
bubble up. If folks think that's better then let's just rip missing out 
entirely.


If we do decide to keep missing, we should probably clarify its 
messages to indicate that the relevant file was missing when *configure 
was run*.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bootstrap DATA is a pita

2015-12-11 Thread Andres Freund
On 2015-12-11 19:26:38 -0500, Tom Lane wrote:
> I believe it's soluble, but it's going to take something more like
> loading up all the data at once and then doing lookups as we write
> out the .bki entries for each catalog.  Fortunately, the volume of
> bootstrap data is small enough that that won't be a problem on any
> machine capable of running modern Postgres ...

I think that's exactly the right approach. Just building a few perl
hashes worked well enough, in my prototype of that.

If additionally a few more plain oid fields are converted into reg*
types, the source data fields are easier to understand and the catalogs
get much nicer to query...
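
As a quick illustration of the querying side (these casts already exist;
int48eq is just an arbitrary example function):

-- a plain oid column becomes self-describing through a reg* cast
SELECT amopopr::regoperator, amopstrategy FROM pg_amop LIMIT 5;
SELECT oid::regprocedure FROM pg_proc WHERE proname = 'int48eq';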


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Rework the way multixact truncations work

2015-12-11 Thread Noah Misch
On Thu, Dec 10, 2015 at 08:55:54AM -0500, Robert Haas wrote:
> I don't know have anything to add to what others have said in response
> to this point, except this: the whole point of using a source code
> management system is to tell you what changed and when.  What you are
> proposing to do makes it unusable for that purpose.

Based on your comments, I'm calling the patch series returned with feedback.
I built the series around the goal of making history maximally reviewable for
persons who were not insiders to commit 4f627f8.  Having spent 90% of my 2015
PostgreSQL contribution time finding or fixing committed defects, my judgment
of how best to achieve that is no shout from the peanut gallery.  (Neither is
your judgment.)  In particular, I had in view two works, RLS and pg_audit,
that used the post-commit repair strategy you've advocated.  But you gave me a
fair chance to make the case, and you stayed convinced that my repairs oppose
my goal.  I can now follow your development of that belief, which is enough.


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Disabling an index temporarily

2015-12-11 Thread Tatsuo Ishii
Sometimes I need to repeat creating and dropping indexes while doing
an SQL tuning work. As you might know, creating a large index takes
long time. So dropping the index and re-creating it is pain and
counter productive.

What about inventing a new SET command something like:

SET disabled_index to <index name>

This adds <index name> to the "disabled index list". The disabled index
list lets the planner disregard the indexes in the list.

SET enabled_index to <index name>

This removes <index name> from the disabled index list.

SHOW disabled_index

This shows the content of the disabled index list.
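
To show the whole flow in one place (everything below is proposed syntax,
none of it exists today; the index name is only an example):

SET disabled_index to pgbench_accounts_pkey;  -- planner now ignores it
EXPLAIN SELECT * FROM pgbench_accounts WHERE aid = 1;  -- should avoid the index
SET enabled_index to pgbench_accounts_pkey;   -- back to normal planning
SHOW disabled_index;                          -- inspect the session's list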

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Fwd: [GENERAL] pgxs/config/missing is... missing

2015-12-11 Thread Tom Lane
Jim Nasby  writes:
> Grr, right after sending this I found the thread you were talking about.

> I'm not really sure our missing is better than just letting the error 
> bubble up. If folks think that's better then lets just rip missing out 
> entirely.

Well, that's what I was suggesting in the other thread, but it wasn't
drawing consensus.

Don't know if you noticed, but I committed your earlier patch a few
hours ago.  We can revert it if we somehow get to a consensus that
we don't need "missing".

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Disabling an index temporarily

2015-12-11 Thread Joshua D. Drake

On 12/11/2015 06:25 PM, Tatsuo Ishii wrote:


What about inventing a new SET command something like:

SET disabled_index to <index name>

This adds <index name> to the "disabled index list". The disabled index
list lets the planner disregard the indexes in the list.

SET enabled_index to <index name>

This removes <index name> from the disabled index list.

SHOW disabled_index

This shows the content of the disabled index list.


Wouldn't something like:

ALTER INDEX foo SET DISABLED;

See more in line with our grammar?

I assume the index is only disabled as far as the planner is concerned 
and all updates/inserts/deletes will still actually update the index 
appropriately?


JD




--
Command Prompt, Inc. - http://www.commandprompt.com/  503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Announcing "I'm offended" is basically telling the world you can't
control your own emotions, so everyone else should do it for you.


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PoC] Asynchronous execution again (which is not parallel)

2015-12-11 Thread Amit Kapila
On Tue, Dec 8, 2015 at 9:10 PM, Robert Haas  wrote:
>
> On Mon, Nov 30, 2015 at 7:47 AM, Kyotaro HORIGUCHI
>  wrote:
>
> But is it important enough to be worthwhile?  Maybe, maybe not.  I
> think we should be working toward a world where the Gather is at the
> top of the plan tree as often as possible, in which case
> asynchronously kicking off a Gather node won't be that exciting any
> more - see notes on the "parallelism + sorting" thread where I talk
> about primitives that would allow massively parallel merge joins,
> rather than 2 or 3 way parallel.  From my point of view, the case
> where we really need some kind of asynchronous execution solution is a
> ForeignScan, and in particular a ForeignScan which is the child of an
> Append.  In that case it's obviously really useful to be able to kick
> off all the foreign scans and then return a tuple from whichever one
> coughs it up first.
>

How will this be better than doing the same thing in the way we have done
Parallel Sequential Scan, at ExecutorRun() time?



With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Fwd: [GENERAL] pgxs/config/missing is... missing

2015-12-11 Thread Jim Nasby

On 12/10/15 7:09 PM, Tom Lane wrote:

Jim Nasby  writes:

AFAICT the problem is that missing wasn't included in install or
uninstall in config/Makefile. Attached patch fixes that, and results in
missing being properly installed in lib/pgxs/config.


I thought we'd more or less rejected that approach in the previous thread.


David Wheeler and I worked on a way to work around this in the pgTap 
extension, but AFAICT there's a bug here. The FreeBSD package seems to 
be built without having PERL on the system, so if you try to use it 
with PGXS to set PERL, you end up with


PERL = /bin/sh
/usr/local/lib/postgresql/pgxs/src/makefiles/../../config/missing perl

which is coming out of the PGXS makefile. And that would work fine, if 
we were actually installing config/missing.


If instead of installing config/missing we want to just drop that file 
completely we can do that, but then we should remove it from source and 
from the makefiles.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bootstrap DATA is a pita

2015-12-11 Thread Tom Lane
Mark Dilger  writes:
> Now, if we know that pg_type.dat will be processed before pg_proc.dat,
> we can replace all the Oids representing datatypes in pg_proc.dat with the
> names for those types, given that we already have a name <=> oid
> mapping for types.

I don't think this is quite as simple as you paint it.  How can you
process pg_type.dat first, when it contains pg_proc references?  Doing
pg_proc first is no better, because it contains pg_type references.

I believe it's soluble, but it's going to take something more like
loading up all the data at once and then doing lookups as we write
out the .bki entries for each catalog.  Fortunately, the volume of
bootstrap data is small enough that that won't be a problem on any
machine capable of running modern Postgres ...

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bootstrap DATA is a pita

2015-12-11 Thread Tom Lane
Andres Freund  writes:
> On 2015-12-11 19:26:38 -0500, Tom Lane wrote:
>> I believe it's soluble, but it's going to take something more like
>> loading up all the data at once and then doing lookups as we write
>> out the .bki entries for each catalog.  Fortunately, the volume of
>> bootstrap data is small enough that that won't be a problem on any
>> machine capable of running modern Postgres ...

> I think that's exactly the right approach. Just building a few perl
> hashes worked well enough, in my prototype of that.

Right.  I would draw Mark's attention to src/backend/catalog/Catalog.pm
and the things that use that.  Presumably all that would have to be
rewritten, but the existing code would be a useful starting point
perhaps.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Disabling an index temporarily

2015-12-11 Thread Tatsuo Ishii
> On 12/11/2015 06:25 PM, Tatsuo Ishii wrote:
> 
>> What about inventing a new SET command something like:
>>
>> SET disabled_index to <index name>
>>
>> This adds <index name> to the "disabled index list". The disabled index
>> list lets the planner disregard the indexes in the list.
>>
>> SET enabled_index to <index name>
>>
>> This removes <index name> from the disabled index list.
>>
>> SHOW disabled_index
>>
>> This shows the content of the disabled index list.
> 
> Wouldn't something like:
> 
> ALTER INDEX foo SET DISABLED;
>
> See more in line with our grammar?

But this will affect other sessions, no?

> I assume the index is only disabled as far as the planner is concerned
> and all updates/inserts/deletes will still actually update the index
> appropriately?

Yes.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: In-core regression tests for replication, cascading, archiving, PITR, etc.

2015-12-11 Thread Noah Misch
On Fri, Dec 11, 2015 at 09:34:34PM +0900, Michael Paquier wrote:
> On Fri, Dec 11, 2015 at 8:48 PM, Alvaro Herrera
>  wrote:
> > Michael Paquier wrote:
> >> On Fri, Dec 11, 2015 at 5:35 AM, Alvaro Herrera
> >>  wrote:
> >> I guess that to complete your idea we could allow PostgresNode to get
> >> a custom name for its log file through an optional parameter like
> >> logfile => 'myname' or similar. And if nothing is defined, process
> >> falls back to applname. So this would give the following:
> >> ${testname}_${logfile}.log
> >
> > Sure. I don't think we should use the name only for the log file, though,
> > but also for things like the "## " informative messages we print here
> > and there.  That would make the log file simpler to follow.  Also, I'm
> > not sure about having it be optional.  (TBH I'm not sure about applname
> > either; why do we keep that one?)
> 
> OK, so let's do this: the node name is a mandatory argument of
> get_new_node, which is passed to "new PostgresNode" like the port and
> the host, and it is then used in the log file name as well as in the
> information messages you are mentioning. That's a patch simple enough.
> Are you fine with this approach?

Sounds reasonable so far.

> Regarding the application name, I still think it is useful to have it
> though. pg_rewind should actually use it, and the other patch adding
> the recovery routines will use it.

Using the application_name connection parameter is fine, but I can't think of
a reason to set it to "node_".$node->port instead of $node->name.  And I can't
think of a use for the $node->applname field once you have $node->name.  What
use case would benefit?


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] More on the libxml2 update situation

2015-12-11 Thread Tom Lane
Noah Misch  writes:
> On Fri, Dec 11, 2015 at 10:55:40AM -0500, Tom Lane wrote:
>> I'm starting to think that maybe we'd better knuckle under and provide
>> a variant expected file that matches this behavior.  We're likely to be
>> seeing it in the wild for some time to come.

> I would look at handling this by suppressing the exact error message from the
> output.

I had actually thought a little bit about whether we could put back the
missing output, along the lines of "if it's a syntax-type error and no
error cursor was supplied, assume we should put a cursor at end of input".
It might be worth pursuing if Veillard indicates an unwillingness to
change libxml2 going forward, because a syntax error with no location
can be pretty unfriendly.  But I'll wait to see some response from him
before expending a lot of energy here.

> A variant expected output would be okay, though.

I pushed a set of those an hour ago.  I hope it's just a short-term
hack, but we'll see.  In any case there were now two buildfarm critters
failing, reinforcing the idea that this behavior is spreading.

regards, tom lane


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [DOCS] max_worker_processes on the standby

2015-12-11 Thread Robert Haas
On Wed, Dec 9, 2015 at 4:21 PM, Alvaro Herrera  wrote:
> Robert Haas wrote:
>> On Mon, Dec 7, 2015 at 8:33 AM, Fujii Masao  wrote:
>
>> > So firstly you will push those "latest" changes soon?
>>
>> It seems like these changes haven't been pushed yet, and unfortunately
>> that's probably a beta blocker.
>
> I'm on this.

Uh, when are you going to do this?  At this point we've probably lost
another week getting rc1 out the door.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Remaining 9.5 open items

2015-12-11 Thread Robert Haas
On Fri, Dec 4, 2015 at 3:22 PM, Stephen Frost  wrote:
> * Alvaro Herrera (alvhe...@2ndquadrant.com) wrote:
>> Stephen Frost wrote:
>> > Still, I'll get a patch worked up for it and then we can discuss the
>> > merits of that patch going in to 9.5 now versus just into HEAD.
>>
>> Cool.
>
> While working on the DROP OWNED BY patch, and part of what took me a bit
> longer with it, I came to the realiziation that ALTER POLICY wasn't
> handling dependencies quite right.  All of the policy's dependencies
> would be dropped, but then only those objects referred to in the ALTER
> POLICY command would have dependencies recreated for them.
>
> The attached patch fixes that (using the same approach that I used in
> the DROP OWNED BY patch).
>
> Comments welcome, as always.
>
> I'll plan to apply these two patches in a couple of days.

It's been a week?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Isolation of table creation

2015-12-11 Thread Andres Freund
On 2015-12-11 11:46:11 +0300, Alexander Korotkov wrote:
> I discovered interesting issue with PostgreSQL transaction isolation.
> When transaction is in repeatable read isolation level, I can't see table
> which was created after transaction obtained snapshot. But I can run DML
> statements with this table. See example below.

> Is it a bug?

No, that's pretty much expected. Parse analysis and planning use a fresh
catalog snapshot, whereas psql's schema queries use the transaction
snapshot.

Andres


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Bootstrap DATA is a pita

2015-12-11 Thread Alvaro Herrera
Caleb Welton wrote:
> I'm happy working these ideas forward if there is interest.
> 
> Basic design proposal is:
>   - keep a minimal amount of bootstrap to avoid intrusive changes to core
> components
>   - Add capabilities of creating objects with specific OIDs via DDL during
> initdb
>   - Update the caching/resolution mechanism for builtin functions to be
> more dynamic.
>   - Move as much of bootstrap as possible into SQL files and create catalog
> via DDL

I think the point we got stuck last time at was deciding on a good
format for the data coming from the DATA lines.  One of the objections
raised for formats such as JSON is that it's trivial for "git merge" (or
similar tools) to make a mistake because object-end/object-start lines
are all identical.  And as for the SQL-format version, the objection was
that it's hard to modify the lines en-masse when modifying the catalog
definition (new column, etc).  Ideally we would like a format that can
be bulk-edited without too much trouble.

A SQL file would presumably not have the merge issue, but mass-editing
would be a pain.

Crazy idea: we could just have a CSV file which can be loaded into a
table for mass changes using regular DDL commands, then dumped back from
there into the file.  We already know how to do these things, using
\copy etc.  Since CSV uses one line per entry, there would be no merge
problems either (or rather: all merge problems would become conflicts,
which is what we want.)
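
A minimal sketch of that round trip, assuming a CSV-format pg_proc.dat and
a scratch table (both names invented for the example):

-- load the master data into an editable copy
CREATE TEMP TABLE pg_proc_edit (LIKE pg_proc);
\copy pg_proc_edit FROM 'pg_proc.dat' WITH (FORMAT csv)
-- mass-edit with ordinary SQL
UPDATE pg_proc_edit SET procost = 10 WHERE proname LIKE 'xml%';
-- dump it back out, one line per entry
\copy pg_proc_edit TO 'pg_proc.dat' WITH (FORMAT csv)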

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Logical replication and multimaster

2015-12-11 Thread Petr Jelinek

On 2015-12-11 19:24, Robert Haas wrote:

On Fri, Dec 11, 2015 at 5:16 AM, Andres Freund  wrote:

On 2015-12-11 18:12:55 +0800, Craig Ringer wrote:

On 10 December 2015 at 03:19, Robert Haas  wrote:

On Sun, Dec 6, 2015 at 10:24 PM, Craig Ringer 
wrote:

* A way to securely make a libpq connection from a bgworker without

messing

with passwords etc. Generate one-time cookies, something like that.


Why would you have the bgworker connect to the database via TCP
instead of just doing whatever it wants to do directly?



pg_dump and pg_restore, mainly, for copying the initial database state.


Well, you don't necessarily want to connect directly from the bgworker,
but from processes started from a bgworker. I guess that's where a good
bit of Robert's confusion originated.


That's part of it, yeah.  I'm a little scared of this design.  I mean,
I understand now why Craig wants to do this (thanks for explaining,
Craig!), but it seems like it's going to have a lot of the same
reliability problems that pg_upgrade does.  I'm not saying there's a
better way to get the functionality, but it's pretty obvious that
depending on tools other than the server itself, and in particular
pg_dump, vastly increases the failure surface area.



Well, it's better than trying to write a completely new catalog dump tool 
for this. As Craig said, it would be best if pg_dump functionality was 
moved to functions in a backend, but that's probably not gonna happen 
tomorrow.


--
 Petr Jelinek  http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [DOCS] max_worker_processes on the standby

2015-12-11 Thread Alvaro Herrera
Alvaro Herrera wrote:

> Not sure what is going on; my reading of the code certainly says that
> the data should be there.  I'm looking into it.
> 
> I also noticed that I didn't actually push the whole of the patch
> yesterday -- I neglected to "git add" the latest changes, the ones that
> fix the promotion scenario :-( so the commit message is misleading
> because it describes something that's not there.

Pushed a fix.

I also wrote some tests using the RecoveryNode stuff submitted by
Michael Paquier.  These aren't yet pushed, because we don't have the
framework; once we have that I can push them too.  As far as I can tell,
these tests exercise all the cases that have been pointed out so far; I
can see some of them fail if I run on previous commits.

Thanks for the continued testing.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch: fix lock contention for HASHHDR.mutex

2015-12-11 Thread Aleksander Alekseev
Oops. s/approve or disapprove/confirm or deny/

On Fri, 11 Dec 2015 19:14:41 +0300
Aleksander Alekseev  wrote:

> Hello, Tom
> 
> I see your point, but I would like to clarify a few things.
> 
> 1. Do we consider the described measurement method good enough to conclude
> that sometimes PostgreSQL really spends 3 ms in a spinlock (like an RTT
> between two Internet hosts in the same city)? If not, what method
> should be used to approve or disapprove this?
> 
> 2. If we agree that PostgreSQL does sometimes spend 3 ms in a spinlock
> do we consider this a problem?
> 
> 3. If we consider this a problem, what method is considered
> appropriate to find a real reason of such behaviour so we could fix
> it?
> 
> Best regards,
> Aleksander
> 
> 



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Uninterruptible slow geo_ops.c

2015-12-11 Thread Alvaro Herrera
Hi,

A customer of ours hit some very slow code while running the
@>(polygon, polygon) operator with some big polygons.  I'm not familiar
with this stuff but I think the problem is that the algorithm converges
too slowly to a solution and also has some pretty expensive calls
somewhere.  (Perhaps there is also a problem that the algorithm *never*
converges for some inputs ...)

While I'm not familiar with the code itself, and can't post the exact
slow query just yet, I have noticed that it is missing a
CHECK_FOR_INTERRUPTS() call to enable cancelling the slow query.  I'd
backpatch this all the way back.  (The exact issue they hit is mutual
recursion between touched_lseg_between_poly and lseg_between_poly.
Since the latter also recurses on itself, the best way forward seems to be
to add a check for interrupts in the loop there.)

I will follow up on the actual slowness later, as warranted.

-- 
Álvaro Herrerahttp://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
diff --git a/src/backend/utils/adt/geo_ops.c b/src/backend/utils/adt/geo_ops.c
index 6ef420d..77871b1 100644
--- a/src/backend/utils/adt/geo_ops.c
+++ b/src/backend/utils/adt/geo_ops.c
@@ -20,6 +20,7 @@
 #include 
 
 #include "libpq/pqformat.h"
+#include "miscadmin.h"
 #include "utils/builtins.h"
 #include "utils/geo_decls.h"
 
@@ -3894,6 +3895,8 @@ lseg_inside_poly(Point *a, Point *b, POLYGON *poly, int start)
 	{
 		Point	   *interpt;
 
+		CHECK_FOR_INTERRUPTS();
+
 		s.p[1] = poly->p[i];
 
 		if (on_ps_internal(t.p, ))

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Uninterruptible slow geo_ops.c

2015-12-11 Thread Marco Nenciarini
On 11/12/15 18:48, Alvaro Herrera wrote:
> Hi,
> 
> A customer of ours hit some very slow code while running the
> @>(polygon, polygon) operator with some big polygons.  I'm not familiar
> with this stuff but I think the problem is that the algorithm converges
> too slowly to a solution and also has some pretty expensive calls
> somewhere.  (Perhaps there is also a problem that the algorithm *never*
> converges for some inputs ...)
> 
> While I'm not familiar with the code itself, and can't post the exact
> slow query just yet, I have noticed that it is missing a
> CHECK_FOR_INTERRUPTS() call to enable cancelling the slow query.  I'd
> backpatch this all the way back.  (The exact issue they hit is mutual
> recursion between touched_lseg_between_poly and lseg_between_poly.
> Since the latter also recurses on itself, the best way forward seems to be
> to add a check for interrupts in the loop there.)
> 
> I will follow up on the actual slowness later, as warranted.
> 

I would add that it was not simply a slow computation, but more probably they 
hit a case where the algorithm doesn't converge at all.

I've killed it manually by calling ProcessInterrupts() through gdb after
seven and a half days of CPU time (100% of one CPU).
The server CPU is an Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz.

The query doesn't involve any table and is a simple call of @>(polygon, 
polygon) operator.

SELECT polygon 'polygon literal with 522 points' @> polygon 'polygon box'

I'm checking if we can share the full query.

Regards,
Marco

-- 
Marco Nenciarini - 2ndQuadrant Italy
PostgreSQL Training, Services and Support
marco.nenciar...@2ndquadrant.it | www.2ndQuadrant.it





[HACKERS] Remove array_nulls?

2015-12-11 Thread Jim Nasby
A quick doc search indicates this config was created in 9.0, though the 
docs state it's for a change that happened in 8.2[1]. Both versions are 
now supported, and 8.2 is obviously ancient.


Is it time to remove this GUC?

[1] http://www.postgresql.org/docs/9.0/static/runtime-config-compatible.html
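
For anyone who hasn't touched it, this is the pre-8.2 compatibility
behavior the GUC still toggles:

SET array_nulls = on;        -- default since 8.2: unquoted NULL is a null element
SELECT '{NULL,a}'::text[];   -- {NULL,a}
SET array_nulls = off;       -- pre-8.2 behavior: NULL is the literal string
SELECT '{NULL,a}'::text[];   -- {"NULL",a}
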
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Cluster "stuck" in "not accepting commands to avoid wraparound data loss"

2015-12-11 Thread Jeff Janes
On Thu, Dec 10, 2015 at 1:55 PM, Andres Freund  wrote:
> Hi,
>
> I recently started a pgbench benchmark (to evaluate a piece of hardware,
> not postgres) with master. Unfortunately, by accident, I started
> postgres in a shell, not screen like pgbench.
>
> Just logged back in and saw:
> client 71 aborted in state 8: ERROR:  database is not accepting commands to 
> avoid wraparound data loss in database "postgres"
> HINT:  Stop the postmaster and vacuum that database in single-user mode.
> You might also need to commit or roll back old prepared transactions.
> transaction type: TPC-B (sort of)
> scaling factor: 300
> query mode: prepared
> number of clients: 97
> number of threads: 97
> duration: 30 s
> number of transactions actually processed: 2566862424
> latency average: 3.214 ms
> latency stddev: 7.336 ms
> tps = 30169.374133 (including connections establishing)
> tps = 30169.378406 (excluding connections establishing)
>
> Hm. Bad news. We apparently didn't keep up vacuuming. But worse news is
> that even now, days later, autovacuum hasn't progressed:
> postgres=# select txid_current();
> ERROR:  database is not accepting commands to avoid wraparound data loss in 
> database "postgres"
> HINT:  Stop the postmaster and vacuum that database in single-user mode.
> You might also need to commit or roll back old prepared transactions.

This is still in regular mode, correct?

I don't think this has ever worked.  Vacuum needs to start a
transaction in order to record its update of datfrozenxid and
relfrozenxid in the catalogs (or at least, it starts one for something).
Once you are within 1,000,000 transactions of wraparound, you have to do
the vacuum in single-user mode; you can no longer just wait for
autovacuum to do its thing.  Otherwise the vacuum will do all of its
work, but then fail to clear the error condition.



>
> Looking at datfrozenxid:
> postgres=# select datname, datfrozenxid, age(datfrozenxid) FROM pg_database ;
>   datname  | datfrozenxid |age
> ---+--+---
>  template1 |   3357685367 | 0
>  template0 |   3357685367 | 0
>  postgres  |   3159867733 | 197817634
> (3 rows)
> reveals that the launcher doesn't do squat because it doesn't think it
> needs to do anything.
>
> (gdb) p *ShmemVariableCache
> $3 = {nextOid = 24576, oidCount = 0, nextXid = 3357685367, oldestXid = 
> 1211201715, xidVacLimit = 1411201715, xidWarnLimit = 3347685362,
>   xidStopLimit = 3357685362, xidWrapLimit = 3358685362, oldestXidDB = 12380, 
> oldestCommitTs = 0, newestCommitTs = 0,
>   latestCompletedXid = 3357685366}
>
> 'oldestXid' shows the problem: we're indeed pretty close to a
> wraparound.
>
>
> The question is, how did we get here?

Could the database have undergone a crash and recovery cycle?

Since changes to datfrozenxid are WAL logged at the time they occur,
but the supposedly-synchronous change to ShmemVariableCache is not WAL
logged until the next checkpoint, a well-timed crash can leave you in
the state where the system is in a tizzy about wraparound but each
database says "Nope, not me".

Since with default settings each database/table gets frozen 10 times
per real wraparound cycle, this is usually not going to be a problem, as
having 10 consecutive well-timed crashes is very unlikely.

But if you increase autovacuum_freeze_max_age a lot, or if the freeze
scan takes so long that there is only time to complete one and a
fraction of them during a single real wraparound interval, then just
a single crash can leave you destined for trouble.

Cheers,

Jeff




Re: [HACKERS] Making tab-complete.c easier to maintain

2015-12-11 Thread Michael Paquier
On Thu, Dec 10, 2015 at 5:38 PM, Kyotaro HORIGUCHI
 wrote:
> I'm unhappy with context matching using previous_words on two
> points. First, the current code needs human-readable comments
> describing almost all matchings; that is hard to maintain, and some
> of the comments are actually wrong. This hardness is largely
> alleviated by Thomas's approach, except for the complex cases. Second,
> the previous_words method is not adaptable enough for optional words
> in syntax. For example, CREATE INDEX has a complex syntax, and the
> current, rather complex code does not cover it fully (or well enough).

Yep.

> On the other hand, regexps are quite heavyweight. The current code
> does one completion in about 1 millisecond, but with the current
> matching code simply replaced by regexps it takes nearly 100 ms in my
> environment. Appropriate refactoring reduces that to under 10 ms.

That's quite a difference in performance. Good responsiveness is always
nice for such things, to keep the user comfortable.
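
For illustration, the kind of refactoring that recovers most of that
cost would be to compile each pattern once and cache it, since
regcomp() dominates and regexec() itself is cheap. This is only a
standalone sketch of the idea, not Horiguchi-san's actual code; the
pattern and names are made up:

#include <regex.h>
#include <stdbool.h>
#include <stdio.h>

/* One completion rule: a tail-matching pattern plus its lazily
 * compiled form.  Compilation happens at most once per pattern. */
typedef struct
{
    const char *pattern;
    regex_t     re;
    bool        compiled;
} CachedRegex;

static bool
tail_matches(CachedRegex *cr, const char *input)
{
    if (!cr->compiled)
    {
        /* Pay the expensive regcomp() only on first use. */
        if (regcomp(&cr->re, cr->pattern,
                    REG_EXTENDED | REG_ICASE | REG_NOSUB) != 0)
            return false;       /* treat an unparsable pattern as no match */
        cr->compiled = true;
    }
    return regexec(&cr->re, input, 0, NULL, 0) == 0;
}

int
main(void)
{
    /* Hypothetical rule for completing after CREATE [UNIQUE] INDEX. */
    static CachedRegex create_index = { "CREATE( UNIQUE)? INDEX$" };

    printf("%d\n", tail_matches(&create_index, "CREATE UNIQUE INDEX")); /* 1 */
    printf("%d\n", tail_matches(&create_index, "DROP INDEX"));          /* 0 */
    return 0;
}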

> If we need more powerful completion (meaning it covers a wider area
> of syntax, including more optional words), Thomas's approach would
> face difficulties of another level of complexity. I'd like to
> overcome that.

That's a valid concern for sure, because Thomas's patch is not very
smart about emulating negative checks. Still, the main idea of no
longer relying on checks based on pg_strcmp or similar, but having
something list-based with a primitive sub-language in it, is very
appealing.

More input from committers and hackers who have worked on psql's tab
completion would be a nice next step. IMO, as someone who has hacked
tab-complete.c a couple of times, I think that Thomas's patch has
merit, though it would make backpatching harder. Also, if we decide to
prioritize a dynamically generated tab completion using gram.y, so be
it, and let's reject both patches then...
-- 
Michael




Re: [HACKERS] Move PinBuffer and UnpinBuffer to atomics

2015-12-11 Thread Alexander Korotkov
On Thu, Dec 10, 2015 at 9:26 AM, Amit Kapila 
wrote:

> On Wed, Dec 9, 2015 at 2:17 PM, Alexander Korotkov <
> a.korot...@postgrespro.ru> wrote:
>
>> On Tue, Dec 8, 2015 at 6:00 PM, Amit Kapila 
>> wrote:
>>
>>> On Tue, Dec 8, 2015 at 3:56 PM, Alexander Korotkov <
>>> a.korot...@postgrespro.ru> wrote:

 Agreed. This patch needs to be verified carefully; the current
 experiments just show that it is a promising direction for
 improvement. I'll come back with a better version of this patch.

 Also, after testing on large machines I have another observation to
 share. Currently, LWLock doesn't guarantee that an exclusive lock will
 ever be acquired (even assuming each shared lock is held for a finite
 time), because while no exclusive lock is held, new shared lockers
 aren't queued and the LWLock state is changed directly. Thus, a
 process trying to acquire an exclusive lock has to wait for a gap in
 the shared locks.

>>>
>>> I think this has the potential to starve exclusive lockers in the worst case.
>>>
>>>
 But with high concurrency for shared locks that could happen very
 rarely, perhaps never.

 We did see this on a big Intel machine in practice: pgbench -S takes
 the shared ProcArrayLock very frequently. Once a certain number of
 connections is reached, new connections hang on acquiring the exclusive
 ProcArrayLock. I think we could build a workaround for this problem.
 For instance, when an exclusive lock waiter has waited past some
 timeout, it could set a special bit which prevents others from taking
 new shared locks.


>>> I think a timeout-based solution would lead to giving priority to
>>> exclusive lock waiters (assume a case where the exclusive lock
>>> waiters time out one after another) and make shared lockers wait,
>>> and a timer-based solution might turn out to be costly for general
>>> cases where the wait is not so long.
>>>
>>
>> Since all LWLock waiters are ordered in the queue, we can let only
>> the first waiter set this bit.
>>
>
> That's okay, but still, every time an exclusive locker wakes up, the
> threshold time for its wait might already be over, and it will set
> the bit.  In theory that looks okay, but compared to the current
> algorithm it will cause more shared lockers to be added to the wait
> queue.
>
>
>> Anyway, once the bit is set, shared lockers would be added to the
>> queue. They would get the lock in queue order.
>>
>>
>
> Yes, that's right, but I think in general the solution to this problem
> should be to not let any exclusive locker starve while still allowing
> as many shared lockers as possible.  I think it is important here how
> we define starving: should it be based on time or something else?  I
> find a timer-based solution somewhat less suitable, but maybe it is
> okay if there is no better way.
>

Yes, we probably should find something better.

>>> Another way could be to check if the exclusive locker needs to go
>>> through repeated waits a couple of times; then we can set such a
>>> bit.
>>>
>>
>> I'm not sure what you mean by repeated wait. Do you mean the
>> exclusive locker was woken up twice by timeout?
>>
>
> I mean to say that once the exclusive locker is woken up, it retries
> acquiring the lock as it does today, but if it finds that the number
> of retries is greater than a certain threshold (let us say 10), then
> we set the bit.
>

Yes, there is a retry loop in the LWLockAcquire function. A retry
happens when a waiter is woken up but someone else steals the lock
first. A lock waiter is woken up by the lock releaser only when the
lock becomes free, but under high concurrency for shared locks the lock
almost never becomes free, so the exclusive locker may never be woken
up at all. I'm pretty sure this is what happens on the big Intel
machine while we run the benchmark, so relying on the number of retries
wouldn't work in this case. I'll run tests to verify whether retries
happen in our case.
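
To make the discussion concrete, here is a minimal standalone sketch of
the scheme (an anti-starvation bit set by the first queued exclusive
waiter, after which new shared lockers must queue instead of taking the
lock directly). It uses C11 atomics rather than our lwlock.c
primitives, and all names are made up:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define EXCLUSIVE_PENDING  (UINT32_C(1) << 31)  /* the proposed bit */
#define EXCLUSIVE_HELD     (UINT32_C(1) << 30)

typedef struct
{
    _Atomic uint32_t state;     /* shared holder count plus flag bits */
} SketchLWLock;

/* Fast path for shared lockers: succeeds only while no exclusive
 * waiter has flagged itself.  On false, the caller must join the wait
 * queue, which hands out the lock in arrival order, so the exclusive
 * waiter can no longer be starved by a stream of shared lockers. */
static bool
shared_try_acquire(SketchLWLock *lock)
{
    uint32_t old = atomic_load(&lock->state);

    while ((old & (EXCLUSIVE_PENDING | EXCLUSIVE_HELD)) == 0)
    {
        if (atomic_compare_exchange_weak(&lock->state, &old, old + 1))
            return true;        /* took one shared reference */
    }
    return false;               /* bit set: go queue up instead */
}

/* Called only by the first queued exclusive waiter (per the refinement
 * above), e.g. after its wait has exceeded some threshold. */
static void
exclusive_mark_pending(SketchLWLock *lock)
{
    atomic_fetch_or(&lock->state, EXCLUSIVE_PENDING);
}

/* The exclusive waiter still waits for existing shared holders to
 * drain, but no new ones can sneak in ahead of it. */
static bool
exclusive_try_acquire(SketchLWLock *lock)
{
    uint32_t expected = EXCLUSIVE_PENDING;  /* bit set, zero holders */

    return atomic_compare_exchange_strong(&lock->state, &expected,
                                          EXCLUSIVE_HELD);
}

int
main(void)
{
    SketchLWLock lock = { 0 };

    shared_try_acquire(&lock);          /* shared count -> 1 */
    exclusive_mark_pending(&lock);      /* first exclusive waiter flags */
    /* New shared lockers now fail the fast path and must queue: */
    bool blocked = !shared_try_acquire(&lock);

    atomic_fetch_sub(&lock.state, 1);   /* last shared holder releases */
    bool got_exclusive = exclusive_try_acquire(&lock);

    return (blocked && got_exclusive) ? 0 : 1;
}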

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: [HACKERS] Bootstrap DATA is a pita

2015-12-11 Thread Caleb Welton
I'm happy to carry these ideas forward if there is interest.

Basic design proposal is:
  - keep a minimal amount of bootstrap to avoid intrusive changes to core
components
  - Add capabilities of creating objects with specific OIDs via DDL during
initdb
  - Update the caching/resolution mechanism for builtin functions to be
more dynamic.
  - Move as much of bootstrap as possible into SQL files and create catalog
via DDL

Feedback appreciated.

I can provide a sample patch if there is interest, about ~500 lines of
combined diff for the needed infrastructure to support the above, not
including the modifications to pg_proc.h that would follow.
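
As a flavor of the resolution change: the dynloader piece boils down to
looking up a builtin's symbol in the running executable instead of in a
fixed builtin table. A hypothetical standalone sketch (stand-in types,
not the actual patch; on Linux the binary needs -rdynamic and -ldl so
that its own symbols are visible):

#include <dlfcn.h>
#include <stdio.h>

/* Stand-ins for PostgreSQL's fmgr types, just so this compiles on
 * its own. */
typedef struct FunctionCallInfoData FunctionCallInfoData;
typedef unsigned long Datum;
typedef Datum (*PGFunction) (FunctionCallInfoData *fcinfo);

/* Resolve a builtin function by name from the current executable.
 * dlopen(NULL) yields a handle for the main program itself, so no
 * shared library is involved. */
static PGFunction
lookup_builtin(const char *symbol)
{
    void       *self = dlopen(NULL, RTLD_NOW);

    if (self == NULL)
        return NULL;
    return (PGFunction) dlsym(self, symbol);
}

int
main(void)
{
    /* In postgres itself this would find the builtin; in this toy
     * binary it just demonstrates the call and prints "not resolved". */
    PGFunction  fn = lookup_builtin("lo_export");

    printf("lo_export was %sresolved\n", fn ? "" : "not ");
    return 0;
}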

Thanks,
  Caleb

On Thu, Dec 10, 2015 at 11:47 AM, Caleb Welton wrote:
>
>
> Hello Hackers,
>
>   Reviving an old thread on simplifying the bootstrap process.
>
> I'm a developer from the GPDB / HAWQ side of the world, where we did some
> work a while back to enable catalog definition via SQL files, and we have
> found it valuable from a dev perspective.  The mechanism currently in those
> products is a bit convoluted: SQL is processed in Perl to create the
> existing DATA statements, which are then processed as they are today in
> Postgres.  I wouldn't suggest this route, but having worked with both the
> DATA mechanism and the SQL-based one, I've certainly found SQL to be a more
> convenient way of interacting with the catalog.
>
>   I'd propose:
>  - Keep enough of the existing bootstrap mechanism functional to get a
> small tidy core: essentially you need enough of pg_type, pg_proc, pg_class,
> and pg_attribute to support the 25 types used by catalog tables, and most
> everything else can be moved into SQL processing, like how system_views.sql
> is handled today.
>
>   The above was largely proposed back in March and rejected based on
> concerns that
>
>   1. initdb would be slower.
>   2. It would introduce too much special purpose bootstrap cruft into the
> code.
>   3. Editing SQL commands is not comfortable in bulk
>
> On 1.
>
> I have a prototype that handles about 1000 functions (all the functions in
> pg_proc.h that are not used by other catalog tables, e.g. pg_type,
> pg_language, pg_range, pg_aggregate, window functions, pg_ts_parser, etc).
>
> All of initdb can be processed in 1.53s. This compares to 1.37s with the
> current bootstrap approach.  So yes, this is slower, but not 'noticeably
> slower' - I certainly didn't notice the 0.16s until I saw the concern and
> then timed it.
>
> On 2.
>
> So far the amount of cruft has been:
>   - Enabling adding functions with specific OIDs when creating functions.
> 1 line changes in pg_aggregate.c, proclang.c, typecmds.c
> about dozen lines of code in functioncmds.c
> 3 lines changed in pg_proc.c
>   - Update the fmgr_internal_validator for builtin functions while the
> catalog is mutable
> 3 lines changed in pg_proc.c
>   - Update how the builtin function cache is built
> Some significant work in fmgr.c that honestly still needs cleanup
> before it would be ready to propose as a patch that would be worthy of
> committing.
>   - Update how builtin functions are resolved outside of bootstrap
> Minor updates to dynloader for lookup of symbols within the current
> executable; so far I've only done darwin.c for my prototype, and this
> would need to be extended to the other ports.
>   - Initialization of the builtin cache
> 2 line change in postinit.c
>   - Addition of a stage in initdb to process the sql directives similar in
> scope to the processing of system_views.sql.
>
> No changes needed in the parser, planner, etc.  My assessment is that this
> worry is not a major concern in practice with the right implementation.
>
> On 3.
>
> Having worked with both SQL and bki DATA directives, I have personally found
> that the convenience of SQL outweighs the pain.  In many cases changes, such
> as adding a new column to pg_proc, have minimal impact on the SQL
> representation and what changes are needed are often simple to implement.
> E.g. accounting for COST only needs to be done for the functions that need
> something other than the default value.  This however is somewhat
> subjective.
>
> On the Pros side:
>
>   a. Debugging bootstrap is extremely painful; debugging once initdb has
> gotten to 'postgres --single' is way easier.
>
>   b. It is easier to introduce minor issues with DATA directives than it is
> when using the SQL processing used for all other user objects.
>
>Example: currently in Postgres all builtin functions default to COST 1,
> and all SQL functions default to cost 100. However the following SQL
> functions included in bootstrap inexplicably are initialized with a COST of
> 1:
>age(timestamp with time zone)
>age(timestamp without time zone)
>bit_length(bytea)
>bit_length(text)
>bit_length(bit)
>date_part(text, abstime)
>date_part(text, reltime)
>date_part(text, date)
>... and 26 other examples
>
>   c. SQL files are significantly 

Re: [HACKERS] [sqlsmith] Failed to generate plan on lateral subqueries

2015-12-11 Thread Peter Geoghegan
On Fri, Dec 11, 2015 at 7:58 AM, Tom Lane  wrote:
>> I guess that means I have to go back to extending the grammar again :-).
>
> I await the results with interest.  Did you note the suggestion about
> trying to stress the ON CONFLICT code with this?  You'd need it to
> issue non-SELECT queries, which might create some reproducibility
> issues...

About 80% of the bugs we've seen so far are the type that a tool like
sqlsmith could plausibly catch: bugs that trigger defensive "can't
happen" elog(ERROR, ... ) calls within the planner and rewriter. While
I've been vigilant, I certainly wouldn't be surprised if more were
found, given the total flexibility of the ON CONFLICT syntax.

-- 
Peter Geoghegan




Re: [HACKERS] [sqlsmith] Failed to generate plan on lateral subqueries

2015-12-11 Thread Greg Stark
There may be other errors that would be surprising for Tom or Robert. What
I did with the string-argument fuzzer was to print counts of errors per
sqlstate and watch for errors that only occurred occasionally or
didn't make sense to me.

Also, do you have any timeouts? Do you have any stats on how long these
queries are taking to plan? What's the longest query to plan you've found?

Do you have coverage data for the corpus? Maybe we could suggest syntaxes
specifically aimed at getting coverage for sections of code that don't
have any yet.
On 11 Dec 2015 19:25, "Peter Geoghegan"  wrote:

> On Fri, Dec 11, 2015 at 7:58 AM, Tom Lane  wrote:
> >> I guess that means I have to go back to extending the grammar again :-).
> >
> > I await the results with interest.  Did you note the suggestion about
> > trying to stress the ON CONFLICT code with this?  You'd need it to
> > issue non-SELECT queries, which might create some reproducibility
> > issues...
>
> About 80% of the bugs we've seen so far are the type that a tool like
> sqlsmith could plausibly catch: bugs that trigger defensive "can't
> happen" elog(ERROR, ... ) calls within the planner and rewriter. While
> I've been vigilant, I certainly wouldn't be surprised if more were
> found, given the total flexibility of the ON CONFLICT syntax.
>
> --
> Peter Geoghegan
>


Re: [HACKERS] Logical replication and multimaster

2015-12-11 Thread Robert Haas
On Fri, Dec 11, 2015 at 5:16 AM, Andres Freund  wrote:
> On 2015-12-11 18:12:55 +0800, Craig Ringer wrote:
>> On 10 December 2015 at 03:19, Robert Haas  wrote:
>> > On Sun, Dec 6, 2015 at 10:24 PM, Craig Ringer 
>> > wrote:
>> > > * A way to securely make a libpq connection from a bgworker without
>> > messing
>> > > with passwords etc. Generate one-time cookies, sometihng like that.
>> >
>> > Why would you have the bgworker connect to the database via TCP
>> > instead of just doing whatever it wants to do directly?
>
>> pg_dump and pg_restore, mainly, for copying the initial database state.
>
> Well, you don't necessarily want to connect directly from the bgworker,
> but from processes started by a bgworker. I guess that's where a good
> bit of Robert's confusion originated.

That's part of it, yeah.  I'm a little scared of this design.  I mean,
I understand now why Craig wants to do this (thanks for explaining,
Craig!), but it seems like it's going to have a lot of the same
reliability problems that pg_upgrade does.  I'm not saying there's a
better way to get the functionality, but it's pretty obvious that
depending on tools other than the server itself, and in particular
pg_dump, vastly increases the failure surface area.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[HACKERS] Isolation of table creation

2015-12-11 Thread Alexander Korotkov
Hackers,

I discovered an interesting issue with PostgreSQL transaction isolation.
When a transaction uses the repeatable read isolation level, I can't see a
table which was created after the transaction obtained its snapshot, but I
can run DML statements against that table. See the example below.

Session 1
# begin transaction isolation level repeatable read;
BEGIN
# \dt
No relations found.
Session 2
# create table tmp (i int not null);
CREATE TABLE
# insert into tmp values (1);
INSERT 0 1

Session 1
# \dt
No relations found.
# select * from tmp;
 i
---
(0 rows)

# insert into tmp values (2);
INSERT 0 1
# select * from tmp;
 i
---
 2
(1 row)
# commit;
COMMIT

Is it a bug?

--
Alexander Korotkov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company


Re: [HACKERS] Bootstrap DATA is a pita

2015-12-11 Thread Mark Dilger

> On Dec 11, 2015, at 2:54 PM, Caleb Welton  wrote:
> 
> The current semantic level is pretty low, somewhat cumbersome, and 
> requires filling in values that, most of the time, the system already has a 
> pretty good idea how to default.
> 
> Compare:
> CREATE FUNCTION lo_export(oid, text) RETURNS integer LANGUAGE internal STRICT 
> AS 'lo_export' WITH (OID=765);  
> 
> DATA(insert OID = 765 (  lo_export   PGNSP PGUID 12 1 0 0 0 f f f 
> f t f v u 2 0 23 "26 25" _null_ _null_ _null_ _null_ _null_ lo_export _null_ 
> _null_ _null_ ));

I would like to hear more about this idea.  Are you proposing that we use 
something
like the above CREATE FUNCTION format to express what is currently being 
expressed
with DATA statements?  That is an interesting idea, though I don't know what 
exactly
that would look like.  If you want to forward this idea, I'd be eager to hear 
your thoughts.
If not, I'll try to make progress with my idea of tab delimited files and such 
(or really,
Alvaro's idea of csv files that I only slightly corrupted).

mark



Re: [HACKERS] Bootstrap DATA is a pita

2015-12-11 Thread Caleb Welton
Makes sense.  

During my own prototyping, what I did was generate the SQL statements by 
querying the existing catalog with SQL. That's way easier than hand-writing 
1000+ function definitions, and not difficult to modify for future changes. I 
can affirm that it was very easy to adapt my existing SQL to account for some 
of the newer features in master.

The biggest challenge was establishing a sort order that ensures both a unique 
ordering and that the dependencies needed for SQL functions have been processed 
before trying to define them. That affects about 4 out of 1000 functions under 
a natural OID ordering.

> On Dec 11, 2015, at 11:43 AM, Alvaro Herrera  wrote:
> 
> Caleb Welton wrote:
>> I'm happy working these ideas forward if there is interest.
>> 
>> Basic design proposal is:
>>  - keep a minimal amount of bootstrap to avoid intrusive changes to core
>> components
>>  - Add capabilities of creating objects with specific OIDs via DDL during
>> initdb
>>  - Update the caching/resolution mechanism for builtin functions to be
>> more dynamic.
>>  - Move as much of bootstrap as possible into SQL files and create catalog
>> via DDL
> 
> I think the point we got stuck last time at was deciding on a good
> format for the data coming from the DATA lines.  One of the objections
> raised for formats such as JSON is that it's trivial for "git merge" (or
> similar tools) to make a mistake because object-end/object-start lines
> are all identical.  And as for the SQL-format version, the objection was
> that it's hard to modify the lines en-masse when modifying the catalog
> definition (new column, etc).  Ideally we would like a format that can
> be bulk-edited without too much trouble.
> 
> A SQL file would presumably not have the merge issue, but mass-editing
> would be a pain.
> 
> Crazy idea: we could just have a CSV file which can be loaded into a
> table for mass changes using regular DDL commands, then dumped back from
> there into the file.  We already know how to do these things, using
> \copy etc.  Since CSV uses one line per entry, there would be no merge
> problems either (or rather: all merge problems would become conflicts,
> which is what we want.)
> 
> -- 
> Álvaro Herrerahttp://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Remove array_nulls?

2015-12-11 Thread Tom Lane
Jim Nasby  writes:
> A quick doc search indicates this config was created in 9.0, though the 
> docs state it's for a change that happened in 8.2[1].

Don't know what you're looking at, but the GUC is definitely there (and
documented) in 8.2.

> Is it time to remove this GUC?

Perhaps, but I'd like to have a less ad-hoc process about it.  What's
our policy for dropping backwards-compatibility GUCs?  Are there any
others that should be removed now as well?

regards, tom lane

