Re: [HACKERS] Postgres_fdw join pushdown - wrong results with whole-row reference
On Sat, Jun 25, 2016 at 12:44 AM, Robert Haas wrote:
> On Wed, Jun 22, 2016 at 4:11 AM, Amit Langote wrote:
> > > In an outer join we have to differentiate between a row being null
> > > (because there was no joining row on the nullable side) and a non-null
> > > row with all column values being null. If we cast the whole-row
> > > expression to text, e.g. r.*::text, and test the resultant value for
> > > nullness, it gives us what we want. A null row cast to text is null,
> > > and a row with all null values cast to text is not null.
> >
> > You are right. There may be non-null rows with all columns null which
> > are handled wrongly (as Rushabh reports), and the hack I proposed is
> > not right for them. Especially if from the non-nullable side, as in the
> > reported case, a NULL test for such a whole-row var would produce the
> > wrong result. Casting to text as your patch does produces the correct
> > behavior.
>
> I agree, but I think we'd better cast to pg_catalog.text instead, just
> to be safe. Committed that way.

postgres_fdw resets the search path to pg_catalog while opening a
connection to the server. The reason behind this is explained in
deparse.c:

 * We assume that the remote session's search_path is exactly "pg_catalog",
 * and thus we need schema-qualify all and only names outside pg_catalog.

So we should not be schema-qualifying types while casting. From deparse.c:

/*
 * Convert type OID + typmod info into a type name we can ship to the remote
 * server.  Someplace else had better have verified that this type name is
 * expected to be known on the remote end.
 *
 * This is almost just format_type_with_typemod(), except that if left to its
 * own devices, that function will make schema-qualification decisions based
 * on the local search_path, which is wrong.  We must schema-qualify all
 * type names that are not in pg_catalog.  We assume here that built-in types
 * are all in pg_catalog and need not be qualified; otherwise, qualify.
 */
static char *
deparse_type_name(Oid type_oid, int32 typemod)
{
	if (is_builtin(type_oid))
		return format_type_with_typemod(type_oid, typemod);
	else
		return format_type_with_typemod_qualified(type_oid, typemod);
}

--
Best Wishes,
Ashutosh Bapat
EnterpriseDB Corporation
The Postgres Database Company
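For reference, the semantic point the thread turns on is easy to see in
plain SQL (a minimal illustration, independent of postgres_fdw):

-- Per the SQL standard, a row value whose fields are all NULL is itself
-- considered NULL, so a whole-row IS NULL test cannot distinguish it from
-- the null row produced by an outer join:
SELECT ROW(NULL, NULL) IS NULL;           -- true

-- Casting the row to text first makes the all-NULL row non-null, which
-- is the distinction the committed fix relies on:
SELECT (ROW(NULL, NULL)::text) IS NULL;   -- false; the text value is '(,)'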
Re: [HACKERS] Rename max_parallel_degree?
On Mon, Jun 27, 2016 at 10:21 AM, Amit Kapila wrote:
> On Sun, Jun 26, 2016 at 3:57 PM, Julien Rouhaud wrote:
> > On 26/06/2016 08:37, Amit Kapila wrote:
> > >
> > > @@ -370,6 +379,8 @@ ForgetBackgroundWorker(slist_mutable_iter *cur)
> > >   Assert(rw->rw_shmem_slot < max_worker_processes);
> > >   slot = &BackgroundWorkerData->slot[rw->rw_shmem_slot];
> > >   slot->in_use = false;
> > > + if (slot->parallel)
> > > +     BackgroundWorkerData->parallel_terminate_count++;
> > >
> > > I think operations on parallel_terminate_count are not safe.
> > > ForgetBackgroundWorker() and RegisterDynamicBackgroundWorker() can try
> > > to read and write at the same time.  It seems you need to use atomic
> > > operations to ensure safety.
> > >
> >
> > But operations on parallel_terminate_count are done by the postmaster,
> > and per Robert's previous email the postmaster can't use atomics:
> >
>
> I think as parallel_terminate_count is only modified by the postmaster
> and read by other processes, such an operation will be considered
> atomic.  We don't need to specifically make it atomic unless multiple
> processes are allowed to modify such a counter.  Although, I think we
> need to use some barriers in the code to make it safe.
>
> 1.
> @@ -272,6 +279,8 @@ BackgroundWorkerStateChange(void)
>   pg_memory_barrier();
>
>   slot->pid = 0;
>   slot->in_use = false;
> + if (slot->parallel)
> +     BackgroundWorkerData->parallel_terminate_count++;
>
> I think the check of slot->parallel and the increment of
> parallel_terminate_count should be done before marking slot->in_use
> false.  Consider the case where slot->in_use is marked false and then,
> before the next line executes, one of the backends registers a dynamic
> worker (a non-parallel worker): that backend can see this slot as free
> and use it.  It will also mark the parallel flag of the slot as false.
> Now when the postmaster checks the slot->parallel flag, it will find it
> false and skip the increment of parallel_terminate_count.
>

If you agree with the above theory, then you need to use a write barrier
as well after incrementing parallel_terminate_count, to avoid reordering
with the store to the slot->in_use flag.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
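A minimal sketch of the ordering being requested above (field and function
names as in the patch under discussion; the barrier placement is an
assumption drawn from the description, not committed code):

	/*
	 * Account for a terminated parallel worker while we still own the
	 * slot, so that no backend can reuse the slot (and reset its
	 * parallel flag) before the counter is bumped.
	 */
	if (slot->parallel)
		BackgroundWorkerData->parallel_terminate_count++;

	pg_write_barrier();		/* publish the counter before freeing the slot */

	slot->pid = 0;
	slot->in_use = false;	/* only now may a backend reuse the slot */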
Re: [HACKERS] Rename max_parallel_degree?
On Sun, Jun 26, 2016 at 3:57 PM, Julien Rouhaud wrote:
> On 26/06/2016 08:37, Amit Kapila wrote:
> >
> > @@ -370,6 +379,8 @@ ForgetBackgroundWorker(slist_mutable_iter *cur)
> >   Assert(rw->rw_shmem_slot < max_worker_processes);
> >   slot = &BackgroundWorkerData->slot[rw->rw_shmem_slot];
> >   slot->in_use = false;
> > + if (slot->parallel)
> > +     BackgroundWorkerData->parallel_terminate_count++;
> >
> > I think operations on parallel_terminate_count are not safe.
> > ForgetBackgroundWorker() and RegisterDynamicBackgroundWorker() can try
> > to read and write at the same time.  It seems you need to use atomic
> > operations to ensure safety.
> >
>
> But operations on parallel_terminate_count are done by the postmaster,
> and per Robert's previous email the postmaster can't use atomics:
>

I think as parallel_terminate_count is only modified by the postmaster
and read by other processes, such an operation will be considered atomic.
We don't need to specifically make it atomic unless multiple processes
are allowed to modify such a counter.  Although, I think we need to use
some barriers in the code to make it safe.

1.
@@ -272,6 +279,8 @@ BackgroundWorkerStateChange(void)
  pg_memory_barrier();

  slot->pid = 0;
  slot->in_use = false;
+ if (slot->parallel)
+     BackgroundWorkerData->parallel_terminate_count++;

I think the check of slot->parallel and the increment of
parallel_terminate_count should be done before marking slot->in_use
false.  Consider the case where slot->in_use is marked false and then,
before the next line executes, one of the backends registers a dynamic
worker (a non-parallel worker): that backend can see this slot as free
and use it.  It will also mark the parallel flag of the slot as false.
Now when the postmaster checks the slot->parallel flag, it will find it
false and skip the increment of parallel_terminate_count.

2.
+ if (parallel && (BackgroundWorkerData->parallel_register_count -
+                  BackgroundWorkerData->parallel_terminate_count) >=
+     max_parallel_workers)
+ {
+     LWLockRelease(BackgroundWorkerLock);
+     return false;
+ }
+

I think we need a read barrier here, so that this check doesn't get
reordered with the for loop below it.  Also, see if you find the code
more readable by moving the part of the check after && to the next line.

3.
typedef struct BackgroundWorkerArray
{
    int         total_slots;
+   uint32      parallel_register_count;
+   uint32      parallel_terminate_count;
    BackgroundWorkerSlot slot[FLEXIBLE_ARRAY_MEMBER];
} BackgroundWorkerArray;

See if you can add comments on top of this structure to explain the
usage/meaning of the newly added parallel_* counters.

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
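As a sketch of what the comments requested in point 3 might say (the
wording here is illustrative, based on this thread's discussion, not
taken from any committed patch):

typedef struct BackgroundWorkerArray
{
	int			total_slots;

	/*
	 * Running totals of parallel workers ever registered and ever
	 * terminated; their difference is the number of active parallel
	 * workers, which RegisterDynamicBackgroundWorker() compares against
	 * max_parallel_workers.  parallel_register_count is advanced by
	 * backends (while holding BackgroundWorkerLock), and
	 * parallel_terminate_count only by the postmaster.  Since each
	 * counter has a single writer and only ever advances, readers need
	 * barriers rather than locks or atomics.
	 */
	uint32		parallel_register_count;
	uint32		parallel_terminate_count;

	BackgroundWorkerSlot slot[FLEXIBLE_ARRAY_MEMBER];
} BackgroundWorkerArray;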
[HACKERS] Bug in batch tuplesort memory CLUSTER case (9.6 only)
In general, moving tuplesort.c batch memory caller tuples around happens
when batch memory needs to be recycled, or freed outright with pfree().

I failed to take into account that CLUSTER tuplesorts need an extra step
when moving caller tuples to a new location (i.e. when moving HeapTuple
caller tuples using memmove()), because their particular variety of caller
tuple happens to itself contain a pointer to palloc()'d memory. Attached
patch fixes this use-after-free bug.

--
Peter Geoghegan

From aa68ed9e3f952bc19aa3bc8e31d0e216e13e0fae Mon Sep 17 00:00:00 2001
From: Peter Geoghegan
Date: Sun, 26 Jun 2016 16:25:07 -0700
Subject: [PATCH] Fix bug in batch tuplesort memory with CLUSTER

Commit 0011c0091 optimized tuplesort.c's use of memory during external
sort final on-the-fly merge steps.  This included handling exhaustion of
large batch memory allocations by moving existing caller tuples to a new,
usable location using memmove().  This failed to take into account one
particular special case, though: CLUSTER tuplesorts.  The CLUSTER case
involves caller tuples that themselves contain a pointer to an offset
into the selfsame caller tuple.  A use-after-free could therefore happen
when the CLUSTER tuplesort client dereferenced this contained pointer.

Instead of calling memmove() directly, affected codepaths now call a
generic MOVETUP() interface routine, reaching a caller-tuple specific
implementation.  The CLUSTER implementation performs a memmove() and
handles this extra detail.  In all other cases, a memmove() shim
implementation is reached.
---
 src/backend/utils/sort/tuplesort.c | 56 +++++++++++++++++++++++++++---
 1 file changed, 53 insertions(+), 3 deletions(-)

diff --git a/src/backend/utils/sort/tuplesort.c b/src/backend/utils/sort/tuplesort.c
index 7878660..4c502bb 100644
--- a/src/backend/utils/sort/tuplesort.c
+++ b/src/backend/utils/sort/tuplesort.c
@@ -300,11 +300,21 @@ struct Tuplesortstate
 	 * the already-read length of the stored tuple.  Create a palloc'd copy,
 	 * initialize tuple/datum1/isnull1 in the target SortTuple struct, and
 	 * decrease state->availMem by the amount of memory space consumed.
+	 * (See batchUsed notes for details on how memory is handled when
+	 * incremental accounting is abandoned.)
 	 */
 	void		(*readtup) (Tuplesortstate *state, SortTuple *stup,
 							int tapenum, unsigned int len);
 
 	/*
+	 * Function to move a caller tuple.  This is usually implemented as a
+	 * memmove() shim, but function may also perform additional fix-up of
+	 * caller tuple where needed.  Batch memory support requires the
+	 * movement of caller tuples from one location in memory to another.
+	 */
+	void		(*movetup) (void *dest, void *src, unsigned int len);
+
+	/*
 	 * This array holds the tuples now in sort memory.  If we are in state
 	 * INITIAL, the tuples are in no particular order; if we are in state
 	 * SORTEDINMEM, the tuples are in final sorted order; in states BUILDRUNS
@@ -475,6 +485,7 @@ struct Tuplesortstate
 #define COPYTUP(state,stup,tup) ((*(state)->copytup) (state, stup, tup))
 #define WRITETUP(state,tape,stup)	((*(state)->writetup) (state, tape, stup))
 #define READTUP(state,stup,tape,len) ((*(state)->readtup) (state, stup, tape, len))
+#define MOVETUP(dest,src,len) ((*(state)->movetup) (dest, src, len))
 #define LACKMEM(state)		((state)->availMem < 0 && !(state)->batchUsed)
 #define USEMEM(state,amt)	((state)->availMem -= (amt))
 #define FREEMEM(state,amt)	((state)->availMem += (amt))
@@ -542,7 +553,7 @@ static void inittapes(Tuplesortstate *state);
 static void selectnewtape(Tuplesortstate *state);
 static void mergeruns(Tuplesortstate *state);
 static void mergeonerun(Tuplesortstate *state);
-static void beginmerge(Tuplesortstate *state, bool finalMerge);
+static void beginmerge(Tuplesortstate *state, bool finalMergeBatch);
 static void batchmemtuples(Tuplesortstate *state);
 static void mergebatch(Tuplesortstate *state, int64 spacePerTape);
 static void mergebatchone(Tuplesortstate *state, int srcTape,
@@ -571,6 +582,7 @@ static void writetup_heap(Tuplesortstate *state, int tapenum,
 			  SortTuple *stup);
 static void readtup_heap(Tuplesortstate *state, SortTuple *stup,
 			 int tapenum, unsigned int len);
+static void movetup_heap(void *dest, void *src, unsigned int len);
 static int comparetup_cluster(const SortTuple *a, const SortTuple *b,
 				   Tuplesortstate *state);
 static void copytup_cluster(Tuplesortstate *state, SortTuple *stup, void *tup);
@@ -578,6 +590,7 @@ static void writetup_cluster(Tuplesortstate *state, int tapenum,
 				 SortTuple *stup);
 static void readtup_cluster(Tuplesortstate *state, SortTuple *stup,
 				int tapenum, unsigned int len);
+static void movetup_cluster(void *dest, void *src, unsigned int len);
 static int comparetup_index_btree(const SortTuple *a, const SortTuple *b,
 					   Tuplesortstate *state);
 static int comparetup_index_hash(const SortTuple *a, const SortTuple *b,
@@ -587,6 +600,7 @@ static void writ
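The diff above is cut off in the archive before it reaches the new
movetup routines themselves. Going by the commit message, the CLUSTER
fix-up amounts to repointing the tuple's self-referencing t_data pointer
after the move; a plausible sketch of such a routine (reconstructed from
the description above, not copied from the attached patch) is:

static void
movetup_cluster(void *dest, void *src, unsigned int len)
{
	HeapTuple	tuple;

	memmove(dest, src, len);

	/*
	 * A CLUSTER caller tuple is a HeapTupleData header followed by the
	 * tuple body, with t_data pointing into the selfsame allocation.
	 * After moving it, repoint t_data at the body's new location, since
	 * otherwise the caller would dereference the old, recycled memory.
	 */
	tuple = (HeapTuple) dest;
	tuple->t_data = (HeapTupleHeader) ((char *) tuple + HEAPTUPLESIZE);
}

The heap (non-CLUSTER) implementation would then be the plain memmove()
shim the commit message mentions.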
Re: [HACKERS] primary_conninfo missing from pg_stat_wal_receiver
On Sun, Jun 19, 2016 at 05:56:12PM +0900, Michael Paquier wrote:
> The new pg_stat_wal_receiver does not include primary_conninfo.
> Looking at that now, it looks almost stupid not to include it...
> Adding it now would require a catalog bump, so I am not sure if this
> is acceptable at this stage for 9.6...

There is no value in avoiding catversion bumps at this time.

[Action required within 72 hours.  This is a generic notification.]

The above-described topic is currently a PostgreSQL 9.6 open item.
Alvaro, since you committed the patch believed to have created it, you
own this open item.  If some other commit is more relevant or if this
does not belong as a 9.6 open item, please let us know.  Otherwise,
please observe the policy on open item ownership[1] and send a status
update within 72 hours of this message.  Include a date for your
subsequent status update.  Testers may discover new open items at any
time, and I want to plan to get them all fixed well in advance of
shipping 9.6rc1.  Consequently, I will appreciate your efforts toward
speedy resolution.  Thanks.

[1] http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com
Re: [HACKERS] ERROR: ORDER/GROUP BY expression not found in targetlist
On Mon, Jun 13, 2016 at 05:27:13PM -0400, Robert Haas wrote:
> One problem that I've realized is related to this: the way that the
> consider_parallel flag is being set for upper rels is almost totally
> bogus, which I believe accounts for your complaint at PGCon that
> force_parallel_mode was not doing as much as it ought to do.
> When I originally wrote a lot of this logic, there were no upper rels,
> which led to this code:
>
>     if (force_parallel_mode == FORCE_PARALLEL_OFF || !glob->parallelModeOK)
>     {
>         glob->parallelModeNeeded = false;
>         glob->wholePlanParallelSafe = false;    /* either false or don't care */
>     }
>     else
>     {
>         glob->parallelModeNeeded = true;
>         glob->wholePlanParallelSafe =
>             !has_parallel_hazard((Node *) parse, false);
>     }
>
> The main way that wholePlanParallelSafe is supposed to be cleared is
> in create_plan:
>
>     /* Update parallel safety information if needed. */
>     if (!best_path->parallel_safe)
>         root->glob->wholePlanParallelSafe = false;
>
> However, at the time that code was written, it was impossible to rely
> entirely on the latter mechanism.  Since there were no upper rels and
> no paths for upper plan nodes, you could have the case where every
> path was parallel-safe but the whole plan was not.  Therefore the
> code shown above was needed to scan the whole darned plan for
> parallel-unsafe things.  Post-pathification, this whole thing is
> pretty bogus: upper rels mostly don't get consider_parallel set at
> all, which means plans involving upper rels get marked parallel-unsafe
> even if they are not, which means the wholePlanParallelSafe flag gets
> cleared in a bunch of cases where it shouldn't.  And on the flip side
> I think that the first chunk of code quoted above would be totally
> unnecessary if we were actually setting consider_parallel correctly on
> the upper rels.

[Action required within 72 hours.  This is a generic notification.]

The above-described topic is currently a PostgreSQL 9.6 open item ("set
consider_parallel correctly for upper rels").  Robert, since you
committed the patch believed to have created it, you own this open item.
If some other commit is more relevant or if this does not belong as a
9.6 open item, please let us know.  Otherwise, please observe the policy
on open item ownership[1] and send a status update within 72 hours of
this message.  Include a date for your subsequent status update.
Testers may discover new open items at any time, and I want to plan to
get them all fixed well in advance of shipping 9.6rc1.  Consequently, I
will appreciate your efforts toward speedy resolution.  Thanks.

[1] http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com
Re: [HACKERS] ERROR: ORDER/GROUP BY expression not found in targetlist
On Mon, Jun 13, 2016 at 04:46:19PM -0400, Tom Lane wrote:
> Robert Haas writes:
> > In practice, we don't yet have the ability for
> > parallel-safe paths from subqueries to affect planning at higher query
> > levels, but that's because the pathification stuff landed too late in
> > the cycle for me to fully react to it.
>
> I wonder if that's not just from confusion between subplans and
> subqueries.  I don't believe any of the claims made in the comment
> adjusted in the patch below (other than "Subplans currently aren't passed
> to workers", which is true but irrelevant).  Nested Gather nodes is
> something that you would need, and already have, a defense for anyway.

[Action required within 72 hours.  This is a generic notification.]

The above-described topic is currently a PostgreSQL 9.6 open item ("fix
possible confusion between subqueries and subplans").  Robert, since you
committed the patch believed to have created it, you own this open item.
If some other commit is more relevant or if this does not belong as a
9.6 open item, please let us know.  Otherwise, please observe the policy
on open item ownership[1] and send a status update within 72 hours of
this message.  Include a date for your subsequent status update.
Testers may discover new open items at any time, and I want to plan to
get them all fixed well in advance of shipping 9.6rc1.  Consequently, I
will appreciate your efforts toward speedy resolution.  Thanks.

[1] http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com
Re: [HACKERS] ERROR: ORDER/GROUP BY expression not found in targetlist
On Mon, Jun 13, 2016 at 03:42:49PM -0400, Tom Lane wrote:
> I wrote:
> > ... there was also an unexplainable plan change:
>
> > *** /home/postgres/pgsql/src/test/regress/expected/aggregates.out  Thu Apr  7 21:13:14 2016
> > --- /home/postgres/pgsql/src/test/regress/results/aggregates.out   Mon Jun 13 11:54:01 2016
> > ***************
> > *** 577,590 ****
> >     explain (costs off)
> >       select max(unique1) from tenk1 where unique1 > 42000;
> > !                                 QUERY PLAN
> > ! ---------------------------------------------------------------------------
> > !  Result
> > !    InitPlan 1 (returns $0)
> > !      ->  Limit
> > !            ->  Index Only Scan Backward using tenk1_unique1 on tenk1
> > !                  Index Cond: ((unique1 IS NOT NULL) AND (unique1 > 42000))
> > ! (5 rows)
> >
> >     select max(unique1) from tenk1 where unique1 > 42000;
> >      max
> > --- 577,588 ----
> >     explain (costs off)
> >       select max(unique1) from tenk1 where unique1 > 42000;
> > !                      QUERY PLAN
> > ! ----------------------------------------------------
> > !  Aggregate
> > !    ->  Index Only Scan using tenk1_unique1 on tenk1
> > !          Index Cond: (unique1 > 42000)
> > ! (3 rows)
> >
> >     select max(unique1) from tenk1 where unique1 > 42000;
> >      max
>
> > I would not be surprised at a change to a parallel-query plan, but there's
> > no parallelism here, so what happened?  This looks like a bug to me.
> > (Also, doing this query without COSTS OFF shows that the newly selected
> > plan actually has a greater estimated cost than the expected plan, which
> > makes it definitely a bug.)
>
> I looked into this and found that the costs are considered fuzzily the
> same, and then add_path prefers the slightly-worse path on the grounds
> that it is marked parallel_safe while the MinMaxAgg path is not.  It seems
> to me that there is some fuzzy thinking going on there.  On exactly what
> grounds is a path to be preferred merely because it is parallel safe, and
> not actually parallelized?  Or perhaps the question to ask is whether a
> MinMaxAgg path can be marked parallel-safe.

[Action required within 72 hours.  This is a generic notification.]

The above-described topic is currently a PostgreSQL 9.6 open item
("consider whether MinMaxAggPath might fail to be parallel-safe").
Robert, since you committed the patch believed to have created it, you
own this open item.  If some other commit is more relevant or if this
does not belong as a 9.6 open item, please let us know.  Otherwise,
please observe the policy on open item ownership[1] and send a status
update within 72 hours of this message.  Include a date for your
subsequent status update.  Testers may discover new open items at any
time, and I want to plan to get them all fixed well in advance of
shipping 9.6rc1.  Consequently, I will appreciate your efforts toward
speedy resolution.  Thanks.

[1] http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com
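For readers following along, the add_path() behavior being questioned is
roughly this (a deliberately simplified sketch of the fuzzy cost
comparison and its parallel-safety tiebreak; the real code also weighs
startup cost, pathkeys, rowcounts, and relids, and the COSTS_* names
here are illustrative):

	/* Total costs within 1% of each other are treated as equal. */
	#define STD_FUZZ_FACTOR 1.01

	if (path1->total_cost > path2->total_cost * STD_FUZZ_FACTOR)
		costcmp = COSTS_BETTER2;	/* path2 wins on cost */
	else if (path2->total_cost > path1->total_cost * STD_FUZZ_FACTOR)
		costcmp = COSTS_BETTER1;	/* path1 wins on cost */
	else
		costcmp = COSTS_EQUAL;		/* fuzzily the same */

	/*
	 * On a fuzzy cost tie, a parallel-safe path beats a parallel-unsafe
	 * one -- even in a query that will never use parallelism, which is
	 * how the slightly more expensive Aggregate plan displaced the
	 * MinMaxAgg plan in the regression diff above.
	 */
	if (costcmp == COSTS_EQUAL &&
		path1->parallel_safe != path2->parallel_safe)
		keep_path1 = path1->parallel_safe;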
[HACKERS] Re: Should phraseto_tsquery('simple', 'blue blue') @@ to_tsvector('simple', 'blue') be true ?
On Wed, Jun 15, 2016 at 11:08:54AM -0400, Noah Misch wrote:
> On Wed, Jun 15, 2016 at 03:02:15PM +0300, Teodor Sigaev wrote:
> > On Wed, Jun 15, 2016 at 02:54:33AM -0400, Noah Misch wrote:
> > > On Mon, Jun 13, 2016 at 10:44:06PM -0400, Noah Misch wrote:
> > > > On Fri, Jun 10, 2016 at 03:10:40AM -0400, Noah Misch wrote:
> > > > > [Action required within 72 hours.  This is a generic notification.]
> > > > >
> > > > > The above-described topic is currently a PostgreSQL 9.6 open item.
> > > > > Teodor, since you committed the patch believed to have created it,
> > > > > you own this open item.  If some other commit is more relevant or
> > > > > if this does not belong as a 9.6 open item, please let us know.
> > > > > Otherwise, please observe the policy on open item ownership[1] and
> > > > > send a status update within 72 hours of this message.  Include a
> > > > > date for your subsequent status update.  Testers may discover new
> > > > > open items at any time, and I want to plan to get them all fixed
> > > > > well in advance of shipping 9.6rc1.  Consequently, I will
> > > > > appreciate your efforts toward speedy resolution.  Thanks.
> > > > >
> > > > > [1] http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com
> > > >
> > > > This PostgreSQL 9.6 open item is past due for your status update.
> > > > Kindly send a status update within 24 hours, and include a date for
> > > > your subsequent status update.  Refer to the policy on open item
> > > > ownership:
> > > > http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com
> > >
> > > IMMEDIATE ATTENTION REQUIRED.  This PostgreSQL 9.6 open item is long
> > > past due for your status update.  Please reacquaint yourself with the
> > > policy on open item ownership[1] and then reply immediately.  If I do
> > > not hear from you by 2016-06-16 07:00 UTC, I will transfer this item
> > > to release management team ownership without further notice.
> > >
> > > [1] http://www.postgresql.org/message-id/20160527025039.ga447...@tornado.leadboat.com
> >
> > I'm working on it right now.
>
> That is good news, but it is not a valid status update.  In particular,
> it does not specify a date for your next update.

You still have not delivered the status update due thirteen days ago.  If
I do not hear a fully-conforming status update from you by 2016-06-28
03:00 UTC, or if this item ever again becomes overdue for a status update,
I will transfer the item to release management team ownership.
Re: [HACKERS] Parallelized polymorphic aggs, and aggtype vs aggoutputtype
On Thu, Jun 23, 2016 at 10:57:26AM -0400, Tom Lane wrote:
> David Rowley writes:
> > On 23 June 2016 at 11:22, Tom Lane wrote:
> > > The behavior that I'd expect (and that I documented) for a
> > > deserialization function is that it just allocates its result in the
> > > current, short-lived memory context, since it will be the combine
> > > function's responsibility to merge that into the long-lived transition
> > > state.  But it looks to me like the deserialization functions in
> > > numeric.c are allocating their results in the aggregate context, which
> > > will mean a leak.  (For example, numeric_avg_deserialize creates its
> > > result using makeNumericAggState which forces the result into the agg
> > > context.)
> >
> > Yes, you're right.
> >
> > In the end I decided to add a makeNumericAggStateCurrentContext()
> > function which does not perform any memory context switching at all.
> > It seems like this can be used for the combine functions too, since
> > they've already switched to the aggregate memory context.  This should
> > save a few cycles during aggregate combine, and not expend any extra
> > as some alternatives, like adding a flag to makeNumericAggState().
>
> You missed the ones using makePolyNumAggState --- I fixed that and
> pushed it.

What, if anything, is yet required to close this 9.6 open item?
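To make the memory-context distinction concrete, the difference between
the two constructors is roughly as follows (a sketch based on the
discussion above; the real numeric.c functions initialize more fields):

/* Allocates in the long-lived aggregate context: appropriate for
 * transition functions, but wrong for deserialization functions, whose
 * results are merged into the transition state and then discarded. */
static NumericAggState *
makeNumericAggState(FunctionCallInfo fcinfo, bool calcSumX2)
{
	NumericAggState *state;
	MemoryContext agg_context;
	MemoryContext old_context;

	if (!AggCheckCallContext(fcinfo, &agg_context))
		elog(ERROR, "aggregate function called in non-aggregate context");

	old_context = MemoryContextSwitchTo(agg_context);
	state = (NumericAggState *) palloc0(sizeof(NumericAggState));
	state->calcSumX2 = calcSumX2;
	MemoryContextSwitchTo(old_context);

	return state;
}

/* Like the above, but allocates in whatever context is current: right
 * for deserialization functions, and also usable from combine functions,
 * which have already switched into the aggregate context. */
static NumericAggState *
makeNumericAggStateCurrentContext(FunctionCallInfo fcinfo, bool calcSumX2)
{
	NumericAggState *state;

	state = (NumericAggState *) palloc0(sizeof(NumericAggState));
	state->calcSumX2 = calcSumX2;

	return state;
}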
Re: [HACKERS] Rethinking representation of partial-aggregate steps
On 27 June 2016 at 03:36, Tom Lane wrote:
> After having worked on the patch some, I think that AggSplit is a pretty
> good choice, because it is about how we split up the calculation of an
> aggregate.  And it's short, which is a good thing for something that's
> going to be a component of assorted names.

I wish to thank you for working hard to improve this area of the code.

I had a read over your changes and I think there's quite a large problem
with your code which really needs some more thinking: I can't help but
wonder how you plan to allow future expansion of non-serialized partial
aggregates, given that in setrefs.c you're making a hard assumption that
mark_partial_aggref() should always receive the SERIAL versions of the
aggsplit. This is outright wrong, as the Agg node may not be a serialized
aggregate node at all.

The attached very rough patch hacks together some code which has the
planner generate some partial aggregates which are not serialized.
Here's what I get:

CREATE AGGREGATE myavg (numeric)
(
    stype = internal,
    sfunc = numeric_avg_accum,
    finalfunc = numeric_avg,
    combinefunc = numeric_avg_combine
);

create table a (num numeric not null);
insert into a select generate_series(1,10);

select myavg(num) from a;
ERROR:  invalid memory alloc request size 18446744072025250716

This is down to the aggtype being set to BYTEA regardless of the fact
that it's not a serialized aggregate. Passing the non-serial versions of
the enum to mark_partial_aggref() makes this case work, but that's
certainly not the fix.

David

--
David Rowley  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

aggtest.patch
Description: Binary data
Re: [HACKERS] Reviewing freeze map code
On Tue, Jun 21, 2016 at 10:59:25AM +1200, Thomas Munro wrote:
> On Fri, Jun 17, 2016 at 3:36 PM, Noah Misch wrote:
> > I agree the non-atomic, unlogged change is a problem.  A related threat
> > doesn't require a torn page:
> >
> >   AssignTransactionId() xid=123
> >   heapam.c:3931 HeapTupleHeaderSetXmax(oldtup.t_data, 123);
> >   some ERROR before heap_update() finishes
> >   rollback;  -- xid=123
> >   some backend flushes the modified page
> >   immediate shutdown
> >   AssignTransactionId() xid=123
> >   commit;  -- xid=123
> >
> > If nothing wrote an xlog record that witnesses xid 123, the cluster can
> > reassign it after recovery.  The failed update is now considered a
> > successful update, and the row improperly becomes dead.  That's
> > important.
>
> I wonder if that was originally supposed to be handled with the
> HEAP_XMAX_UNLOGGED flag which was removed in 11919160.  A comment in
> the heap WAL logging commit f2bfe8a2 said that tqual routines would
> see the HEAP_XMAX_UNLOGGED flag in the event of a crash before logging
> (though I'm not sure if the tqual routines ever actually did that).

HEAP_XMAX_UNLOGGED does appear to have originated in contemplation of
this same hazard.  Looking at the three commits in "git log -S
HEAP_XMAX_UNLOGGED" (f2bfe8a b58c041 1191916), nothing ever completed the
implementation by testing for that flag.
Re: [HACKERS] Rethinking representation of partial-aggregate steps
On 27 June 2016 at 08:28, Tom Lane wrote:
> David Rowley writes:
> > On 27 June 2016 at 03:36, Tom Lane wrote:
> > > Looking at this in the light of morning, I'm rather strongly tempted
> > > to invert the sense of the FINALIZE option, so that "simple" mode
> > > works out as zero, ie, select no options.  Maybe call it SKIPFINAL
> > > instead of FINALIZE?
>
> > Aggref calls this aggpartial, and I was tempted to invert that many
> > times and make it aggfinalize, but in the end didn't.
> > It seems nicer to me to keep it as a list of things that are done,
> > rather than to make one exception to that just so we can have the
> > simple mode as 0.
>
> [ shrug... ]  I do not buy that argument, because it doesn't justify
> the COMBINE option: why shouldn't that be inverted, ie USEFINALFUNC?
> The only way to decide that except by fiat is to say that we're
> enumerating the non-default or non-simple-mode behaviors.

Hmm, ok. I guess what exists today is there because I tried to make the
options "on" for additional tasks which the aggregate must perform.
Whether to use the transfunc or the combinefunc is more of an alternative
than an addition, so I let what is standard tie-break the decision on
which way around to have that flag.

So perhaps you're right, and we should just normalise the flags to the
standard aggregate behaviour to keep things consistent.

--
David Rowley  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] parallel workers and client encoding
On Mon, Jun 13, 2016 at 9:39 AM, Robert Haas wrote:
> There is no realistic way that I am going to have this fixed before
> beta2.  There are currently 10 open items listed on the open items
> page, of which 8 relate to my commits and 5 to parallel query in
> particular.  I am not going to get 8 issues fixed in the next 4 days,
> or even the next 6 days if you assume I can work through the weekend
> on this (and that it would be desirable that I be slinging fixes into
> the tree just before the wrap, which seems doubtful).  Furthermore, of
> those issues, I judge this to be least important (except for the
> documentation update, but that's pending further input from Peter
> Geoghegan about exactly what he thinks needs to be changed).

I got sidetracked on that, in part due to investigating a bug in the 9.6
external sort work.  I'll have more information on that, soon.

--
Peter Geoghegan
Re: [HACKERS] Rethinking representation of partial-aggregate steps
I wrote:
> [ shrug... ]  I do not buy that argument, because it doesn't justify
> the COMBINE option: why shouldn't that be inverted, ie USEFINALFUNC?

Sorry, I meant USETRANSFUNC of course.

			regards, tom lane
Re: [HACKERS] Rethinking representation of partial-aggregate steps
David Rowley writes:
> On 27 June 2016 at 03:36, Tom Lane wrote:
> > Looking at this in the light of morning, I'm rather strongly tempted to
> > invert the sense of the FINALIZE option, so that "simple" mode works out
> > as zero, ie, select no options.  Maybe call it SKIPFINAL instead of
> > FINALIZE?

> Aggref calls this aggpartial, and I was tempted to invert that many
> times and make it aggfinalize, but in the end didn't.
> It seems nicer to me to keep it as a list of things that are done,
> rather than to make one exception to that just so we can have the
> simple mode as 0.

[ shrug... ]  I do not buy that argument, because it doesn't justify
the COMBINE option: why shouldn't that be inverted, ie USEFINALFUNC?
The only way to decide that except by fiat is to say that we're
enumerating the non-default or non-simple-mode behaviors.

			regards, tom lane
[HACKERS] Non-text EXPLAIN output for partial aggregation
I noticed that the EXPLAIN code is set up so that in non-text output
modes, you get output like this for partial-aggregate plans:

         "Node Type": "Aggregate",    +
         "Strategy": "Plain",         +
         "Operation": "Finalize",     +
 ...
         "Node Type": "Aggregate",    +
         "Strategy": "Plain",         +
         "Operation": "Partial",      +

That is, the "Operation" field has been commandeered to indicate
partial-aggregation cases.  This seems like a pretty bad idea to me, for
two reasons:

1. In other plan node types, "Operation" refers to SQL-visible semantics,
in fact always Select/Insert/Update/Delete.  Re-using it for an
implementation detail doesn't seem very consistent.

2. As coded, the field is not printed at all for a non-partial aggregate
node.  This is just wrong.  A given node type should have a fixed set of
attributes.

I think we should use some other field name, maybe "Step" or
"PartialMode", and have "Simple" or "Plain" as the default field
contents.

It's also arguable that this field should distinguish all the values of
the AggSplit enum we just invented, though I'm not sure how important it
is to report serialization options.

I'm not wedded to any particular ideas here, other than that omitting
the field in the simple case is bad.  Comments, naming ideas?

			regards, tom lane
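For concreteness, under the renaming suggested above the same two nodes
might come out something like this (an illustration of the proposal
using one of the suggested field names, not output from any committed
code), with the new field also appearing, as "Simple", on ordinary
aggregate nodes:

         "Node Type": "Aggregate",    +
         "Strategy": "Plain",         +
         "Partial Mode": "Finalize",  +
 ...
         "Node Type": "Aggregate",    +
         "Strategy": "Plain",         +
         "Partial Mode": "Partial",   +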
Re: [HACKERS] Rethinking representation of partial-aggregate steps
On 27 June 2016 at 03:36, Tom Lane wrote:
> Looking at this in the light of morning, I'm rather strongly tempted to
> invert the sense of the FINALIZE option, so that "simple" mode works out
> as zero, ie, select no options.  Maybe call it SKIPFINAL instead of
> FINALIZE?

Aggref calls this aggpartial, and I was tempted to invert that many
times and make it aggfinalize, but in the end didn't.
It seems nicer to me to keep it as a list of things that are done,
rather than to make one exception to that just so we can have the
simple mode as 0.

--
David Rowley  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] Memory leak in Pl/Python
Andrey Zhidenkov writes:
> It's very strange, but when I use the expression 'update test set test =
> 'test' where id = 1' as the argument of plpy.execute(), memory does not
> grow at all...

Well, that suggests it's not particularly plpython's fault at all, but a
leak somewhere in the table update.  You still haven't provided a
self-contained test case, and this bit of information strongly suggests
that something about the test table is critical to reproducing the
problem.

			regards, tom lane
Re: [HACKERS] Memory leak in Pl/Python
It's very strange, but when I use the expression 'update test set test =
'test' where id = 1' as the argument of plpy.execute(), memory does not
grow at all...

On Sun, Jun 26, 2016 at 9:05 PM, Andrey Zhidenkov wrote:
> Thank you for your answer, Tom.
>
> I've tried the code in your example and I still see ever-growing memory
> consumption (1MB per second).  As before, I do not see memory growth if
> I use a 'select 1' query as the argument of plpy.execute().  Table test
> does not have any triggers or foreign keys; I just created it with the
> script you provided.
>
> I think this is a leak because my system starts using the swap file,
> and finally the OOM killer kills one of the database processes and the
> database goes into recovery mode.  That is the problem...
>
> On Sat, Jun 25, 2016 at 6:15 PM, Tom Lane wrote:
> > Andrey Zhidenkov writes:
> > > I see memory consumption in htop and pg_activity tools.
> >
> > "top" can be pretty misleading if you don't know how to interpret its
> > output, specifically that you have to discount whatever it shows as
> > SHR space.  That just represents the amount of the shared memory block
> > that this process has touched so far in its lifetime; even if it
> > appears to be growing, it's not a leak.  That growth will stop
> > eventually, once the process has touched every available shared
> > buffer.  RES minus SHR is a fairer estimate of the process's own
> > memory consumption.
> >
> > I tried to reduce your example to a self-contained test case, thus:
> >
> > create extension if not exists plpythonu;
> > create table test (test text);
> > create or replace
> > function test() returns bigint as $$
> > plpy.execute("insert into test(test) values ('test')")
> > return 1
> > $$ language plpythonu;
> > do $$
> > begin
> >   for i in 1..1000 loop
> >     perform test();
> >   end loop;
> > end;
> > $$;
> >
> > I do not see any significant leakage with this example.  There is some
> > memory growth, approximately 4 bytes per plpy.execute(), due to having
> > to keep track of a subtransaction XID for each uncommitted
> > subtransaction.  That's not plpython's fault --- it would happen with
> > any PL that executes each SQL command as a separate subtransaction,
> > which is probably all of them other than plpgsql.  And it really ought
> > to be negligible anyway in any sane usage.
> >
> > It's possible that you're seeing some other, larger memory
> > consumption; for instance, if there were triggers or foreign keys on
> > the "test" table then perhaps there would be an opportunity for
> > leakage in those.  But without a self-contained test case or any
> > indication of the rate of leakage you're seeing, it's hard to guess
> > about the problem.
> >
> > 			regards, tom lane
>
> --
> Andrey Zhidenkov / Database developer
> +79265617190 / andrey.zhiden...@gmail.com

--
Andrey Zhidenkov / Database developer
+79265617190 / andrey.zhiden...@gmail.com
Re: [HACKERS] Memory leak in Pl/Python
Thank you for your answer, Tom.

I've tried the code in your example and I still see ever-growing memory
consumption (1MB per second).  As before, I do not see memory growth if I
use a 'select 1' query as the argument of plpy.execute().  Table test
does not have any triggers or foreign keys; I just created it with the
script you provided.

I think this is a leak because my system starts using the swap file, and
finally the OOM killer kills one of the database processes and the
database goes into recovery mode.  That is the problem...

On Sat, Jun 25, 2016 at 6:15 PM, Tom Lane wrote:
> Andrey Zhidenkov writes:
> > I see memory consumption in htop and pg_activity tools.
>
> "top" can be pretty misleading if you don't know how to interpret its
> output, specifically that you have to discount whatever it shows as
> SHR space.  That just represents the amount of the shared memory block
> that this process has touched so far in its lifetime; even if it appears
> to be growing, it's not a leak.  That growth will stop eventually, once
> the process has touched every available shared buffer.  RES minus SHR
> is a fairer estimate of the process's own memory consumption.
>
> I tried to reduce your example to a self-contained test case, thus:
>
> create extension if not exists plpythonu;
> create table test (test text);
> create or replace
> function test() returns bigint as $$
> plpy.execute("insert into test(test) values ('test')")
> return 1
> $$ language plpythonu;
> do $$
> begin
>   for i in 1..1000 loop
>     perform test();
>   end loop;
> end;
> $$;
>
> I do not see any significant leakage with this example.  There is some
> memory growth, approximately 4 bytes per plpy.execute(), due to having to
> keep track of a subtransaction XID for each uncommitted subtransaction.
> That's not plpython's fault --- it would happen with any PL that executes
> each SQL command as a separate subtransaction, which is probably all of
> them other than plpgsql.  And it really ought to be negligible anyway in
> any sane usage.
>
> It's possible that you're seeing some other, larger memory consumption;
> for instance, if there were triggers or foreign keys on the "test" table
> then perhaps there would be an opportunity for leakage in those.
> But without a self-contained test case or any indication of the rate
> of leakage you're seeing, it's hard to guess about the problem.
>
> 			regards, tom lane

--
Andrey Zhidenkov / Database developer
+79265617190 / andrey.zhiden...@gmail.com
Re: [HACKERS] Rethinking representation of partial-aggregate steps
David Rowley writes:
> On 26 June 2016 at 04:07, Tom Lane wrote:
> > After a bit of thought, maybe AggDivision or AggSplit or something
> > along those lines?

> How about AggCompletion? It seems to fit well in the sense of the
> aggregation being partial or not, but less well when you consider
> serialisation and combining states.

After having worked on the patch some, I think that AggSplit is a pretty
good choice, because it is about how we split up the calculation of an
aggregate.  And it's short, which is a good thing for something that's
going to be a component of assorted names.

What I've got at the moment looks like:

/* Primitive options supported by nodeAgg.c: */
#define AGGSPLITOP_COMBINE		0x1	/* substitute combinefn for transfn */
#define AGGSPLITOP_SERIALIZE	0x2	/* apply serializefn to output */
#define AGGSPLITOP_DESERIALIZE	0x4	/* apply deserializefn to input */
#define AGGSPLITOP_FINALIZE		0x8	/* run finalfn */

/* Supported operating modes (i.e., useful combinations of these options): */
typedef enum AggSplit
{
	/* Basic, non-split aggregation: */
	AGGSPLIT_SIMPLE = AGGSPLITOP_FINALIZE,
	/* Initial phase of partial aggregation, with serialization: */
	AGGSPLIT_PARTIAL_SERIAL = AGGSPLITOP_SERIALIZE,
	/* Final phase of partial aggregation, with deserialization: */
	AGGSPLIT_FINAL_DESERIAL = AGGSPLITOP_COMBINE | AGGSPLITOP_DESERIALIZE |
		AGGSPLITOP_FINALIZE
} AggSplit;

/* Test macros for the primitive options: */
#define DO_AGGSPLIT_COMBINE(as)		(((as) & AGGSPLITOP_COMBINE) != 0)
#define DO_AGGSPLIT_SERIALIZE(as)	(((as) & AGGSPLITOP_SERIALIZE) != 0)
#define DO_AGGSPLIT_DESERIALIZE(as)	(((as) & AGGSPLITOP_DESERIALIZE) != 0)
#define DO_AGGSPLIT_FINALIZE(as)	(((as) & AGGSPLITOP_FINALIZE) != 0)

Looking at this in the light of morning, I'm rather strongly tempted to
invert the sense of the FINALIZE option, so that "simple" mode works out
as zero, ie, select no options.  Maybe call it SKIPFINAL instead of
FINALIZE?

			regards, tom lane
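For clarity, the inversion being floated in the last paragraph would look
roughly like this (an illustration of the suggestion, not settled code --
whether to do it is exactly what the rest of this thread debates):

/* Primitive options supported by nodeAgg.c: */
#define AGGSPLITOP_COMBINE		0x1	/* substitute combinefn for transfn */
#define AGGSPLITOP_SKIPFINAL	0x2	/* skip finalfn, return state as-is */
#define AGGSPLITOP_SERIALIZE	0x4	/* apply serializefn to output */
#define AGGSPLITOP_DESERIALIZE	0x8	/* apply deserializefn to input */

typedef enum AggSplit
{
	/* Basic, non-split aggregation: no options selected */
	AGGSPLIT_SIMPLE = 0,
	/* Initial phase of partial aggregation, with serialization: */
	AGGSPLIT_PARTIAL_SERIAL = AGGSPLITOP_SKIPFINAL | AGGSPLITOP_SERIALIZE,
	/* Final phase of partial aggregation, with deserialization: */
	AGGSPLIT_FINAL_DESERIAL = AGGSPLITOP_COMBINE | AGGSPLITOP_DESERIALIZE
} AggSplit;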
Re: [HACKERS] Rename max_parallel_degree?
On 26/06/2016 08:37, Amit Kapila wrote:
> On Sat, Jun 25, 2016 at 2:27 PM, Julien Rouhaud wrote:
> >
> > It's better, thanks.  Should we document somewhere the link between
> > this parameter and custom dynamic background workers, or is it pretty
> > self-explanatory?
> >
>
> How about if we add an additional line like:
> Parallel workers are taken from the pool of processes established by
> guc-max-worker-processes.
>
> I think one might feel some duplication of text between this and what
> we have for max_parallel_workers_per_gather, but it seems genuine to
> me.
>

Ok, I'll add it.

>
> @@ -370,6 +379,8 @@ ForgetBackgroundWorker(slist_mutable_iter *cur)
>   Assert(rw->rw_shmem_slot < max_worker_processes);
>   slot = &BackgroundWorkerData->slot[rw->rw_shmem_slot];
>   slot->in_use = false;
> + if (slot->parallel)
> +     BackgroundWorkerData->parallel_terminate_count++;
>
> I think operations on parallel_terminate_count are not safe.
> ForgetBackgroundWorker() and RegisterDynamicBackgroundWorker() can try
> to read and write at the same time.  It seems you need to use atomic
> operations to ensure safety.
>

But operations on parallel_terminate_count are done by the postmaster,
and per Robert's previous email the postmaster can't use atomics:

> We can't have the postmaster and the
> backends share the same counter, because then it would need locking,
> and the postmaster can't try to take spinlocks - can't even use
> atomics, because those might be emulated using spinlocks.

Do we support platforms on which 32-bit operations are not atomic?

--
Julien Rouhaud
http://dalibo.com - http://dalibo.org
Re: [HACKERS] A couple of cosmetic changes around shared memory code
On 2016-05-16 21:40, Piotr Stefaniak wrote:
> Hello,
>
> while investigating the shm_mq code and its testing module I made some
> cosmetic improvements there.  You can see them in the attached diff
> file.

Revised patch attached.

commit 3ff1afa84e4bc22f153c876e2f0588327a8a004e
Author: Piotr Stefaniak
Date:   Thu Apr 28 18:36:16 2016 +0200

    Cosmetic improvements around shm_mq and test_shm_mq.

diff --git a/src/backend/storage/ipc/shm_mq.c b/src/backend/storage/ipc/shm_mq.c
index cc1f04d..64a5475 100644
--- a/src/backend/storage/ipc/shm_mq.c
+++ b/src/backend/storage/ipc/shm_mq.c
@@ -103,7 +103,7 @@ struct shm_mq
  * locally by copying the chunks into a backend-local buffer.  mqh_buffer is
  * the buffer, and mqh_buflen is the number of bytes allocated for it.
  *
- * mqh_partial_message_bytes, mqh_expected_bytes, and mqh_length_word_complete
+ * mqh_partial_bytes, mqh_expected_bytes, and mqh_length_word_complete
  * are used to track the state of non-blocking operations.  When the caller
  * attempts a non-blocking operation that returns SHM_MQ_WOULD_BLOCK, they
  * are expected to retry the call at a later time with the same argument;
diff --git a/src/test/modules/test_shm_mq/setup.c b/src/test/modules/test_shm_mq/setup.c
index 5bd2820..a0f3962 100644
--- a/src/test/modules/test_shm_mq/setup.c
+++ b/src/test/modules/test_shm_mq/setup.c
@@ -125,7 +125,7 @@ setup_dynamic_shared_memory(int64 queue_size, int nworkers,
 	segsize = shm_toc_estimate(&e);
 
 	/* Create the shared memory segment and establish a table of contents. */
-	seg = dsm_create(shm_toc_estimate(&e), 0);
+	seg = dsm_create(segsize, 0);
 	toc = shm_toc_create(PG_TEST_SHM_MQ_MAGIC, dsm_segment_address(seg),
 						 segsize);
diff --git a/src/test/modules/test_shm_mq/worker.c b/src/test/modules/test_shm_mq/worker.c
index 638649b..a94414a 100644
--- a/src/test/modules/test_shm_mq/worker.c
+++ b/src/test/modules/test_shm_mq/worker.c
@@ -139,7 +139,7 @@ test_shm_mq_main(Datum main_arg)
 	 * we can go ahead and exit.
 	 */
 	dsm_detach(seg);
-	proc_exit(1);
+	proc_exit(0);
 }
 
 /*
Re: [HACKERS] pg_bsd_indent - improvements around offsetof and sizeof
On 2016-05-27 08:13, Piotr Stefaniak wrote:
> I'm trying to see if FreeBSD indent can successfully do pg_bsd_indent's
> job.  So far I had to fix one thing, which is not adding a space after
> a cast operator, for which they added no option to turn it off.
> Currently I'm fighting one other bug, but I think that'll be it.

So... after fixing 12 times more bugs than I had anticipated (see the
list at the end of this email; also see attached patches.tgz if you want
to apply the patches yourself), my "fork" of FreeBSD indent(1) can do
pg_bsd_indent's job if you pass it three additional parameters (-nut
-cli1 -sac), producing a 6600-line unified diff, mostly of desired
changes (see freebsd-indent.diff.gz for details).

I'm in the process of pushing my changes upstream, but I was already
told that it's too late to get them into 11.0-RELEASE.  Personally, I
don't mind that, hoping that the upstream will accept them eventually.

I'm also hoping it'll be easier to reinvent GNU indent's -tsn ("set
tabsize to n spaces") option for FreeBSD indent than it would be for any
other of the forks that aren't GNU.  I envision that to be the first
step to getting rid of some of the work-arounds pgindent does, mainly
running entab and detab as pre- and post-processing steps.  That and
more I'll probably do later.

If porting FreeBSD indent to PostgreSQL's sources turns out to be
successful, there will be a choice between rebasing pg_bsd_indent on
that and picking specific patches and applying them to PG's fork of
indent(1).  At this point I think it wouldn't make any sense to port any
changes to the current pg_bsd_indent.

The full list of changes I made to FreeBSD's indent(1) as of r289677:

[bugfix] Fix typo in keyword "typedef".
[bugfix] Avoid out of bound access of array codebuf pointed into by s_code.
[bugfix] Avoid out of bound access of array ps.paren_indents.
[bugfix] Avoid out of bound access of array in_buffer.
[bugfix] Avoid potential use-after-free.
[bugfix] Semicolons inside struct declarations don't end the declarations.
[bugfix] Support "f" and "F" floating constant suffixes.
[bugfix] Removed whitespace shouldn't be considered in column calculations.
[bugfix] Don't add surplus space on empty lines in comments.
[bugfix] Bail out if there's no more space on the parser stack.
[bugfix] Consistently indent declarations.
[bugfix] Don't ignore the fact that offsetof is a keyword.
[cleanup] Make definition of opchar conditional.
[cleanup] Remove dead code relating to unix-style comments.
[cleanup] Simplify pr_comment().
[cleanup] Deduplicate code that decided if a comment needs a blank line at the top.
[bugfix] Fix wrapping of some lines in comments.
[bugfix] Untangle the connection between pr_comment.c and io.c, fixing at least two bugs.
[feature] Add -sac (space after cast) and -nsac options.
[bugfix] Don't newline on cpp lines like #endif unless -bacc is on.
[feature] Add -U option for providing a file containing a list of types.
[optimization] Use bsearch() for looking up type keywords.

freebsd-indent.diff.gz
Description: application/gzip

patches.tgz
Description: application/compressed-tar