Re: [HACKERS] Optimizing away second VACUUM heap scan when only BRIN indexes on table

2015-12-20 Thread Jim Nasby

On 11/23/15 5:06 PM, Peter Geoghegan wrote:

I realize that the second scan performed by lazy_vacuum_heap() only
visits those pages known to contain dead tuples. However, the
experience of seeing problems with the random sampling of ANALYZE
makes me think that that might not be very helpful. There is no good
reason to think that there won't be a uniform distribution of dead
tuples across the heap, and so only visiting pages known to contain
dead tuples might be surprisingly little help even when there are
relatively few VACUUM-able tuples in the table.


Even worse is if you can't fit all the dead TIDs in memory and have to 
do multiple passes for no reason...



Has any thought been given to how we could make VACUUM avoid a second
heap scan iff there are only BRIN indexes, without compromising
anything else? In other words, by killing heap TIDs *before* any
"VACUUM" of BRIN index(es) occurs, avoiding a call to
lazy_vacuum_heap(), just as when there are no indexes on the table
whatsoever?


ISTM the big question here is how vacuum would know it can skip this 
since we wouldn't want to hard-code this for BRIN.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Optimizing away second VACUUM heap scan when only BRIN indexes on table

2015-12-20 Thread Alvaro Herrera
Jim Nasby wrote:
> On 11/23/15 5:06 PM, Peter Geoghegan wrote:
> >I realize that the second scan performed by lazy_vacuum_heap() only
> >visits those pages known to contain dead tuples. However, the
> >experience of seeing problems with the random sampling of ANALYZE
> >makes me think that that might not be very helpful. There is no good
> >reason to think that there won't be a uniform distribution of dead
> >tuples across the heap, and so only visiting pages known to contain
> >dead tuples might be surprisingly little help even when there are
> >relatively few VACUUM-able tuples in the table.
> 
> Even worse is if you can't fit all the dead TIDs in memory and have to do
> multiple passes for no reason...

Since BRIN indexes cannot be primary keys nor unique keys, it's hard to
be convinced that the use case of a table with only BRIN indexes is
terribly interesting.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] psql - -dry-run option

2015-12-20 Thread Jim Nasby

On 12/18/15 2:50 AM, Shulgin, Oleksandr wrote:

On Thu, Dec 17, 2015 at 9:13 PM, Tom Lane wrote:


Whether we really need a feature like that isn't clear though; it's not
like it's hard to test things that way now.  Stick in a BEGIN with no
COMMIT, you're there.  The problem only comes in if you start expecting
the behavior to be bulletproof.  Maybe I'm being too pessimistic about
what people would believe a --dry-run switch to be good for ... but
I doubt it.


I'm of the same mind: BEGIN/ROLLBACK requires trivial effort and a
--dry-run option might give a false sense of security, but it cannot
possibly roll back side effects of user functions which modify the
filesystem or interact with the outside world in some other way.


The issue with that is if you're \i'ing files in and one of those 
happens to contain a COMMIT, you're hosed. I can see some use for a 
"must rollback" mode of BEGIN.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] plpgsql - DECLARE - cannot to use %TYPE or %ROWTYPE for composite types

2015-12-20 Thread Jim Nasby

On 10/30/15 6:01 AM, Pavel Stehule wrote:

I am sending a patch that enables references to polymorphic
parameters of row types. It also makes it possible to get the
array or element type of a referenced variable. It removes some gaps when
polymorphic parameters are used.


Did this make it into a commitfest?
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] [PoC] Asynchronous execution again (which is not parallel)

2015-12-20 Thread Kyotaro HORIGUCHI
Thank you for the comment.

At Tue, 15 Dec 2015 21:01:27 -0500, Robert Haas wrote:
> On Mon, Dec 14, 2015 at 3:34 AM, Kyotaro HORIGUCHI wrote:
> > Yes, the most significant and obvious (but hard to estimate the
> > benefit) target of async execution is (Merge)Append-ForeignScan,
> > which is narrow but frequently used.  And this patch started
> > from it.
> >
> > It is because of the startup-heavy nature of FDWs. So I added
> > sort as a target later, then redesigned to give the ability to all
> > nodes.  If it is clearly overdone for the (currently) expected
> > benefit and it is preferable to shrink this patch so as to
> > touch only the portions where async execution has a benefit, I'll do
> > so.
> 
> Suppose we equip each EState with the ability to fire "callbacks".
> Callbacks have the signature:
> 
> typedef bool (*ExecCallback)(PlanState *planstate, TupleTableSlot
> *slot, void *context);
> 
> Executor nodes can register immediate callbacks to be run at the
> earliest possible opportunity using a function like
> ExecRegisterCallback(estate, callback, planstate, slot, context).
> They can register deferred callbacks that will be called when a file
> descriptor becomes ready for I/O, or when the process latch is set,
> using a call like ExecRegisterFileCallback(estate, fd, event,
> callback, planstate, slot, context) or
> ExecRegisterLatchCallback(estate, callback, planstate, slot, context).
> 
> To execute callbacks, an executor node can call ExecFireCallbacks(),
> which will fire immediate callbacks in order of registration, and wait
> for the file descriptors for which callbacks have been registered and
> for the process latch when no immediate callbacks remain but there are
> still deferred callbacks.  It will return when (1) there are no
> remaining immediate or deferred callbacks or (2) one of the callbacks
> returns "true".

Excellent! I unconsciously excluded the case of callbacks because
I supposed (without firm grounds) that all executor nodes could have a
chance to win from this. Such a callback is a good choice to do
what Start*Node did in the latest patch.

> Then, suppose we add a function bool ExecStartAsync(PlanState *target,
> ExecCallback callback, PlanState *cb_planstate, void *cb_context).
> For non-async-aware plan nodes, this just returns false.  async-aware
> plan nodes should initiate some work, register some callbacks, and
> return.  The callback that gets registered should arrange in turn to
> register the callback passed as an argument when a tuple becomes
> available, passing the planstate and context provided by
> ExecStartAsync's caller, plus the TupleTableSlot containing the tuple.

Although I don't have a clear picture of the case of
async-aware nodes under non-aware nodes, it seems to have a high
affinity with a (true) parallel execution framework.

> So, in response to ExecStartAsync, if there's no tuple currently
> available, postgres_fdw can send a query to the remote server and
> request a callback when the fd becomes read-ready.  It must save the
> callback passed to ExecStartAsync inside the PlanState someplace so
> that when a tuple becomes available it can register that callback.
> 
> ExecAppend can call ExecStartAsync on each of its subplans.  For any
> subplan where ExecStartAsync returns false, ExecAppend will just
> execute it normally, by calling ExecProcNode repeatedly until no more
> tuples are returned.  But for async-capable subplans, it can call
> ExecStartAsync on all of them, and then call ExecFireCallbacks.  The
> tuple-ready callback it passes to its child plans will take the tuple
> provided by the child plan and store it into the Append node's slot.
> It will then return true if, and only if, ExecFireCallbacks is being
> invoked from ExecAppend (which it can figure out via some kind of
> signalling either through its own PlanState or centralized signalling
> through the EState).  That way, if ExecAppend were itself invoked
> asynchronously, its tuple-ready callback could simply populate a slot
> appropriately and register its invoker's tuple-ready callback.  Whether
> called synchronously or asynchronously, each invocation of an
> asynchronous append after the first would just need to again call
> ExecStartAsync on the child that last returned a tuple.

Thanks for the attentive explanation. My concern about this is
the latency of synchronizing one by one for every tuple
between the producer and the consumer. My previous patch is not
asynchronous on every tuple, so it can give a pure gain without
loss from tuple-wise synchronization. But this looks clean and I
like it, so I'll consider it.

> It seems pretty straightforward to fit Gather into this infrastructure.

Yes.

> It is unclear to me how useful this is beyond ForeignScan, Gather, and
> Append.  MergeAppend's ordering constraint makes it less useful; we
> can 

Re: [HACKERS] Optimizing away second VACUUM heap scan when only BRIN indexes on table

2015-12-20 Thread Peter Geoghegan
On Sun, Dec 20, 2015 at 6:14 PM, Alvaro Herrera wrote:
> Since BRIN indexes cannot be primary keys nor unique keys, it's hard to
> be convinced that the use case of a table with only BRIN indexes is
> terribly interesting.

I'm not convinced of that.

-- 
Peter Geoghegan




Re: [HACKERS] psql - -dry-run option

2015-12-20 Thread Craig Ringer
On 18 December 2015 at 16:50, Shulgin, Oleksandr <oleksandr.shul...@zalando.de> wrote:


> I'm of the same mind: BEGIN/ROLLBACK requires trivial effort and a
> --dry-run option might give a false sense of security, but it cannot
> possibly roll back side effects of user functions which modify the
> filesystem or interact with the outside world in some other way.
>

Pretty much. Side effects.

The most glaringly obvious example is nextval(...) and setval(...). You
can't make them have no effect, since your script will then fail to run.
But you can't roll them back either.

Also, anything that touches the file system, like COPY. Untrusted PLs that
can fiddle with the file system. FDWs. All sorts. Oh, and of course psql
commands like \o.

I think this idea is completely unworkable. You might work around the
filesystem access issues with a new attribute (like LEAKPROOF) that asserts
that a function relies strictly on in-database transactional behaviour. But
you're not going to be able to do a dry run with sequence setting. You
could ignore setval and run nextval as normal, but then your dry-run
wouldn't reflect what the real run would do...
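
A minimal demonstration of the sequence behaviour in question (the sequence
name is hypothetical):

    CREATE SEQUENCE s;
    BEGIN;
    SELECT nextval('s');   -- returns 1
    ROLLBACK;
    SELECT nextval('s');   -- returns 2: the increment survived the rollback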

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] plpgsql - DECLARE - cannot to use %TYPE or %ROWTYPE for composite types

2015-12-20 Thread Pavel Stehule
2015-12-21 1:06 GMT+01:00 Jim Nasby:

> On 10/30/15 6:01 AM, Pavel Stehule wrote:
>
>> I am sending a patch that enables references to polymorphic
>> parameters of row types. It also makes it possible to get the
>> array or element type of a referenced variable. It removes some gaps when
>> polymorphic parameters are used.
>>
>
> Did this make it into a commitfest?
>

Yes; it is a relatively trivial, small patch without any side effects or
possible performance issues.

The important (and possibly disputable) part of this patch is the new syntax:

DECLARE
  var othervar%arraytype;
  var othervar%elementtype;

It is consistent with the current state, and doesn't increase the complexity
of the DECLARE part of the plpgsql parser - which was the reason this idea
was rejected 5 years ago (no new reserved keywords are necessary, ...).
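
For illustration, a minimal sketch of how the proposed syntax could be used
(function and variable names hypothetical; this is the patch's proposal, not
committed behaviour):

    CREATE FUNCTION first_elem(arr anyarray) RETURNS anyelement AS $$
    DECLARE
      result arr%elementtype;  -- element type of the polymorphic parameter
    BEGIN
      result := arr[1];
      RETURN result;
    END;
    $$ LANGUAGE plpgsql;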

Regards

Pavel



> --
> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
> Experts in Analytics, Data Architecture and PostgreSQL
> Data in Trouble? Get it in Treble! http://BlueTreble.com
>


Re: [HACKERS] Tab completion for ALTER COLUMN SET STATISTICS

2015-12-20 Thread Jeff Janes
On Tue, Dec 1, 2015 at 10:37 PM, Michael Paquier wrote:
>
> With the refactoring of tab-complete.c that is moving on, perhaps this
> entry should be moved to next CF. Thoughts?

The now-committed tab-complete.c refactoring renders this unnecessary,
so I've marked this patch as rejected.

Thanks,

Jeff




Re: [HACKERS] Tab completion for ALTER COLUMN SET STATISTICS

2015-12-20 Thread Michael Paquier
On Mon, Dec 21, 2015 at 3:34 PM, Jeff Janes wrote:
> On Tue, Dec 1, 2015 at 10:37 PM, Michael Paquier wrote:
>>
>> With the refactoring of tab-complete.c that is moving on, perhaps this
>> entry should be moved to next CF. Thoughts?
>
> The now-committed tab-complete.c refactoring renders this unnecessary,
> so I've marked this patch as rejected.

Thanks for the follow-up.
-- 
Michael




[HACKERS] Threads in PostgreSQL

2015-12-20 Thread sri harsha
Hi,

   Is it possible to use threads in PostgreSQL? I am using threads in my
foreign data wrapper and I get the following error when I use them.

ERROR:  stack depth limit exceeded
HINT:  Increase the configuration parameter "max_stack_depth" (currently
2048kB), after ensuring the platform's stack depth limit is adequate.

No matter how much I increase the stack size, the error keeps occurring.
How do I solve this problem?


Thanks,
Harsha


Re: [HACKERS] Weighted Stats

2015-12-20 Thread Jim Nasby

On 11/2/15 5:46 PM, David Fetter wrote:

I'd like to add weighted statistics to PostgreSQL


Did anything happen with this? If the community isn't interested, ISTM it'd
be good to put this on PGXN.

--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com




Re: [HACKERS] Speed up Clog Access by increasing CLOG buffers

2015-12-20 Thread Amit Kapila
On Fri, Dec 18, 2015 at 9:58 PM, Robert Haas wrote:
>
> On Fri, Dec 18, 2015 at 1:16 AM, Amit Kapila wrote:
>
> >> Some random comments:
> >>
> >> - TransactionGroupUpdateXidStatus could do just as well without
> >> add_proc_to_group.  You could just say if (group_no >= NUM_GROUPS)
> >> break; instead.  Also, I think you could combine the two if statements
> >> inside the loop.  if (nextidx != INVALID_PGPROCNO &&
> >> ProcGlobal->allProcs[nextidx].clogPage == proc->clogPage) break; or
> >> something like that.
> >>

Changed as per suggestion.

> >> - memberXid and memberXidstatus are terrible names.  Member of what?
> >
> > How about changing them to clogGroupMemberXid and
> > clogGroupMemberXidStatus?
>
> What we've currently got for group XID clearing for the ProcArray is
> clearXid, nextClearXidElem, and backendLatestXid.  We should try to
> make these things consistent.  Maybe rename those to
> procArrayGroupMember, procArrayGroupNext, procArrayGroupXid
>

Here procArrayGroupXid sounds like an XID at the group level; how about
procArrayGroupMemberXid?
Find the patch with renamed variables for PGPROC
(rename_pgproc_variables_v1.patch) attached to this mail.

> and then
> start all of these identifiers with clogGroup as you propose.
>

I have changed them accordingly in the attached patch
(group_update_clog_v4.patch) and addressed the other comments you
gave.


With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


rename_pgproc_variables_v1.patch
Description: Binary data


group_update_clog_v4.patch
Description: Binary data



Re: [HACKERS] Making tab-complete.c easier to maintain

2015-12-20 Thread Thomas Munro
On Sun, Dec 20, 2015 at 10:24 AM, Tom Lane wrote:
> I've committed this now with a number of changes, many of them just
> stylistic.

Thanks!  And thanks also to Michael, Kyotaro, Alvaro and Jeff.  +1 for
the suggested further improvements, which I will help out with where I
can.

-- 
Thomas Munro
http://www.enterprisedb.com




Re: [HACKERS] Typo in the comment above heap_prepare_freeze_tuple()

2015-12-20 Thread Amit Langote
On 2015/12/19 2:05, Robert Haas wrote:
> On Fri, Dec 18, 2015 at 1:25 AM, Amit Langote wrote:
>> I think the following may be a typo:
>>
>>   * Caller is responsible for ensuring that no other backend can access the
>>   * storage underlying this tuple, either by holding an exclusive lock on the
>> - * buffer containing it (which is what lazy VACUUM does), or by having it by
>> + * buffer containing it (which is what lazy VACUUM does), or by having it be
>>   * in private storage (which is what CLUSTER and friends do).
>>
>> If so, attached is the patch.
> 
> Committed.

Thanks!

Regards,
Amit







Re: [HACKERS] Threads in PostgreSQL

2015-12-20 Thread Atri Sharma
On Mon, Dec 21, 2015 at 11:51 AM, sri harsha wrote:

> Hi,
>
>    Is it possible to use threads in PostgreSQL? I am using threads in my
> foreign data wrapper and I get the following error when I use them.
>
> ERROR:  stack depth limit exceeded
> HINT:  Increase the configuration parameter "max_stack_depth" (currently
> 2048kB), after ensuring the platform's stack depth limit is adequate.
>
>

PostgreSQL is a process-per-backend model.

Can you elaborate on what you are using the threads for?


Re: [HACKERS] Optimizing away second VACUUM heap scan when only BRIN indexes on table

2015-12-20 Thread David Fetter
On Sun, Dec 20, 2015 at 11:14:46PM -0300, Alvaro Herrera wrote:
> Jim Nasby wrote:
> > On 11/23/15 5:06 PM, Peter Geoghegan wrote:
> > >I realize that the second scan performed by lazy_vacuum_heap() only
> > >visits those pages known to contain dead tuples. However, the
> > >experience of seeing problems with the random sampling of ANALYZE
> > >makes me think that that might not be very helpful. There is no good
> > >reason to think that there won't be a uniform distribution of dead
> > >tuples across the heap, and so only visiting pages known to contain
> > >dead tuples might be surprisingly little help even when there are
> > >relatively few VACUUM-able tuples in the table.
> > 
> > Even worse is if you can't fit all the dead TIDs in memory and have to do
> > multiple passes for no reason...
> 
> Since BRIN indexes cannot be primary keys nor unique keys, it's hard to
> be convinced that the use case of a table with only BRIN indexes is
> terribly interesting.

If you've got high-frequency logs, timestamptz might not operate at
fine enough a grain to form a primary key, but it's just the kind of
thing BRIN is great at narrowing down.
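
For illustration, a minimal sketch of that use case (table and column names
hypothetical):

    CREATE TABLE request_log (
        logged_at  timestamptz NOT NULL,  -- not unique at high ingest rates
        payload    text
    );
    -- the index stays tiny because BRIN stores one summary per page range,
    -- and insertion order correlates well with logged_at
    CREATE INDEX request_log_logged_at_idx
        ON request_log USING brin (logged_at);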

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] Weighted Stats

2015-12-20 Thread David Fetter
On Sun, Dec 20, 2015 at 06:13:33PM -0600, Jim Nasby wrote:
> On 11/2/15 5:46 PM, David Fetter wrote:
> >I'd like to add weighted statistics to PostgreSQL
> 
> Did anything happen with this? If the community isn't interested, ISTM it'd
> be good to put this on PGXN.

I think it's already on PGXN as an extension, and I'll get another
version out early this week; it mostly involves adding some
tests.

I'll do the float8 ones for core this week, too, and unless there's a
really great reason to do more data types on the first pass, it should
be in committable shape.

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] A typo in syncrep.c

2015-12-20 Thread Kyotaro HORIGUCHI
Hello,

At Fri, 18 Dec 2015 12:44:34 -0500, Robert Haas wrote:
> On Wed, Dec 16, 2015 at 3:33 AM, Kyotaro HORIGUCHI wrote:
> > Hello, I think I found a typo in a comment of syncrep.c.
> >
> >>   * acknowledge the commit nor raise ERROR or FATAL.  The latter would
> >> - * lead the client to believe that that the transaction aborted, which
> >>   * is not true: it's already committed locally. The former is no good
> >
> > The 'that' looks duplicate.
> 
> Agreed.
> 
> > And it might be better to put a
> > be-verb before the 'aborted'.
> >
> >> + * lead the client to believe that the transaction is aborted, which
> 
> No, that's correct the way it is.  What you're proposing wouldn't
> exactly be wrong, but it's a little less clear and direct.

Hmm. I thought they were equal in meaning and that mine was clearer, but I
understand the difference now. Thank you for the correction.

> Committed the part of your patch that removes the extra "that".

Thank you!

regards,

-- 
Kyotaro Horiguchi
NTT Open Source Software Center






Re: [HACKERS] Re: BUG #13685: Archiving while idle every archive_timeout with wal_level hot_standby

2015-12-20 Thread Michael Paquier
On Sun, Nov 8, 2015 at 9:50 PM, Michael Paquier wrote:
> On Sat, Nov 7, 2015 at 3:54 PM, Michael Paquier wrote:
>> I thought about something like that at some point: saving a minimum
>> activity pointer in XLogCtl, updated each time a segment was forcibly
>> switched or after inserting a checkpoint record. Then the bgwriter
>> would check whether the current insert position matched this minimum
>> activity pointer, skipping LogStandbySnapshot if both positions match.
>> Does this match your line of thought?
>
> Looking at the code, it occurred to me that the LSN position saved for
> an XLOG_SWITCH record is the last position of the current segment, so we
> would still need to check whether the current insert LSN matches the
> beginning of a new segment and whether the last segment was forcibly
> switched, by saving the RecPtr of RequestXLogSwitch in XLogCtl for example.
> Thoughts?

I haven't given up on this patch yet. Putting my head back into this
problem, I have finished with the attached patch, which checks whether
the current insert LSN position is at the beginning of a segment that
has just been switched, to decide whether a standby snapshot should be
logged or not. This brings an idle system back to the pre-9.3
state, where a segment would be archived in the case of a low
archive_timeout only when a checkpoint had been issued on the system.
To achieve this I have added a field to XLogCtl that
saves the position of the last forced segment switch after
XLOG_SWITCH.

Honestly, I am failing to see why we should track the progress since the
last checkpoint as mentioned upthread, and the current behavior is
certainly a regression.

Speaking of which, this patch was registered in this CF; I am moving
it to the next one as a bug fix.
Regards,
-- 
Michael
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 147fd53..6608666 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -526,6 +526,8 @@ typedef struct XLogCtlData
 	XLogRecPtr	RedoRecPtr;		/* a recent copy of Insert->RedoRecPtr */
 	uint32		ckptXidEpoch;	/* nextXID & epoch of latest checkpoint */
 	TransactionId ckptXid;
+	XLogRecPtr	forcedSegSwitchLSN;	/* LSN position of last forced segment
+	 * switch */
 	XLogRecPtr	asyncXactLSN;	/* LSN of newest async commit/abort */
 	XLogRecPtr	replicationSlotMinLSN;	/* oldest LSN needed by any slot */
 
@@ -6315,6 +6317,7 @@ StartupXLOG(void)
 	 checkPoint.newestCommitTs);
 	XLogCtl->ckptXidEpoch = checkPoint.nextXidEpoch;
 	XLogCtl->ckptXid = checkPoint.nextXid;
+	XLogCtl->forcedSegSwitchLSN = InvalidXLogRecPtr;
 
 	/*
 	 * Initialize replication slots, before there's a chance to remove
@@ -8988,6 +8991,10 @@ RequestXLogSwitch(void)
 	XLogBeginInsert();
 	RecPtr = XLogInsert(RM_XLOG_ID, XLOG_SWITCH);
 
+	SpinLockAcquire(&XLogCtl->info_lck);
+	XLogCtl->forcedSegSwitchLSN = RecPtr;
+	SpinLockRelease(&XLogCtl->info_lck);
+
 	return RecPtr;
 }
 
@@ -10628,6 +10635,21 @@ GetXLogWriteRecPtr(void)
 }
 
 /*
+ * Get last WAL position where an XLOG segment has been forcibly switched.
+ */
+XLogRecPtr
+GetXLogLastSwitchPtr(void)
+{
+	XLogRecPtr	last_switch_lsn;
+
+	SpinLockAcquire(&XLogCtl->info_lck);
+	last_switch_lsn = XLogCtl->forcedSegSwitchLSN;
+	SpinLockRelease(&XLogCtl->info_lck);
+
+	return last_switch_lsn;
+}
+
+/*
  * Returns the redo pointer of the last checkpoint or restartpoint. This is
  * the oldest point in WAL that we still need, if we have to restart recovery.
  */
diff --git a/src/backend/postmaster/bgwriter.c b/src/backend/postmaster/bgwriter.c
index 65465d6..ddd6efc 100644
--- a/src/backend/postmaster/bgwriter.c
+++ b/src/backend/postmaster/bgwriter.c
@@ -315,14 +315,28 @@ BackgroundWriterMain(void)
   LOG_SNAPSHOT_INTERVAL_MS);
 
 			/*
-			 * only log if enough time has passed and some xlog record has
-			 * been inserted.
+			 * Only log if enough time has passed and some xlog record has
+			 * been inserted on a new segment. On an idle system where
+			 * segments can be archived at a fast pace with, for example, a
+			 * low archive_timeout setting, also avoid logging a new
+			 * standby snapshot if the current insert position is still
+			 * at the beginning of the segment that has just been switched.
 			 */
-			if (now >= timeout &&
-				last_snapshot_lsn != GetXLogInsertRecPtr())
+			if (now >= timeout)
 			{
-				last_snapshot_lsn = LogStandbySnapshot();
-				last_snapshot_ts = now;
+				XLogRecPtr	insert_lsn = GetXLogInsertRecPtr();
+				XLogRecPtr	last_forced_switch_lsn = GetXLogLastSwitchPtr();
+				XLogSegNo	insert_segno;
+
+				XLByteToSeg(insert_lsn, insert_segno);
+
+				if (last_snapshot_lsn != insert_lsn &&
+					!XLByteInPrevSeg(last_forced_switch_lsn, insert_segno) &&
+					(insert_lsn % XLOG_SEG_SIZE) != SizeOfXLogLongPHD)
+				{
+					last_snapshot_lsn = LogStandbySnapshot();
+					last_snapshot_ts = now;
+				}
 			}
 		}
 
diff --git a/src/include/access/xlog.h b/src/include/access/xlog.h
index 

[HACKERS] SET SESSION AUTHORIZATION superuser limitation.

2015-12-20 Thread Dmitry Igrishin
Hackers,

There is a feature which may be useful in conjunction with connection pools:
the ability to change the session user without creating a new
connection, like this:
(pseudo REPL):
notsuperuser > SELECT current_user, session_user;
notsuperuser notsuperuser
notsuperuser > SET SESSION AUTHORIZATION notsuperuser2 PASSWORD
'password_of_notsuperuser2';
SET SESSION AUTHORIZATION
notsuperuser2 > SELECT current_user, session_user;
notsuperuser2 notsuperuser2
notsuperuser2 > SET ROLE user3;
notsuperuser2 > SELECT current_user, session_user;
user3 notsuperuser2
According to [1], SET SESSION AUTHORIZATION can only be
used by superusers. Would it be possible to extend it for use by
non-superusers as well?

[1]
http://www.postgresql.org/docs/9.4/static/sql-set-session-authorization.html

-- 
// Dmitry.


Re: [HACKERS] SET SESSION AUTHORIZATION superuser limitation.

2015-12-20 Thread Tom Lane
Dmitry Igrishin writes:
> There is a feature which may be useful in conjunction with connection pools:
> the ability to change the session user without creating a new
> connection, like this:
> (pseudo REPL):
> notsuperuser > SELECT current_user, session_user;
> notsuperuser notsuperuser
> notsuperuser > SET SESSION AUTHORIZATION notsuperuser2 PASSWORD
> 'password_of_notsuperuser2';
> SET SESSION AUTHORIZATION
> notsuperuser2 > SELECT current_user, session_user;
> notsuperuser2 notsuperuser2
> notsuperuser2 > SET ROLE user3;
> notsuperuser2 > SELECT current_user, session_user;
> user3 notsuperuser2
> According to [1], SET SESSION AUTHORIZATION can only be
> used by superusers. Would it be possible to extend it for use by
> non-superusers as well?

The syntax you propose exposes the user's password in cleartext in
the command, where it is likely to get captured in logs for example.
That's not going to do.  It also assumes that the user *has* a password
that should be honored unconditionally, which is not the case in many
authentication setups.

Also, you have failed to explain why SET ROLE isn't an adequate substitute
for the cases that would plausibly be allowable to non-superusers.

Lastly, no connection pool that I would trust would use such a command
rather than maintaining separate connections for each userid.  There's
too much risk of security problems from leftover session state.

regards, tom lane

PS: this has all been hashed out before.  See the archives.




Re: [HACKERS] SET SESSION AUTHORIZATION superuser limitation.

2015-12-20 Thread Dmitry Igrishin
2015-12-20 21:47 GMT+03:00 Tom Lane:

> Dmitry Igrishin writes:
> > There is a feature which may be useful in conjunction with connection
> > pools: the ability to change the session user without creating a new
> > connection, like this:
> > (pseudo REPL):
> > notsuperuser > SELECT current_user, session_user;
> > notsuperuser notsuperuser
> > notsuperuser > SET SESSION AUTHORIZATION notsuperuser2 PASSWORD
> > 'password_of_notsuperuser2';
> > SET SESSION AUTHORIZATION
> > notsuperuser2 > SELECT current_user, session_user;
> > notsuperuser2 notsuperuser2
> > notsuperuser2 > SET ROLE user3;
> > notsuperuser2 > SELECT current_user, session_user;
> > user3 notsuperuser2
> > According to [1], SET SESSION AUTHORIZATION can only be
> > used by superusers. Is it possible to extend it for use by not only
> > superusers?
>
> The syntax you propose exposes the user's password in cleartext in
> the command, where it is likely to get captured in logs for example.
> That's not going to do.

Uh, I'm not proposing exactly this syntax; I just used it to explain the idea.
Secondly, there is CREATE ROLE ... [ENCRYPTED] PASSWORD,
which can also be captured by logs?..

> It also assumes that the user *has* a password
> that should be honored unconditionally, which is not the case in many
> authentication setups.
>
Not really. Why not just signal an error from SET SESSION AUTHORIZATION
if the target user doesn't have a password?

>
> Also, you have failed to explain why SET ROLE isn't an adequate substitute
> for the cases that would plausibly be allowable to non-superusers.
>
Suppose a role 'web' which is used as the role for a pool. SET ROLE is
useless in this case, since every "guest" can use it to become any user
he/she wants, because SET ROLE doesn't require a password.
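
For illustration, a minimal sketch of the membership problem he describes
(role names hypothetical):

    -- the pooler connects as role "web", a member of roles alice and bob
    SET ROLE alice;  -- succeeds with no password check
    RESET ROLE;
    SET ROLE bob;    -- equally easy: any client sharing the pooled
                     -- connection can pick any role that web belongs to
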

>
> Lastly, no connection pool that I would trust would use such a command
> rather than maintaining separate connections for each userid.  There's
> too much risk of security problems from leftover session state.
>
Creating a new (personal) connection for each HTTP request in order to use
PostgreSQL's privileges is too expensive. The feature I'm talking about is
a sort of optimization.


-- 
// Dmitry.


Re: [HACKERS] custom function for converting human readable sizes to bytes

2015-12-20 Thread Pavel Stehule
Hi

new update:

1. Unit searching is case-insensitive.

2. Initial support for binary byte prefixes - KiB, MiB, .. (IEC standard),
and changed behavior for SI units.

The second point is much more complex than it looks, if pg_size_bytes
is to be consistent with pg_size_pretty.

The current pg_size_pretty and the transformations in guc.c are based on
the JEDEC standard. Using this standard for GUCs makes sense - using it for
object sizes is probably unfortunate.

I tried to fix (and enhance) pg_size_pretty - it now reports correct units,
and via a second parameter it allows specifying the base: 2 (binary, IEC -
the default) or 10 (SI).

I think it is good to have this. These standards are generic and widely
used, but they should be clearly explained in the documentation if we keep
using JEDEC for configuration. It is probably better to leave JEDEC behind
and prefer SI and IEC.

Plan B is to fix Postgres on JEDEC only - trivial and simple - but it may
look archaic in a few years.

Comments, notices?

Regards

Pavel
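
For illustration, a sketch of the behaviour this version of the patch
proposes (these are the patch's proposed semantics, per its unit table and
second parameter, not necessarily what was ultimately committed):

    SELECT pg_size_bytes('1 MiB');   -- 1048576, IEC binary prefix
    SELECT pg_size_bytes('1 MB');    -- 1000000, SI decimal prefix
    SELECT pg_size_pretty(1048576::bigint, 2);   -- '1 MiB' (base-2 default)
    SELECT pg_size_pretty(1000000::bigint, 10);  -- '1 MB'  (base-10, SI)
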
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
new file mode 100644
index 60b9a09..d9a4f34
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
*** postgres=# SELECT * FROM pg_xlogfile_nam
*** 17607,17612 
--- 17607,17615 
  pg_relation_size
 
 
+ pg_size_bytes
+
+
  pg_size_pretty
 
 
*** postgres=# SELECT * FROM pg_xlogfile_nam
*** 17678,17699 


 
! pg_size_pretty(bigint)
  
 text
 
   Converts a size in bytes expressed as a 64-bit integer into a
!  human-readable format with size units
 


 
! pg_size_pretty(numeric)
  
 text
 
   Converts a size in bytes expressed as a numeric value into a
!  human-readable format with size units
 


--- 17681,17715 


 
! pg_size_bytes(text)
! 
!bigint
!
!  Converts a size in human-readable format with size units
!  into bytes. The parameter is a case-insensitive string. The following
!  units are supported: B, kB, MB, GB, TB, PB, KiB, MiB, TiB, PiB.
!
!   
!   
!
! pg_size_pretty(bigint , int)
  
 text
 
   Converts a size in bytes expressed as a 64-bit integer into a
!  human-readable format with size units. The second parameter allows
!  specifying the base (2 or 10). The binary base is the default.
 


 
! pg_size_pretty(numeric , int)
  
 text
 
   Converts a size in bytes expressed as a numeric value into a
!  human-readable format with size units. The second parameter allows
!  specifying the base (2 or 10). The binary base is the default.
 


diff --git a/src/backend/utils/adt/dbsize.c b/src/backend/utils/adt/dbsize.c
new file mode 100644
index 5ee59d0..819da35
*** a/src/backend/utils/adt/dbsize.c
--- b/src/backend/utils/adt/dbsize.c
***
*** 31,36 
--- 31,64 
  #include "utils/relmapper.h"
  #include "utils/syscache.h"
  
+ #define MAX_UNIT_LEN		3
+ #define MAX_DIGITS		20
+ 
+ typedef struct
+ {
+ 	char		unit[MAX_UNIT_LEN + 1];
+ 	long int	multiplier;
+ } unit_multiplier;
+ 
+ static const unit_multiplier unit_multiplier_table[] =
+ {
+ 	{"B", 1L},
+ 	{"kiB", 1024L},
+ 	{"MiB", 1024L * 1024},
+ 	{"GiB", 1024L * 1024 * 1024},
+ 	{"TiB", 1024L * 1024 * 1024 * 1024},
+ 	{"PiB", 1024L * 1024 * 1024 * 1024 * 1024},
+ 	{"B", 1L},
+ 	{"kB", 1000L},
+ 	{"MB", 1000L * 1000},
+ 	{"GB", 1000L * 1000 * 1000},
+ 	{"TB", 1000L * 1000 * 1000 * 1000},
+ 	{"PB", 1000L * 1000 * 1000 * 1000 * 1000},
+ 
+ 	{""}/* end of table marker */
+ };
+ 
+ 
  /* Divide by two and round towards positive infinity. */
  #define half_rounded(x)   (((x) + ((x) < 0 ? 0 : 1)) / 2)
  
*** calculate_table_size(Relation rel)
*** 409,415 
   * Calculate total on-disk size of all indexes attached to the given table.
   *
   * Can be applied safely to an index, but you'll just get zero.
!  */
  static int64
  calculate_indexes_size(Relation rel)
  {
--- 437,443 
   * Calculate total on-disk size of all indexes attached to the given table.
   *
   * Can be applied safely to an index, but you'll just get zero.
!  */	
  static int64
  calculate_indexes_size(Relation rel)
  {
*** pg_total_relation_size(PG_FUNCTION_ARGS)
*** 526,574 
  }
  
  /*
!  * formatting with size units
   */
  Datum
  pg_size_pretty(PG_FUNCTION_ARGS)
  {
  	int64		size = PG_GETARG_INT64(0);
  	char		buf[64];
- 	int64		limit = 10 * 1024;
- 	int64		limit2 = limit * 2 - 1;
  
! 	if (Abs(size) < limit)
! 		snprintf(buf, sizeof(buf), INT64_FORMAT " bytes", size);
! 	else
  	{
! 		size >>= 9;/* keep one extra bit for rounding */
! 		if (Abs(size) < limit2)
! 			snprintf(buf, sizeof(buf), INT64_FORMAT " kB",

Re: [HACKERS] [PATCH] Equivalence Class Filters

2015-12-20 Thread David Rowley
On 16 December 2015 at 13:26, Simon Riggs wrote:

> There is an interesting real world case where we might get some use of
> these thoughts.
>
> If we have Orders and OrderItems (FK->Orders)
> and we also know (and can Assert) Order.order_date <= OrderItems.ship_date
> then a restriction on Orders.order_date > X => OrderItem.ship_date > X
> when the two tables are joined on OrderId
> and also a restriction on OrderItems.ship_date >= X => Orders.order_date <
> X when the two tables are joined on OrderId
>
> Such an assertion could be checked during the FK check, so would not be
> expensive to maintain.
>
> One for the future, at least, since we don't have any way of expressing or
> enforcing that just yet.
>
>
That does sound interesting, but it's important to remember that referenced
tables are not updated in real time in the same way that indexes are. This
was the reason the INNER JOIN removals had problems: we simply can't
determine at planner time that the trigger queue for the foreign key will
be empty during execution, so we can't be certain that the foreign key will
be "true".

I'm just mentioning this as I wouldn't want someone to run off thinking
this was a fantastic idea without being aware of the above, and waste time
making the same mistakes I did last year.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] Additional role attributes && superuser review

2015-12-20 Thread Michael Paquier
On Tue, Dec 1, 2015 at 9:18 AM, Michael Paquier wrote:
> On Tue, Dec 1, 2015 at 3:32 AM, Stephen Frost wrote:
>> * Robert Haas (robertmh...@gmail.com) wrote:
>>> On Fri, Nov 20, 2015 at 12:29 PM, Stephen Frost wrote:
>>> > * Michael Paquier (michael.paqu...@gmail.com) wrote:
>>> >> On Thu, Nov 19, 2015 at 7:10 AM, Stephen Frost wrote:
>>> >> > * Michael Paquier (michael.paqu...@gmail.com) wrote:
>>> >> >> It seems weird to not have a dedicated role for pg_switch_xlog.
>>> >> >
>>> >> > I didn't add a pg_switch_xlog default role in this patch series, but
>>> >> > would be happy to do so if that's the consensus.  It's quite easy to 
>>> >> > do.
>>> >>
>>> >> Agreed. I am not actually getting why that's part of the backup
>>> >> actually. That would be more related to archiving, both being
>>> >> unrelated concepts. But at this point I guess that's mainly a
>>> >> philosophical split.
>>> >
>>> > As David notes, they're actually quite related.  Note that in our
>>> > documentation pg_switch_xlog() is listed in the "Backup Control
>>> > Functions" table.
>>> >
>>> > I can think of a use-case for a user who can call pg_switch_xlog, but
>>> > not pg_start_backup()/pg_stop_backup(), but I have to admit that it
>>> > seems rather limited and I'm on the fence about it being a worthwhile
>>> > distinction.
>>>
>>> Sounds too narrow to me.  Are we going to have a separate predefined
>>> role for every security-restricted function to which someone might
>>> want to grant access?  That seems over the top to me.
>>
>> I certainly don't want to go down to that level and was, as seen above,
>> unsure about having pg_switch_xlog() as a differentiated privilege.
>> Michael, do you still see that as a useful independent capability?
>
> OK, let's do so then by having this one fall under pg_backup. Let's
> not let my grunting concerns be an obstacle for this patch; we
> could still change it afterwards during this release's beta cycle
> based on user feedback.

Three weeks later...
This thread has not moved an iota. Stephen, are you planning to work
more on this patch? It seems that we found a consensus. If nothing
happens, I am afraid that the destiny of this patch will be to be
returned with feedback; this is the 5th CF where this entry has been
registered.
-- 
Michael




Re: [HACKERS] Re: BUG #13685: Archiving while idle every archive_timeout with wal_level hot_standby

2015-12-20 Thread Michael Paquier
On Sun, Dec 20, 2015 at 10:14 PM, Michael Paquier wrote:
> Speaking of which, this patch was registered in this CF, I am moving
> it to the next as a bug fix.

I found a stupid bug in my previous patch: when issuing XLOG_SWITCH, it
is possible that the returned LSN pointer is on the new segment that has
been forcibly archived, if RequestXLogSwitch is called multiple times
and the subsequent calls are not necessary. Patch updated.
-- 
Michael


archive-timeout-v5.patch
Description: binary/octet-stream



Re: [HACKERS] Additional LWLOCK_STATS statistics

2015-12-20 Thread Jesper Pedersen

On 12/18/2015 01:16 PM, Robert Haas wrote:

Is this just for informational purposes, or is this something you are
looking to have committed?  I originally thought the former, but now
I'm wondering if I misinterpreted your intent.  I have a hard time
getting excited about committing something that would, unless I'm
missing something, pretty drastically increase the overhead of running
with LWLOCK_STATS...



Yeah, so unless other people using LWLOCK_STATS find the additional
information useful (given the extra overhead), I think we can mark it as
"Returned with feedback" or "Rejected".


Alternatively, I can redo the patch to require an additional #define -
e.g. LWLOCK_STATS_QUEUE_SIZES.


Best regards,
 Jesper





Re: [HACKERS] Threads in PostgreSQL

2015-12-20 Thread Chapman Flack
On 12/21/15 01:24, Atri Sharma wrote:
On Mon, Dec 21, 2015, sri harsha wrote:
> 
>> I am using threads in my
>> foreign data wrapper and I get the following error when I use them.
>>
>> ERROR:  stack depth limit exceeded
>> HINT:  Increase the configuration parameter "max_stack_depth" (currently
>> 2048kB), after ensuring the platform's stack depth limit is adequate.

> PostgreSQL is a process-per-backend model.

To elaborate on that, it is a process per backend and most of the code
in that process has not been written for safe execution by multiple
threads. If your code starts other threads that only do other things,
but only one ever touches PostgreSQL backend code, that can be ok.
This happens in PL/Java, but it uses several interrelated precautions
to make sure no thread ever enters PostgreSQL backend code unless every
other thread is known to be out.

Even if you are taking that precaution, if the backend code is entered
by a different thread than last executed there, the stack depth tests
may be made by comparing the last thread's stack base to the current
thread's stack pointer, which will naturally give you a bogus result.
There is some API in miscadmin.h for manipulating the backend's idea
of the stack base, but there be dragons.

I am far short of the knowledgeable voices here, but in case you don't
hear from them right away, that is at least how I understand the matter.

Chapman Flack




Re: [HACKERS] Re: In-core regression tests for replication, cascading, archiving, PITR, etc.

2015-12-20 Thread Michael Paquier
On Sat, Dec 12, 2015 at 8:29 PM, Michael Paquier wrote:
> On Sat, Dec 12, 2015 at 11:37 AM, Noah Misch wrote:
>> On Fri, Dec 11, 2015 at 09:34:34PM +0900, Michael Paquier wrote:
>>> On Fri, Dec 11, 2015 at 8:48 PM, Alvaro Herrera wrote:
>>> > Michael Paquier wrote:
>>> >> On Fri, Dec 11, 2015 at 5:35 AM, Alvaro Herrera wrote:
>>> >> I guess that to complete your idea we could allow PostgresNode to get
>>> >> a custom name for its log file through an optional parameter like
>>> >> logfile => 'myname' or similar. And if nothing is defined, process
>>> >> falls back to applname. So this would give the following:
>>> >> ${testname}_${logfile}.log
>>> >
>>> > Sure. I don't think we should use the name only for the log file, though,
>>> > but also for things like the "## " informative messages we print here
>>> > and there.  That would make the log file simpler to follow.  Also, I'm
>>> > not sure about having it be optional.  (TBH I'm not sure about applname
>>> > either; why do we keep that one?)
>>>
>>> OK, so let's do this: the node name is a mandatory argument of
>>> get_new_node, which is passed to "new PostgresNode" like the port and
>>> the host, and it is then used in the log file name as well as in the
>>> information messages you are mentioning. That's a patch simple enough.
>>> Are you fine with this approach?
>>
>> Sounds reasonable so far.
>
> OK, done so.
>
>>> Regarding the application name, I still think it is useful to have it
>>> though. pg_rewind should actually use it, and the other patch adding
>>> the recovery routines will use it.
>>
>> Using the application_name connection parameter is fine, but I can't think of
>> a reason to set it to "node_".$node->port instead of $node->name.  And I 
>> can't
>> think of a use for the $node->applname field once you have $node->name.  What
>> use case would benefit?
>
> I have the applname stuff, and updated the log messages to use the
> node name for clarity.
>
> The patch to address those points is attached.

As this thread is stalling a bit, please find attached a series of
patches gathering all the pending issues for this thread:
- 0001, fix config_default.pl for MSVC builds to take into account TAP tests
- 0002, append a node name in get_new_node (per Noah's request)
- 0003, the actual recovery test suite
Hopefully this facilitates future reviews.
Regards,
-- 
Michael


0001-Fix-default-configuration-of-MSVC-builds-ignoring-TA.patch
Description: binary/octet-stream


0002-Assign-node-name-to-TAP-tests.patch
Description: binary/octet-stream


0003-Add-recovery-test-suite.patch
Description: binary/octet-stream



Re: [HACKERS] Threads in PostgreSQL

2015-12-20 Thread Konstantin Knizhnik

Hi,

PostgreSQL does not use threads, but it is possible to spawn threads in
your PostgreSQL extensions.

For example, I have used a pool of threads in my IMCS extension.
But you need to build your extension with -pthread:

CUSTOM_COPT = -pthread

Also, please take into account that many PostgreSQL functions (even in/out
or comparison functions) are not reentrant: they store their state
in global variables.
So you will get race conditions if you call such functions from
multiple threads.


Concerning the stack overflow, I think the most probable cause is
trivial infinite recursion.

Did you inspect the stack trace in a debugger?






On 21.12.2015 09:21, sri harsha wrote:

Hi,

   Is it possible to use threads in PostgreSQL? I am using threads
in my foreign data wrapper and I get the following error when I use
them.


ERROR:  stack depth limit exceeded
HINT:  Increase the configuration parameter "max_stack_depth"
(currently 2048kB), after ensuring the platform's stack depth limit is
adequate.


No matter how much I increase the stack size, the error keeps
occurring. How do I solve this problem?



Thanks,
Harsha




Re: [HACKERS] Parallel Aggregate

2015-12-20 Thread David Rowley
On 21 December 2015 at 17:23, Haribabu Kommi wrote:

>
> Attached is the latest performance report. Parallel aggregation has some
> overhead in the low-selectivity case. This can be avoided with the help of
> a cost comparison between normal and parallel aggregates.
>
>
Hi, Thanks for posting an updated patch.

Would you be able to supply a bit more detail on your benchmark? I'm
surprised by the slowdown reported with the high selectivity version. It
gives me the impression that the benchmark might be producing lots of
groups which need to be pushed through the tuple queue to the main process.
I think it would be more interesting to see benchmarks with varying number
of groups, rather than scan selectivity. Selectivity was important for
parallel seqscan, but less so for this, as it's aggregated groups we're
sending to the main process, not individual tuples.
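
For example, something along these lines (hypothetical table and queries)
would vary the group count while keeping the scan constant:

    -- few groups: little traffic through the tuple queue
    SELECT val % 10 AS grp, count(*) FROM big_table GROUP BY 1;
    -- many groups: each aggregated group is pushed to the main process
    SELECT val % 1000000 AS grp, count(*) FROM big_table GROUP BY 1;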

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: [HACKERS] Additional role attributes && superuser review

2015-12-20 Thread Stephen Frost
Michael,

* Michael Paquier (michael.paqu...@gmail.com) wrote:
> On Tue, Dec 1, 2015 at 9:18 AM, Michael Paquier wrote:
> > OK, let's do so then by having this one fall under pg_backup. Let's
> > not let my grunting concerns be an obstacle for this patch; we
> > could still change it afterwards during this release's beta cycle
> > based on user feedback.
> 
> Three weeks later...
> This thread has not moved an iota. Stephen, are you planning to work
> more on this patch? It seems that we found a consensus. If nothing
> happens, I am afraid that the destiny of this patch will be to be
> returned with feedback; this is the 5th CF where this entry has been
> registered.

Ok, seems you're right that we've got consensus on it.  I'll post an
updated patch later today which I'll plan to commit.

Thanks!

Stephen


signature.asc
Description: Digital signature


Re: [HACKERS] extend pgbench expressions with functions

2015-12-20 Thread Michael Paquier
On Sat, Dec 19, 2015 at 10:32 PM, Fabien COELHO wrote:
>
>
>>> After looking again at the code, I remembered why doubles are useful:
>>> they are needed for random exponential & gaussian because the last
>>> parameter is a double.
>>>
>>> I do not care about the sqrt, but doubles must be allowed to keep that,
>>> and the randoms are definitely useful for a pgbench script. Now the patch
>>> may just keep double constants, but it would look awkward, and the doc
>>> must explain why 1.3 and 1+2 are okay, but not 1.3 + 2.4.
>>>
>>> So I'm less keen on removing double expressions, because it removes a key
>>> feature. If it is a blocker I'll go for just the constant, but this looks
>>> to me like a stupid compromise.
>>
>>
>> Hm, say that you do that in a script:
>>   \set aid double(1.4)
>>   \set bid random_gaussian(1, 10, :aid)
>> Then what is passed as the third argument to random_gaussian is 1, and
>> not 1.4, no?
>
>
> Indeed.
>
> Maybe pgbench should just generate an error when a variable is assigned a
> double, so that the user must explicitly add an int() cast.

I would honestly just remove this whole int() and double() business
from what is available to the user, and clearly mention in the
documentation the return type and the types of the arguments
of each function. And that's already what your patch is doing.

>> If all allocations within a variable are unconditionally integers, why is
>> it useful to make the cast function double() user-visible?
>
> I'm not sure whether we are talking about the same thing:
>  - there a "double" type managed within expressions, but not variables
>  - there is a double() function, which takes an int and casts to double
>
> I understood that you were suggesting to remove all "double" expressions,
> but now it seems to be just about the double() function.

There is indeed a misunderstanding here: I meant from the start the
removal of only the "double" function. It would be nice to keep as
user-visible only things that have some meaning.

>> I am just doubting that it is actually necessary to make that visible at
>> user-level if they have no direct use..
>
> If there are both ints and doubles, then being able to cast makes sense, so
> I just put in both functions without thinking deeper.
> So I would suggest generating an error when a double expression is
> assigned to a variable, so as to avoid any surprise.
> If both types are kept, I would like to keep the debug functions, which are
> really just a debug tool to look at what is going on within expressions.

Well, if there were double return values really allocated as
doubles in variables, having both would make sense. And honestly,
something like sqrt that returns an integer when assigned to a
variable is really surprising. And as you mentioned upthread, there is
no real meaning in having double variable types that can be allocated.

(Moving this patch to next CF btw)
-- 
Michael




Re: [HACKERS] WIP: bloom filter in Hash Joins with batches

2015-12-20 Thread Tomas Vondra

Hi,

On 12/20/2015 05:46 AM, Oleg Bartunov wrote:

Tomas,

have you seen
http://www.postgresql.org/message-id/4b4dd67f.9010...@sigaev.ru
I have a very limited internet connection (no graphics), so I may have
missed something.


I haven't seen that, but I don't really see how that's related - your 
post is about indexes, mine is about building temporary bloom filters 
when executing hash joins.


FWIW, I think bloom filters should be easy to add to BRIN indexes, as 
another type of 'summary'. That should address most of the missing 
pieces in your implementation (e.g. WAL).


regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] [PATCH] Equivalence Class Filters

2015-12-20 Thread David Fetter
On Sun, Dec 20, 2015 at 10:27:35PM +1300, David Rowley wrote:
> On 16 December 2015 at 13:26, Simon Riggs wrote:
> 
> > There is an interesting real world case where we might get some
> > use of these thoughts.
> >
> > If we have Orders and OrderItems (FK->Orders) and we also know
> > (and can Assert) Order.order_date <= OrderItems.ship_date then a
> > restriction on Orders.order_date > X => OrderItem.ship_date > X
> > when the two tables are joined on OrderId and also a restriction
> > on OrderItems.ship_date >= X => Orders.order_date < X when the two
> > tables are joined on OrderId
> >
> > Such an assertion could be checked during the FK check, so would
> > not be expensive to maintain.
> >
> > One for the future, at least, since we don't have any way of
> > expressing or enforcing that just yet.
> >
> That does sound interesting, but it's important to remember that
> referenced tables are not updated in real time in the same way that
> indexes are.

Is getting them so even remotely possible, given the system we have
now?

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate

