2014-02-25 11:29 GMT+01:00 Pavel Stehule <[email protected]>:
> Hello
>
>
> 2014-02-24 21:31 GMT+01:00 Jeff Janes <[email protected]>:
>
> On Mon, Feb 24, 2014 at 7:02 AM, Pavel Stehule <[email protected]> wrote:
>>
>>>
>>>
>>>
>>> 2014-02-23 21:32 GMT+01:00 Andres Freund <[email protected]>:
>>>
>>> Hi,
>>>>
>>>> On 2014-02-23 20:04:39 +0100, Pavel Stehule wrote:
>>>> > There are relatively few, but very long, waits on the ProcArrayLock
>>>> > lwlock.
>>>> >
>>>> > This issue is very pathological on fast computers with more than 8
>>>> > CPUs. It was detected after a migration from 8.4 to 9.2 (but tested
>>>> > with the same result on 9.0). I see it on current 9.4 devel as well.
>>>> >
>>>> > When I moved the PREPARE out of the cycle, the described issue was
>>>> > gone. But when I use EXECUTE IMMEDIATELY, the issue is back. So it
>>>> > looks like it is related to the planner, ...
>>>>
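
The PREPARE-vs-dynamic-SQL distinction above can be sketched as SQL (a hypothetical workload shape; the table and statement names are invented for illustration and do not come from the thread):

```sql
-- Prepared once, outside the loop: the plan is built a single time and
-- each EXECUTE skips the planner, which is where the waits disappeared.
PREPARE fetch_row(int) AS SELECT * FROM t WHERE id = $1;
-- loop body:
EXECUTE fetch_row(42);

-- By contrast, dynamic SQL (e.g. PL/pgSQL's EXECUTE of a query string)
-- replans on every iteration, and the contention reappears:
-- EXECUTE format('SELECT * FROM t WHERE id = %s', i);
```
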
>>>> In addition to the issue Jeff mentioned, I'd suggest trying the same
>>>> workload with repeatable read. That can do *wonders* because of the
>>>> reduced number of snapshots.
>>>>
>>>>
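
Why fewer snapshots help can be shown with a toy model (Python, purely illustrative; `MockProcArray` and the counters are invented here, not PostgreSQL code): under READ COMMITTED every statement takes a fresh snapshot, and each snapshot acquisition takes ProcArrayLock in shared mode, while REPEATABLE READ reuses the first statement's snapshot for the whole transaction.

```python
class MockProcArray:
    """Stand-in for the shared proc array; counts lock acquisitions."""
    def __init__(self):
        self.lock_acquisitions = 0

    def get_snapshot(self):
        # Models taking ProcArrayLock in shared mode to build a snapshot.
        self.lock_acquisitions += 1
        return object()

def run_transaction(procarray, n_statements, isolation):
    snapshot = None
    for _ in range(n_statements):
        if isolation == "read committed" or snapshot is None:
            snapshot = procarray.get_snapshot()
        # ... execute the statement under `snapshot` ...

pa_rc = MockProcArray()
run_transaction(pa_rc, 1000, "read committed")   # one snapshot per statement
pa_rr = MockProcArray()
run_transaction(pa_rr, 1000, "repeatable read")  # one snapshot per transaction
print(pa_rc.lock_acquisitions, pa_rr.lock_acquisitions)  # 1000 1
```

The 1000-to-1 reduction in lock traffic is the "wonders" being referred to.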
>>> I tested it, and it doesn't help.
>>>
>>> Is there some patch, that I can test related to this issue?
>>>
>>
>> This is the one that I was referring to:
>>
>> http://www.postgresql.org/message-id/[email protected]
>>
>
> I tested this patch with great success. The waits on ProcArrayLock are
> gone, and throughput is as expected.
>
> For the described use case it is a serious improvement.
>
Here is an updated patch for 9.4.
Regards
Pavel
>
> Regards
>
> Pavel
>
>
> light weight locks
> lockname         mode       count    avg
> DynamicLocks     Exclusive   8055   5003
> DynamicLocks     Shared      1666     50
> LockMgrLocks     Exclusive    129     36
> IndividualLock   Exclusive     34     48
> IndividualLock   Shared        21      7
> BufFreelistLock  Exclusive     12      8
> WALWriteLock     Exclusive      1  38194
> ProcArrayLock    Shared         1      8
>
>
>
>> Cheers,
>>
>> Jeff
>>
>>
>>
>
diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
new file mode 100644
index d525ca4..261473d
*** a/src/backend/utils/adt/selfuncs.c
--- b/src/backend/utils/adt/selfuncs.c
*************** get_actual_variable_range(PlannerInfo *r
*** 4958,4963 ****
--- 4958,4964 ----
HeapTuple tup;
Datum values[INDEX_MAX_KEYS];
bool isnull[INDEX_MAX_KEYS];
+ SnapshotData SnapshotDirty;
estate = CreateExecutorState();
econtext = GetPerTupleExprContext(estate);
*************** get_actual_variable_range(PlannerInfo *r
*** 4980,4985 ****
--- 4981,4987 ----
slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
econtext->ecxt_scantuple = slot;
get_typlenbyval(vardata->atttype, &typLen, &typByVal);
+ InitDirtySnapshot(SnapshotDirty);
/* set up an IS NOT NULL scan key so that we ignore nulls */
ScanKeyEntryInitialize(&scankeys[0],
*************** get_actual_variable_range(PlannerInfo *r
*** 4997,5003 ****
if (min)
{
index_scan = index_beginscan(heapRel, indexRel,
! GetActiveSnapshot(), 1, 0);
index_rescan(index_scan, scankeys, 1, NULL, 0);
/* Fetch first tuple in sortop's direction */
--- 4999,5005 ----
if (min)
{
index_scan = index_beginscan(heapRel, indexRel,
! &SnapshotDirty, 1, 0);
index_rescan(index_scan, scankeys, 1, NULL, 0);
/* Fetch first tuple in sortop's direction */
*************** get_actual_variable_range(PlannerInfo *r
*** 5029,5035 ****
if (max && have_data)
{
index_scan = index_beginscan(heapRel, indexRel,
! GetActiveSnapshot(), 1, 0);
index_rescan(index_scan, scankeys, 1, NULL, 0);
/* Fetch first tuple in reverse direction */
--- 5031,5037 ----
if (max && have_data)
{
index_scan = index_beginscan(heapRel, indexRel,
! &SnapshotDirty, 1, 0);
index_rescan(index_scan, scankeys, 1, NULL, 0);
/* Fetch first tuple in reverse direction */
--
Sent via pgsql-hackers mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers