_accounts | (1050391,6)
> pgbench_accounts | (1158640,46)
> pgbench_accounts | (1238067,18)
> pgbench_accounts | (1273282,22)
> pgbench_accounts | (1355816,54)
> pgbench_accounts | (1361880,33)
> (8 rows)
>
>
Is this output of pg_check_visible() or pg_check_frozen()?
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
On Thu, Jun 9, 2016 at 7:44 PM, Robert Haas wrote:
>
> On Thu, Jun 9, 2016 at 9:37 AM, Amit Kapila
wrote:
> > On Thu, Jun 9, 2016 at 8:47 AM, Robert Haas
wrote:
> >> On Mon, Jun 6, 2016 at 6:07 AM, Amit Kapila
> >> wrote:
> >> > That seems doable,
On Thu, Jun 9, 2016 at 8:47 AM, Robert Haas wrote:
>
> On Mon, Jun 6, 2016 at 6:07 AM, Amit Kapila
wrote:
> > That seems doable, as for such rels we can only have Vars and
> > PlaceHolderVars in targetlist. Basically, whenever we are adding
> > PlaceHolderVars to a rela
On Thu, Jun 9, 2016 at 8:48 AM, Amit Kapila wrote:
>
> On Wed, Jun 8, 2016 at 6:31 PM, Robert Haas wrote:
> >
> >
> > Here's my proposal:
> >
> > 1. You already implemented a function to find non-frozen tuples on
> > supposedly all-frozen pag
sedly
> all-visible pages.
>
I am planning to name them pg_check_frozen and pg_check_visible; let me
know if something else suits better.
On Wed, Jun 8, 2016 at 6:31 PM, Robert Haas wrote:
>
> On Wed, Jun 8, 2016 at 4:01 AM, Amit Kapila
wrote:
> > If we want to address both page level and tuple level inconsistencies, I
> > could see below possibility.
> >
> > 1. An API that returns setof record
On Wed, Jun 8, 2016 at 11:39 AM, Andres Freund wrote:
>
> On 2016-06-08 10:04:56 +0530, Amit Kapila wrote:
> > On Tue, Jun 7, 2016 at 11:01 PM, Andres Freund
wrote:>
> > > I think if we go with the pg_check_visibility approach, we should also
> > > copy
g as a block level
> issue?
>
Given the way this module currently provides information, it seems better to
have separate APIs for block-level and tuple-level inconsistencies. For the
block level, I think most of the information can be retrieved by existing
APIs; for the tuple level, this new API
On Wed, Jun 8, 2016 at 8:37 AM, Robert Haas wrote:
>
> On Tue, Jun 7, 2016 at 10:19 AM, Amit Kapila
wrote:
> > I have implemented the above function in attached patch. Currently, it
> > returns SETOF tupleids, but if we want some variant of same, that should
> > also
nk that we should use BufferIsValid() here.
>
We can use BufferIsValid() as well, but I am trying to be consistent with
the nearby code; refer to collect_visibility_data(). We can change it in all
places together if people prefer that.
nge in page (CLOG-page) format,
which might not be trivial work to accomplish.
plemented the above function in attached patch. Currently, it
returns SETOF tupleids, but if we want some variant of same, that should
also be possible.
pg_check_visibility_func_v1.patch
as for that
we adjust target list separately in set_append_rel_size. I think we need
to deal with it separately.
prohibit_parallel_clause_below_rel_v2.patch
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
On Sat, Jun 4, 2016 at 8:43 AM, Robert Haas wrote:
>
> On Thu, May 26, 2016 at 5:57 AM, Amit Kapila
wrote:
> >
> > I am able to reproduce the assertion (it occurs once in two to three
times,
> > but always at same place) you have reported upthread with the above
query.
&
On Fri, Jun 3, 2016 at 6:49 PM, Konstantin Knizhnik <
k.knizh...@postgrespro.ru> wrote:
>
>
> On 03.06.2016 16:05, Amit Kapila wrote:
>
> On Fri, Jun 3, 2016 at 1:34 AM, Konstantin Knizhnik <
> k.knizh...@postgrespro.ru> wrote:
>
>> We have to add three m
* Deserialize transaction state
> */
> void(*DeserializeTransactionState)(void* ctx);
>
>
In the above proposal, are you suggesting changing the existing APIs as
well? The parameters of the function pointers don't match the existing APIs.
I think it i
eration,
but that is not everything. I think calling it max_parallelism as
suggested by Alvaro upthread suits better than max_parallel_workers.
with respect to
simple query. However, I agree that it is better if statement_timeout is
the timeout for each execution of the parsed statement.
'll be putting CancelRequested checks back in at some point.
>
> >
https://www.postgresql.org/message-id/20150122174601.gb1...@alvh.no-ip.org
>
> Hmm, did the patch you're discussing there get committed?
>
Yes, it was committed - a1792320
duplicate, and keep a flag about
> ignoring nextval in the context variable?
>
Makes sense. +1 for doing it the way you are suggesting.
ot the person to ask.
>
I think points (2) and (3) are related, because _endthreadex won't close the
thread handle explicitly [1].
[1] - https://msdn.microsoft.com/en-us/library/kdzttdcb.aspx
Refer to the line: "*_endthread* automatically closes the thread handle,
whereas *_endthreadex* does not."
eK1Ky2=HsTsT4hmfL=eal5rv0_t59tvwzvk9hqkvn6do...@mail.gmail.com
[2] -
https://www.postgresql.org/message-id/CAA4eK1L-Uo=s4=0jvvva51pj06u5wddvsqg673yuxj_ja+x...@mail.gmail.com
[3] -
https://www.postgresql.org/message-id/CAFiTN-vzg5BkK6kAh3OMhvgRu-uJvkjz47ybtopMAfGJp=z...@mail.gmail.com
On Wed, May 25, 2016 at 9:44 PM, Michael Paquier
wrote:
>
> On Wed, May 25, 2016 at 12:11 AM, Amit Kapila
wrote:
> >
> > Okay, attached patch just does that and I have verified that it allows
to
> > start multiple services in windows. In off list discussion with
Robert,
ref_0.histogram_bounds as c2, 100 as c3
from
pg_catalog.pg_stats as ref_0
where 49 is not NULL limit 55) as subq_0
where true
limit 58;
stop_processing_tuples_detached_queue_v1.patch
On Tue, May 17, 2016 at 2:31 AM, Michael Paquier
wrote:
>
> On Tue, May 17, 2016 at 4:16 AM, Amit Kapila
wrote:
> > On Mon, May 16, 2016 at 9:45 AM, Michael Paquier <
michael.paqu...@gmail.com>
> > wrote:
> >>
> >> On Sun, May 15, 2016 at 3:34 PM, Amit
On Mon, May 23, 2016 at 4:48 PM, Andreas Seltenreich
wrote:
>
> Amit Kapila writes:
>
> > Earlier problems were due to the reason that some unsafe/restricted
> > expressions were pushed below Gather node as part of target list
whereas in
> > the plan6, it seems some
> >>
> >> TRAP: FailedAssertion("!(mqh->mqh_partial_bytes <= nbytes)", File:
"shm_mq.c", Line: 386)
> >
> > I no longer observe these after applying these two patches by Amit
> > Kapila
>
> I spoke too soon: These still occur with the p
On Sun, May 22, 2016 at 9:32 PM, Andreas Seltenreich
wrote:
>
> Amit Kapila writes:
>
> > avoid_restricted_clause_below_gather_v1.patch
> > prohibit_parallel_clause_below_rel_v1.patch
>
> I didn't observe any parallel worker related coredumps since applying
>
t the value of nextXID in your patch same as
lastSaneFrozenXid in most cases (I mean there is a small window where some
new transaction might have started due to which the value of
ShmemVariableCache->nextXid has been advanced)? So isn't relying on
lastSaneFrozenXid check sufficient?
re is any form of parallel sort work.
On Mon, May 16, 2016 at 9:45 AM, Michael Paquier
wrote:
>
> On Sun, May 15, 2016 at 3:34 PM, Amit Kapila
wrote:
> > Sounds sensible, but if we want to that route, shall we have some
mechanism
> > such that if retrying it for 10 times (10 is somewhat arbitrary, but we
>
On Sat, May 14, 2016 at 7:33 PM, Robert Haas wrote:
>
> On Tue, Mar 22, 2016 at 12:56 AM, Amit Kapila
wrote:
> >> >> Yes, same random number generation is not the problem. In windows
apart
> >> >> from EEXIST error, EACCES also needs to be validated and re
ed
buffers, then Mithun has already reported above [1] that he didn't see any
regression for that case.
[1] -
http://www.postgresql.org/message-id/cad__ouiobznvtt_ho__p5aenu4inqcfwgarxr4tblke-uxy...@mail.gmail.com
Read the line: "Even for READ-WRITE when data fits into shared buffer
(scale_factor=300 and shared_buffers=8GB) performance has improved."
On Fri, May 13, 2016 at 9:43 AM, Amit Kapila
wrote:
>
> On Thu, May 12, 2016 at 11:37 PM, Tom Lane wrote:
> >
> > Robert Haas writes:
> > >> Target list for a relation, you mean? See relation.h:
> > >>
> > >> * reltarget - Def
l, plain rels only output Vars ;-)
>
Does this mean that base rels can't contain PlaceHolderVars? Isn't it
possible in the below code:
query_planner()
{
..
/*
* Now distribute "placeholders" to base rels as needed. This has to be
* done after join removal because removal could c
On Sat, May 7, 2016 at 6:37 PM, Amit Kapila wrote:
>
> On Fri, May 6, 2016 at 8:45 AM, Tom Lane wrote:
> >
> > Andreas Seltenreich writes:
> > > when fuzz testing master as of c1543a8, parallel workers trigger the
> > > following assertion in ExecInitSubPla
't have any code to perform
incomplete splits, the logic for locking/pins during Insert is yet to be
done, among many other things.
[1] -
http://www.postgresql.org/message-id/ca+tgmozymojsrfxhxq06g8jhjxqcskvdihb_8z_7nc7hj7i...@mail.gmail.com
[2] - http://www.postgresql.org/message-id/531992af.2
uses below gather path.
Now back to the original bug: if you look at the plan file attached to the
original bug report, the subplan is pushed below the Gather node in the
target list, but not at the immediate join; rather, it is one more level
down, at the SeqScan path. I am still not sure how it has managed to push the restr
t cut, something like the attached.
>
Patch looks good to me. I have done some testing with hash and btree
indexes and it works as expected.
er scalability
I think we should add that as a significant change.
On Thu, May 5, 2016 at 11:52 AM, Thomas Munro
wrote:
>
> On Thu, May 5, 2016 at 4:32 PM, Tom Lane wrote:
> > Amit Kapila writes:
> >> How about using 512 bytes as a write size and perform direct writes
rather
> >> than going via OS buffer cache for control fi
se is not having a sane range limit on the GUC.
>
I think it might not be advisable for this value to exceed the number of
CPU cores, so how about limiting it to 512 or 1024?
with a certain xmin horizon is
> taken.
Are you talking here about the snapshot time (snapshot->whenTaken), which is
updated at the time of GetSnapshotData()?
On Wed, May 4, 2016 at 8:03 PM, Tom Lane wrote:
>
> Amit Kapila writes:
> > On Wed, May 4, 2016 at 4:02 PM, Alex Ignatov
> > wrote:
> >> On 03.05.2016 2:17, Tom Lane wrote:
> >>> Writing a single sector ought to be atomic too.
>
> >> pg_co
8k record of
> pg_control should be written first. It can be last sector or say sector
> number 10 from 16.
The actual data written is always sizeof(ControlFileData), which should be
less than one sector. I think a torn write for pg_control is only possible
if, while writing and fsyncing, the filesystem maps that data
to different sectors.
_old or
atomic pin/unpin for 9.6. Can we consider postponing beta1 so that the
patch authors get time to resolve the blocking issues? I think there could
be a strong argument that it is just a waste of time if the situation
doesn't improve much even after the delay, but it seems we ca
ve) the function that adjusts the xmin is called for a vacuum or
> pruning. He mentioned one and I mentioned the other, but it's all
> controlled by TransactionIdLimitedForOldSnapshots().
>
Yes, I think we are saying the same thing here.
vote goes with changing the default of max_parallel_degree to
1 (as suggested by Peter G.).
On Sun, May 1, 2016 at 12:05 PM, Amit Kapila
wrote:
> Currently we do the test for old snapshot (TestForOldSnapshot) for hash
> indexes while scanning them. Does this test make any sense for hash
> indexes considering LSN on hash index will always be zero (as hash indexes
> are no
() will always return false, which means that the error
"snapshot too old" won't be generated for hash indexes.
Am I missing something here? If not, I think we need a way to prohibit
pruning for hash indexes based on old_snapshot_threshold.
On Sat, Apr 30, 2016 at 5:58 AM, Andreas Seltenreich
wrote:
>
> Alvaro Herrera writes:
> > Amit Kapila wrote:
> >> It will be helpful if you can find the offending query or plan
> >> corresponding to it?
> >
> > So I suppose the PID of the process start
On Fri, Apr 29, 2016 at 7:15 PM, Tom Lane wrote:
>
> Amit Kapila writes:
> > On Fri, Apr 29, 2016 at 12:01 PM, Andreas Seltenreich <
seltenre...@gmx.de>
> > wrote:
> >> tonight's sqlsmith run yielded another core dump:
> >>
> >> TRAP:
On Fri, Apr 29, 2016 at 7:33 PM, Tom Lane wrote:
>
> Amit Kapila writes:
> >> On Thu, Apr 28, 2016 at 10:06 PM, Tom Lane wrote:
> >>> I'd be inclined to think that it's silly to build GatherPaths in
advance
> >>> of having finalized the list o
below another Gather node, which makes a worker execute
the Gather node. Currently there is no support for workers to launch
other workers, and ideally such a plan should not be generated. It would
be helpful if you could find the offending query or plan corresponding to it.
relation and vice versa. So now
the second call to add_paths_to_joinrel() can replace a partial path that
is being referenced by a GatherPath generated in the first call. I think we
should generate gather paths for the join rel after both calls
to add_paths_to_joinrel(), i.e. in make_join_rel(). Attached pat
..@lab.ntt.co.jp>
> > At Sat, 23 Apr 2016 10:12:03 -0400, Tom Lane wrote
> in <476.1461420...@sss.pgh.pa.us>
> > > Amit Kapila writes:
> > > > The main point for this improvement is that the handling for guc
> s_s_names
> > > > is not simila
On Mon, Apr 25, 2016 at 6:04 PM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> On Sun, Apr 17, 2016 at 7:32 PM, Amit Kapila
> wrote:
>
>> On Thu, Apr 14, 2016 at 8:05 AM, Andres Freund
>> wrote:
>> >
>> > On 2016-04-14 07:59:07 +0530, Amit
; "max_parallel_workers"?
>
Degree of Parallelism is a term used in many other popular databases
for a similar purpose, so I think that is another reason to prefer
max_parallel_degree.
On Sat, Apr 23, 2016 at 5:20 PM, Michael Paquier
wrote:
>
> On Sat, Apr 23, 2016 at 7:44 PM, Amit Kapila
wrote:
> > On Wed, Apr 20, 2016 at 12:46 PM, Kyotaro HORIGUCHI
> > wrote:
> >>
> >>
> >> assign_s_s_names causes SEGV when it is called without
On Tue, Apr 19, 2016 at 8:41 PM, Kevin Grittner wrote:
>
> On Tue, Apr 19, 2016 at 9:57 AM, Amit Kapila
wrote:
> > On Sun, Apr 17, 2016 at 2:26 AM, Andres Freund
wrote:
> >>
> >> On 2016-04-16 16:44:52 -0400, Noah Misch wrote:
> >> > That is more contr
lt); in
assign_synchronous_standby_names at below place:
+ /* Copy the parsed config into TopMemoryContext if exists */
+ if (syncrep_parse_result)
+ SyncRepConfig = SyncRepCopyConfig(syncrep_parse_result);
Could you please explain how to trigger the scenario where you saw the
SEGV?
n I think the
current implementation done by Kevin is closer to what Oracle provides.
On Thu, Apr 21, 2016 at 6:38 AM, Ants Aasma wrote:
>
> On Tue, Apr 19, 2016 at 6:11 PM, Kevin Grittner wrote:
> > On Tue, Apr 19, 2016 at 9:57 AM, Amit Kapila
wrote:
> >> On Sun, Apr 17, 2016 at 2:26 AM, Andres Freund
wrote:
> >>>
> >>> FWIW, I coul
On Wed, Apr 20, 2016 at 7:39 PM, Andres Freund wrote:
>
> On 2016-04-19 20:27:31 +0530, Amit Kapila wrote:
> > On Sun, Apr 17, 2016 at 2:26 AM, Andres Freund
wrote:
> > >
> > > On 2016-04-16 16:44:52 -0400, Noah Misch wrote:
> > > > That is more contr
ly
>> find the problem. But I'm OK with changing the default to 2.
>>
>> I'm curious.
>
> Why not 4?
IIUC, the idea behind changing max_parallel_degree for beta is to catch any
bugs in the parallelism code, not to do performance testing of the same.
So I think either 1 or 2 should be sufficient to hit the bugs, if there are
any. Do you have any reason to think that we might miss some category of
bugs if we don't use a higher max_parallel_degree?
On Tue, Apr 19, 2016 at 8:44 PM, Robert Haas wrote:
>
> On Tue, Apr 19, 2016 at 11:11 AM, Kevin Grittner
wrote:
> > On Tue, Apr 19, 2016 at 9:57 AM, Amit Kapila
wrote:
>
> >> It seems that for read-only workloads, MaintainOldSnapshotTimeMapping()
> >> takes EXC
>latest_xmin?
If we don't need it for the above cases, I think it can address the
performance regression to a good degree for read-only workloads when the
feature is enabled.
On Fri, Apr 15, 2016 at 1:59 AM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> On Thu, Apr 14, 2016 at 5:35 AM, Andres Freund wrote:
>
>> On 2016-04-14 07:59:07 +0530, Amit Kapila wrote:
>> > What you want to see by prewarming?
>>
>> Prewarmin
On Thu, Apr 14, 2016 at 8:05 AM, Andres Freund wrote:
>
> On 2016-04-14 07:59:07 +0530, Amit Kapila wrote:
> > What you want to see by prewarming?
>
> Prewarming appears to greatly reduce the per-run variance on that
> machine, making it a lot easier to get meaningful result
ith such buffer pools can bypass ring buffers and use unused
shared buffers), retain or keep buffers (relations that are frequently
accessed can be associated with this kind of buffer pool where buffers can
stay for longer time) and a default buffer pool (all relations by default
will be associated with default buffer pool where the behaviour will be
same as current).
On Fri, Apr 15, 2016 at 11:30 AM, Kyotaro HORIGUCHI <
horiguchi.kyot...@lab.ntt.co.jp> wrote:
>
> At Fri, 15 Apr 2016 08:52:56 +0530, Amit Kapila
wrote :
> >
> > How about if we do all the parsing stuff in temporary context and then
copy
> > the results using TopMe
opMemoryContext, because next time we try to check/assign s_s_names, it
will free the previous result.
>
> Changing
> > SyncRepConfigData.members to be char** would be messier..
>
> SyncRepGetSyncStandby logic assumes deeply that the sync standby names
> are constructed as a list.
> I think that it would entail a radical change in SyncRepGetStandby
> Another idea is to prepare the some functions that allocate/free
> element of list using by malloc, free.
>
Yeah, that could be another way of doing it, but it seems like much more work.
Alexander and I are
working on a similar observation (run-to-run performance variation) in a
nearby thread [1].
[1] -
http://www.postgresql.org/message-id/20160412160246.nyzil35w3wein...@alap3.anarazel.de
ed the contention (spinlocks) in dynahash tables, it might be
interesting to run the tests again.
> FWIW, I've posted an implementation of this in the checkpoint flushing
> thread; I saw quite substantial gains with it. It was just entirely
> unrealistic to push that into 9.6.
>
Sounds good. I remember you mentioned last time that such an idea could
benefit the bulk-load case when data doesn't fit in shared buffers; is it
the same case where you saw a benefit, or other cases like the read-only
and read-write tests as well?
On Wed, Apr 13, 2016 at 10:47 PM, Robert Haas wrote:
>
>
> I would be inclined to view this as a reasonable 9.6 cleanup of
> parallel query, but other people may wish to construe things more
> strictly than I would.
>
+1.
yContext to allocate the memory in the check or assign function, or
should we allocate a temporary context (like we do in load_tzoffsets())
to perform the parsing and then delete it at the end.
On Tue, Apr 12, 2016 at 9:32 PM, Andres Freund wrote:
>
> On 2016-04-12 19:42:11 +0530, Amit Kapila wrote:
> > Andres suggested me on IM to take performance data on x86 m/c
> > by padding PGXACT and the data for the same is as below:
> >
> > median of 3, 5-min run
ht need such padding, or maybe optimize them so
that they are aligned.
I can do some more experiments along similar lines, but I am on vacation
and might not be able to access the m/c for 3-4 days.
pad_pgxac
veLock, the reason being it affects
> SELECTs.
>
> That is supposed to apply when things might change the answer from a
> SELECT, whereas this affects only the default for a plan.
>
>
By this theory, shouldn't any other parameter like n_distinct_inherited,
which just affects the p
On Mon, Apr 11, 2016 at 7:33 PM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> On Sun, Apr 10, 2016 at 2:24 PM, Amit Kapila
> wrote:
>>
>> I also tried to run perf top during pgbench and get some interesting
>>> results.
>>>
&g
On Sun, Apr 10, 2016 at 6:15 PM, Amit Kapila
wrote:
> On Sun, Apr 10, 2016 at 11:10 AM, Alexander Korotkov <
> a.korot...@postgrespro.ru> wrote:
>
>> On Sun, Apr 10, 2016 at 7:26 AM, Amit Kapila
>> wrote:
>>
>>> On Sun, Apr 10, 2016 at 1:13 AM, Andres
On Sun, Apr 10, 2016 at 11:10 AM, Alexander Korotkov <
a.korot...@postgrespro.ru> wrote:
> On Sun, Apr 10, 2016 at 7:26 AM, Amit Kapila
> wrote:
>
>> On Sun, Apr 10, 2016 at 1:13 AM, Andres Freund
>> wrote:
>>
>>> On 2016-04-09 22:38:31 +0300, Alexand
but with increased
clog buffers, it started showing a noticeable gain. If by any chance you
can apply that patch and see the results, that would be helpful (the latest
patch is at [2]).
[1] -
http://www.postgresql.org/message-id/CAD__Ouic1Tvnwqm6Wf6j7Cz1Kk1DQgmy0isC7=ogx+3jtfg...@mail.gmail.com
[2] -
http://www.postgresql.org/message-id/cad__ouiwei5she2wwqck36ac9qmhvjuqg3cfpn+ofcmb7rd...@mail.gmail.com
. Alexander, if you try a read-write workload with unlogged
tables, then we should see an improvement.
On Fri, Apr 8, 2016 at 9:00 PM, Andres Freund wrote:
>
> On 2016-03-31 15:07:22 +0530, Amit Kapila wrote:
> > I think we should change comments on top of this function. I have
changed
> > the comments as per my previous patch and attached the modified patch
with
> > this
On Thu, Apr 7, 2016 at 5:49 PM, Fujii Masao wrote:
>
> On Thu, Apr 7, 2016 at 7:29 PM, Amit Kapila
wrote:
> >
> > So if we go by this each time backend calls pg_stat_get_wal_senders, it
> > needs to do parsing to form SyncRepConfig whether it's changed or n
On Thu, Apr 7, 2016 at 6:48 PM, Andres Freund wrote:
> On 2016-04-07 18:40:14 +0530, Amit Kapila wrote:
>
> > This is the data with -b tpcb-like@1 with 20-min run for each version
> and I
> > could see almost similar results as the data posted in previous e-mail.
> >
On Wed, Apr 6, 2016 at 10:49 PM, Julien Rouhaud
wrote:
>
> On 06/04/2016 07:38, Amit Kapila wrote:
> > On Tue, Apr 5, 2016 at 11:55 PM, Julien Rouhaud
> >>
> >> In alter_table.sgml, I didn't comment the lock level needed to modify
> >> parallel_degre
On Thu, Apr 7, 2016 at 6:48 PM, Andres Freund wrote:
>
> On 2016-04-07 18:40:14 +0530, Amit Kapila wrote:
> > This is the data with -b tpcb-like@1 with 20-min run for each version
and I
> > could see almost similar results as the data posted in previous e-mail.
> >
> &g
On Thu, Apr 7, 2016 at 10:16 AM, Andres Freund wrote:
>
> Hi,
>
> On 2016-04-07 09:14:00 +0530, Amit Kapila wrote:
> > On Sat, Apr 2, 2016 at 5:25 PM, Amit Kapila
wrote:
> > I have ran exactly same test on intel x86 m/c and the results are as
below:
>
>
On Thu, Apr 7, 2016 at 1:30 PM, Amit Langote
wrote:
>
> On 2016/04/07 15:26, Fujii Masao wrote:
> > On Thu, Apr 7, 2016 at 2:48 PM, Amit Kapila
wrote:
> >> On Thu, Apr 7, 2016 at 10:02 AM, Fujii Masao
wrote:
> >>> Yes if the variable that we'd like to pass
On Thu, Apr 7, 2016 at 11:56 AM, Fujii Masao wrote:
>
> On Thu, Apr 7, 2016 at 2:48 PM, Amit Kapila
wrote:
> > On Thu, Apr 7, 2016 at 10:02 AM, Fujii Masao
wrote:
> >>
> >> On Thu, Apr 7, 2016 at 1:22 PM, Amit Kapila
> >> wrote:
> >> >
> &
On Thu, Apr 7, 2016 at 10:02 AM, Fujii Masao wrote:
>
> On Thu, Apr 7, 2016 at 1:22 PM, Amit Kapila
wrote:
> >
> > But for that, I think we don't need to do anything extra. I mean
> > write_nondefault_variables() will automatically write the non-default
value
&g
On Wed, Apr 6, 2016 at 8:11 PM, Fujii Masao wrote:
>
> On Wed, Apr 6, 2016 at 11:14 PM, Amit Kapila
wrote:
> > On Wed, Apr 6, 2016 at 7:03 PM, Fujii Masao
wrote:
> >>
> >> On Wed, Apr 6, 2016 at 8:59 PM, Amit Kapila
> >> wrote:
> >> >
> &
On Sat, Apr 2, 2016 at 5:25 PM, Amit Kapila wrote:
> On Thu, Mar 31, 2016 at 3:48 PM, Andres Freund wrote:
>
> Here is the performance data (configuration of machine used to perform
> this test is mentioned at end of mail):
>
> Non-
On Wed, Apr 6, 2016 at 7:03 PM, Fujii Masao wrote:
>
> On Wed, Apr 6, 2016 at 8:59 PM, Amit Kapila
wrote:
> >
> >> BTW, we can move SyncRepUpdateConfig() just after ProcessConfigFile()
> >> from pg_stat_get_wal_senders() and every backends always parse the
va
On Wed, Apr 6, 2016 at 11:17 AM, Fujii Masao wrote:
>
> On Tue, Apr 5, 2016 at 11:40 PM, Amit Kapila
wrote:
> >>
> >> > 2.
> >> > pg_stat_get_wal_senders()
> >> > {
> >> > ..
> >> > /*
> >> > ! * Allo
On Tue, Apr 5, 2016 at 11:55 PM, Julien Rouhaud
wrote:
>
> On 05/04/2016 06:19, Amit Kapila wrote:
> >
> > Few more comments:
> >
> > 1.
> > @@ -909,6 +909,17 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } |
> > UNLOGGED ] TABLE [ IF NOT EXI
>
On Tue, Apr 5, 2016 at 5:35 PM, Magnus Hagander wrote:
>
>
> On Mon, Apr 4, 2016 at 3:15 PM, Amit Kapila
> wrote:
>
>> On Mon, Apr 4, 2016 at 4:31 PM, Magnus Hagander
>> wrote:
>>
>>> On Fri, Apr 1, 2016 at 6:47 AM, Amit Kapila
>>> wrote:
>
On Tue, Apr 5, 2016 at 9:00 PM, Andres Freund wrote:
>
> On 2016-04-05 20:56:31 +0530, Amit Kapila wrote:
> > This fluctuation started appearing after commit 6150a1b0 which we have
> > discussed in another thread [1] and a colleague of mine is working on to
> > write a p