Qingqing Zhou <[EMAIL PROTECTED]> writes:
> Yeah, understood. What I can't understand is why it costs so much in
> this case -- without concurrency, the LWLock code path just involves
> spinlock lock/unlock and several simple instructions?

I don't see those costing nearly as much as your results show
... perhaps there's something platform-specific at work?
What I see, down to the 1% level, is

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total           
 time   seconds   seconds    calls  ms/call  ms/call  name    
 37.98     13.91    13.91                             _mcount
  6.53     16.30     2.39  5242900     0.00     0.00  heapgettup
  3.33     17.52     1.22 10542596     0.00     0.00  LockBuffer
  3.30     18.73     1.21  5242880     0.00     0.00  advance_transition_function
  2.68     19.71     0.98  5242880     0.00     0.00  IncrBufferRefCount
  2.46     20.61     0.90  5385174     0.00     0.00  LWLockRelease
  2.38     21.48     0.87  5271273     0.00     0.00  ReleaseAndReadBuffer
  2.35     22.34     0.86  5385174     0.00     0.00  LWLockAcquire
  2.18     23.14     0.80  5242938     0.00     0.00  ReleaseBuffer
  2.10     23.91     0.77  5242900     0.00     0.00  ExecStoreTuple
  1.97     24.63     0.72                             noshlibs
  1.91     25.33     0.70  5242900     0.00     0.00  SeqNext
  1.72     25.96     0.63  5271294     0.00     0.00  ResourceOwnerRememberBuffer
  1.72     26.59     0.63  5242900     0.00     0.00  heap_getnext
  1.72     27.22     0.63  5242880     0.00     0.00  advance_aggregates
  1.69     27.84     0.62  5242940     0.00     0.00  ExecProcNode
  1.64     28.44     0.60                             $$dyncall
  1.61     29.03     0.59  5242900     0.00     0.00  MemoryContextReset
  1.53     29.59     0.56  5242880     0.00     0.00  HeapTupleSatisfiesSnapshot
  1.45     30.12     0.53  5242880     0.00     0.00  int8inc
  1.37     30.62     0.50  5243140     0.00     0.00  ExecClearTuple
  1.17     31.05     0.43  5242880     0.00     0.00  ExecEvalExprSwitchContext
  1.15     31.47     0.42  5271294     0.00     0.00  ResourceOwnerForgetBuffer
  1.12     31.88     0.41                             SeqNext
  1.09     32.28     0.40  5271294     0.00     0.00  ResourceOwnerEnlargeBuffers
  1.09     32.68     0.40  5242900     0.00     0.00  ExecScan
  1.04     33.06     0.38  5242900     0.00     0.00  ExecSeqScan

(This is for 20, not 10, iterations of your example, but otherwise it's
the same test case.)

I've since gotten rid of the IncrBufferRefCount, ReleaseBuffer,
and ResourceOwnerXXX entries by eliminating some inefficiency in
ExecStoreTuple, so that puts the buffer lock stuff further up,
but it's still not all that critical by my numbers.
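To make the cost question concrete, here is a minimal model of the
uncontended LWLock fast path being discussed: a spinlock-protected
shared/exclusive state, where an acquire with no contention is just a
test-and-set, a couple of branches, and a store.  This is a hedged
sketch -- the names (MiniLWLock, MiniLWLockAcquireShared, etc.) are
illustrative stand-ins, not the actual declarations in lwlock.c:

```c
#include <stdbool.h>

/* Simplified stand-in for an LWLock: a spinlock guarding a
 * shared/exclusive holder count.  Real PostgreSQL uses slock_t and
 * falls back to sleeping on a semaphore under contention. */
typedef struct
{
    volatile int mutex;      /* stand-in for the slock_t spinlock */
    bool         exclusive;  /* true if held exclusively */
    int          shared;     /* count of shared holders */
} MiniLWLock;

static bool
MiniLWLockAcquireShared(MiniLWLock *lock)
{
    /* SpinLockAcquire(&lock->mutex): with no concurrency this is a
     * single test-and-set that succeeds immediately. */
    lock->mutex = 1;

    if (lock->exclusive)
    {
        lock->mutex = 0;
        return false;        /* real code would queue and sleep here */
    }
    lock->shared++;

    lock->mutex = 0;         /* SpinLockRelease */
    return true;
}

static void
MiniLWLockReleaseShared(MiniLWLock *lock)
{
    lock->mutex = 1;
    lock->shared--;
    lock->mutex = 0;
}
```

Cheap as that is, the profile shows it still runs twice per tuple
(LockBuffer share/unlock), which is why moving it to once per page is
attractive.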

> What's more, we can see that for each row, an LWLock acquire/release
> pair is invoked. So on a more aggressive thought, can we change it to
> page level?

Yeah, I was wondering the same.  It'd be possible to rewrite the seqscan
stuff so that we do the visibility tests for all the tuples on a given
page at once, taking the buffer content lock just once, and saving aside
the valid tuple IDs to return later.  This should definitely be faster
when all the tuples actually get fetched.  It might be a bit slower for
a LIMIT query, but I'm not sure if we care that much.  The only other
objection I can think of is that if there are any broken tuples on a
page, this approach would likely make it impossible to fetch any of the
non-broken ones :-(
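The page-at-a-time idea above can be sketched as follows.  This is an
assumption-laden illustration, not the actual heapam code: the types
and function names (PageScanState, scan_page, next_tuple,
tuple_is_visible) are invented for the example, and the per-tuple
visibility test stands in for HeapTupleSatisfiesSnapshot run under the
buffer content lock:

```c
#include <stdbool.h>

#define MaxTuplesPerPage 256

/* Per-page scan state: visibility is checked for all tuples while the
 * content lock is held once, and the surviving line numbers are saved
 * aside to be returned later without re-locking. */
typedef struct
{
    int ntuples;                    /* visible tuples found on the page */
    int visible[MaxTuplesPerPage];  /* their line numbers, saved aside */
    int cindex;                     /* next saved entry to hand out */
} PageScanState;

/* Stand-in for HeapTupleSatisfiesSnapshot on one tuple. */
static bool
tuple_is_visible(const bool *committed, int lineno)
{
    return committed[lineno];
}

static void
scan_page(PageScanState *state, const bool *committed, int ntuples_on_page)
{
    /* LockBuffer(buf, BUFFER_LOCK_SHARE) -- taken once per page */
    state->ntuples = 0;
    state->cindex = 0;
    for (int lineno = 0; lineno < ntuples_on_page; lineno++)
    {
        if (tuple_is_visible(committed, lineno))
            state->visible[state->ntuples++] = lineno;
    }
    /* LockBuffer(buf, BUFFER_LOCK_UNLOCK) */
}

/* Returns the next visible line number, or -1 when the page is
 * exhausted; no buffer lock is needed on this path. */
static int
next_tuple(PageScanState *state)
{
    if (state->cindex >= state->ntuples)
        return -1;
    return state->visible[state->cindex++];
}
```

This makes both trade-offs visible: the lock pair amortizes over the
whole page, but scan_page eagerly touches every tuple, which is wasted
work for a LIMIT query that stops early, and a tuple bad enough to
crash the visibility test takes the whole page's tuples down with it.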

                        regards, tom lane