On Fri, Dec 18, 2015 at 10:51 AM, Dilip Kumar <dilipbal...@gmail.com> wrote:

> On Sun, Jul 19 2015 9:37 PM Andres Wrote,
>
> The situation the read() protects us against is that two backends try to
> > extend to the same block, but after one of them succeeded the buffer is
> > written out and reused for an independent page. So there is no in-memory
> > state telling the slower backend that that page has already been used.
>
> I was looking into this patch and have done some performance testing.
>
> Currently I have done the testing on my local machine; later I will
> perform it on a big machine once I get access to one.
>
> Just wanted to share the current results from my local machine.
> Machine conf: Intel(R) Core(TM) i7-4870HQ CPU @ 2.50GHz, 8 cores and 16GB
> of RAM.
>
> Test Script:
> ./psql -d postgres -c "COPY (select g.i::text FROM generate_series(1,
> 10000) g(i)) TO '/tmp/copybinarywide' WITH BINARY";
>
> ./psql -d postgres -c "truncate table data"
> ./psql -d postgres -c "checkpoint"
> ./pgbench -f copy_script -T 120 -c$ -j$ postgres
>

This time I have done testing on a big machine with 64 physical cores @
2.13GHz and 50GB of RAM.

Below is a performance comparison of the base code, the
RelationExtensionLock-free patch given by Andres, and the
multi-extend patch (which extends by multiple blocks at a time, based on
a configuration parameter).


Problem Analysis:
------------------------
1. With the base code, when I observed the problem using perf and other
methods (gdb), I found that RelationExtensionLock is the main bottleneck.
2. After applying the RelationExtensionLock-free patch, the contention
moved to FileWrite (all backends are trying to extend the file).


Performance Summary and Analysis:
------------------------------------------------
1. In my performance results, multi-extend showed the best performance and
scalability.
2. I think that by extending multiple blocks at once we solve both problems
(the extension lock and parallel file writes).
3. After extending a block it is immediately added to the FSM, so in most
cases other backends can find it directly without taking the extension lock.

Currently the patch is at an initial stage; I have only tested performance
and made sure it passes the regression test suite.



Open problems
-----------------------------
1. After extending a page we add it directly to the FSM, so if vacuum finds
this page as new it will give a WARNING.
2. In RelationGetBufferForTuple, when PageIsNew we do PageInit; the same
needs to be considered for the index cases.



Test Script:
-------------------------
./psql -d postgres -c "COPY (select g.i::text FROM generate_series(1,
10000) g(i)) TO '/tmp/copybinarywide' WITH BINARY";

./psql -d postgres -c "truncate table data"
./psql -d postgres -c "checkpoint"
./pgbench -f copy_script -T 120 -c$ -j$ postgres
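The copy_script custom file that pgbench runs is not included in the mail;
presumably it is a single statement loading the binary dump created above,
along the lines of (my reconstruction, not from the original mail):

```
-- copy_script (reconstruction; the actual file was not attached):
-- each pgbench transaction bulk-loads the pre-generated binary dump
copy data from '/tmp/copybinarywide' with binary;
```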

Performance Data:
--------------------------
Three code bases were compared:
1. Base Code

2. Lock Free Patch: the patch given in the thread below
http://www.postgresql.org/message-id/20150719140746.gh25...@awork2.anarazel.de

3. Multi-extend patch, attached to this mail.
#extend_num_pages: this is a new config parameter telling how many extra
pages to extend on each normal extension. It may give the user more control
if we make it a relation property instead.
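For reference, the parameter would be set like any other GUC in
postgresql.conf (values per the attached patch: default 0, max 100):

```
# postgresql.conf
extend_num_pages = 5	# extend by 5 extra pages on each relation extension
```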

I will work on the patch for this CF, so I am adding it to the CF.


Shared Buffer 48 GB

Clients   Base (TPS)   Lock Free Patch   Multi-extend (extend_num_pages=5)
1         142          138               148
2         251          253               280
4         237          416               464
8         168          491               575
16        141          448               404
32        122          337               332



Shared Buffer 64 MB

Clients   Base (TPS)   Multi-extend (extend_num_pages=5)
1         140          148
2         252          266
4         229          437
8         153          475
16        132          364


Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
*** a/src/backend/access/brin/brin_pageops.c
--- b/src/backend/access/brin/brin_pageops.c
***************
*** 771,776 **** brin_getinsertbuffer(Relation irel, Buffer oldbuf, Size itemsz,
--- 771,781 ----
  			UnlockRelationForExtension(irel, ExclusiveLock);
  
  		page = BufferGetPage(buf);
+ 		if (PageIsNew(page))
+ 		{
+ 			MarkBufferDirty(buf);
+ 			PageInit(page, BufferGetPageSize(buf), 0);
+ 		}
  
  		/*
  		 * We have a new buffer to insert into.  Check that the new page has
*** a/src/backend/access/heap/hio.c
--- b/src/backend/access/heap/hio.c
***************
*** 393,398 **** RelationGetBufferForTuple(Relation relation, Size len,
--- 393,404 ----
  		 * we're done.
  		 */
  		page = BufferGetPage(buffer);
+ 		if (PageIsNew(page))
+ 		{
+ 			MarkBufferDirty(buffer);
+ 			PageInit(page, BufferGetPageSize(buffer), 0);
+ 		}
+ 
  		pageFreeSpace = PageGetHeapFreeSpace(page);
  		if (len + saveFreeSpace <= pageFreeSpace)
  		{
*** a/src/backend/storage/buffer/bufmgr.c
--- b/src/backend/storage/buffer/bufmgr.c
***************
*** 90,95 **** int			effective_io_concurrency = 0;
--- 90,96 ----
   * effective_io_concurrency parameter set.
   */
  int			target_prefetch_pages = 0;
+ int			extend_num_pages = 0;
  
  /* local state for StartBufferIO and related functions */
  static BufferDesc *InProgressBuf = NULL;
***************
*** 394,400 **** ForgetPrivateRefCountEntry(PrivateRefCountEntry *ref)
  static Buffer ReadBuffer_common(SMgrRelation reln, char relpersistence,
  				  ForkNumber forkNum, BlockNumber blockNum,
  				  ReadBufferMode mode, BufferAccessStrategy strategy,
! 				  bool *hit);
  static bool PinBuffer(BufferDesc *buf, BufferAccessStrategy strategy);
  static void PinBuffer_Locked(BufferDesc *buf);
  static void UnpinBuffer(BufferDesc *buf, bool fixOwner);
--- 395,401 ----
  static Buffer ReadBuffer_common(SMgrRelation reln, char relpersistence,
  				  ForkNumber forkNum, BlockNumber blockNum,
  				  ReadBufferMode mode, BufferAccessStrategy strategy,
! 				  bool *hit, Relation rel);
  static bool PinBuffer(BufferDesc *buf, BufferAccessStrategy strategy);
  static void PinBuffer_Locked(BufferDesc *buf);
  static void UnpinBuffer(BufferDesc *buf, bool fixOwner);
***************
*** 621,627 **** ReadBufferExtended(Relation reln, ForkNumber forkNum, BlockNumber blockNum,
  	 */
  	pgstat_count_buffer_read(reln);
  	buf = ReadBuffer_common(reln->rd_smgr, reln->rd_rel->relpersistence,
! 							forkNum, blockNum, mode, strategy, &hit);
  	if (hit)
  		pgstat_count_buffer_hit(reln);
  	return buf;
--- 622,628 ----
  	 */
  	pgstat_count_buffer_read(reln);
  	buf = ReadBuffer_common(reln->rd_smgr, reln->rd_rel->relpersistence,
! 							forkNum, blockNum, mode, strategy, &hit, reln);
  	if (hit)
  		pgstat_count_buffer_hit(reln);
  	return buf;
***************
*** 649,655 **** ReadBufferWithoutRelcache(RelFileNode rnode, ForkNumber forkNum,
  	Assert(InRecovery);
  
  	return ReadBuffer_common(smgr, RELPERSISTENCE_PERMANENT, forkNum, blockNum,
! 							 mode, strategy, &hit);
  }
  
  
--- 650,656 ----
  	Assert(InRecovery);
  
  	return ReadBuffer_common(smgr, RELPERSISTENCE_PERMANENT, forkNum, blockNum,
! 							 mode, strategy, &hit, NULL);
  }
  
  
***************
*** 661,667 **** ReadBufferWithoutRelcache(RelFileNode rnode, ForkNumber forkNum,
  static Buffer
  ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
  				  BlockNumber blockNum, ReadBufferMode mode,
! 				  BufferAccessStrategy strategy, bool *hit)
  {
  	BufferDesc *bufHdr;
  	Block		bufBlock;
--- 662,668 ----
  static Buffer
  ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
  				  BlockNumber blockNum, ReadBufferMode mode,
! 				  BufferAccessStrategy strategy, bool *hit, Relation rel)
  {
  	BufferDesc *bufHdr;
  	Block		bufBlock;
***************
*** 685,691 **** ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
--- 686,695 ----
  
  	/* Substitute proper block number if caller asked for P_NEW */
  	if (isExtend)
+ 	{
  		blockNum = smgrnblocks(smgr, forkNum);
+ 		//blockNum += extend_num_pages;
+ 	}
  
  	if (isLocalBuf)
  	{
***************
*** 814,823 **** ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
--- 818,836 ----
  
  	if (isExtend)
  	{
+ 		int blkCount = 0;
+ 
  		/* new buffers are zero-filled */
  		MemSet((char *) bufBlock, 0, BLCKSZ);
  		/* don't set checksum for all-zero page */
  		smgrextend(smgr, forkNum, blockNum, (char *) bufBlock, false);
+ 
+ 		while (blkCount < extend_num_pages)
+ 		{
+ 			blkCount++;
+ 			smgrextend(smgr, forkNum, blockNum+blkCount, (char *) bufBlock, false);
+ 			RecordPageWithFreeSpace(rel, blockNum+blkCount, 8126);
+ 		}
  	}
  	else
  	{
*** a/src/backend/utils/misc/guc.c
--- b/src/backend/utils/misc/guc.c
***************
*** 2683,2688 **** static struct config_int ConfigureNamesInt[] =
--- 2683,2698 ----
  		NULL, NULL, NULL
  	},
  
+ 	{
+ 		{"extend_num_pages", PGC_SUSET, RESOURCES_ASYNCHRONOUS,
+ 			gettext_noop("Sets the number of pages to extend at one time."),
+ 			NULL
+ 		},
+ 		&extend_num_pages,
+ 		0, 0, 100,
+ 		NULL, NULL, NULL
+ 	},
+ 
  	/* End-of-list marker */
  	{
  		{NULL, 0, 0, NULL, NULL}, NULL, 0, 0, 0, NULL, NULL, NULL
*** a/src/backend/utils/misc/postgresql.conf.sample
--- b/src/backend/utils/misc/postgresql.conf.sample
***************
*** 139,144 ****
--- 139,146 ----
  
  #temp_file_limit = -1			# limits per-session temp file space
  					# in kB, or -1 for no limit
+ #extend_num_pages = 0			# number of extra pages to allocate during extend
+ 					# min 0 max 100 pages
  
  # - Kernel Resource Usage -
  
*** a/src/include/storage/bufmgr.h
--- b/src/include/storage/bufmgr.h
***************
*** 60,65 **** extern PGDLLIMPORT char *BufferBlocks;
--- 60,66 ----
  
  /* in guc.c */
  extern int	effective_io_concurrency;
+ extern int	extend_num_pages;
  
  /* in localbuf.c */
  extern PGDLLIMPORT int NLocBuffer;
-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers