Re: [PATCH] Add support for choosing huge page size

2020-07-16 Thread Thomas Munro
On Mon, Jun 22, 2020 at 7:51 AM Odin Ugedal  wrote:
> Ahh, thanks. Looks like the Windows stuff isn't autogenerated, so
> maybe this new patch works..

On second thoughts, it seemed like overkill to use configure just to
detect whether macros are defined, so I dropped that and used plain
old #if defined().  I also did some minor proof-reading and editing on
the documentation and comments; I put back the bit about sysctl and
sysctl.conf because I think that is still pretty useful to highlight
for people who just want to use the default size, along with the /sys
method.

Pushed.  Thanks for the patch!  It's always nice to see notes like
this being removed:

- * Currently *mmap_flags is always just MAP_HUGETLB.  Someday, on systems
- * that support it, we might OR in additional bits to specify a particular
- * non-default huge page size.

In passing, I think GetHugePageSize() is a bit odd; it claims to have
a Linux-specific part and a generic part, and yet the whole thing is
wrapped in #ifdef MAP_HUGETLB which is Linux-specific as far as I
know.  But that's not this patch's fault.

We might want to consider removing the note about CONFIG_HUGETLB_PAGE
from the manual; I'm not sure if kernels built without that stuff are
still roaming in the wild, or if it's another anachronism due for
removal like commit c8be915a.  I didn't do that today, though.




Re: [PATCH] Add support for choosing huge page size

2020-06-21 Thread Andres Freund
Hi,

On 2020-06-18 16:00:49 +1200, Thomas Munro wrote:
> Unfortunately I can't access the TLB miss counters on this system due
> to virtualisation restrictions, and the systems where I can don't have
> 1GB pages.  According to cpuid(1) this system has a fairly typical
> setup:
> 
>cache and TLB information (2):
>   0x63: data TLB: 2M/4M pages, 4-way, 32 entries
> data TLB: 1G pages, 4-way, 4 entries
>   0x03: data TLB: 4K pages, 4-way, 64 entries

Hm. Doesn't that system have a second level of TLB (STLB) with more 1GB
entries? I think there are some errata around what Intel exposes via
cpuid here :(

Guessing that this is a skylake server chip?
https://en.wikichip.org/wiki/intel/microarchitectures/skylake_(server)#Memory_Hierarchy

> [...] Additionally there is a unified L2 TLB (STLB)
> [...]  STLB
> [...] 1 GiB page translations:
> [...] 16 entries; 4-way set associative


> This operation is touching about 8GB of data (scanning 3.5GB of table,
> building a 4.5GB hash table) so 4 x 1GB is not enough to do this without
> TLB misses.

I assume this uses 7 workers?


> Let's try that again, except this time with shared_buffers=4GB,
> dynamic_shared_memory_main_size=4GB, and only half as many tuples in
> t, so it ought to fit:
> 
> 4KB pages:  6.37 seconds
> 2MB pages:  4.96 seconds
> 1GB pages:  5.07 seconds
> 
> Well that's disappointing.

Hm, I don't actually know the answer to this: If this actually uses
multiple workers, won't the fact that each has an independent page table
(despite having overlapping contents) lead to there being fewer 1GB
entries actually available?  Obviously depends on how processes are
scheduled (iirc hyperthreading shares dTLBs).

Might be worth looking at whether there are cpu migrations or testing
with a single worker.


> I wondered if this was something to do
> with NUMA effects on this two node box, so I tried running that again
> with postgres under numactl --cpunodebind 0 --membind 0 and I got:


> 4KB pages:  5.43 seconds
> 2MB pages:  4.05 seconds
> 1GB pages:  4.00 seconds
> 
> From this I can't really conclude that it's terribly useful to use
> larger page sizes, but it's certainly useful to have the ability to do
> further testing using the proposed GUC.

Due to the low number of 1GB entries they're quite likely to be
problematic imo, especially when there are more concurrent misses than
there are page table entries.

I'm somewhat doubtful that it's useful to use 1GB entries for all of our
shared memory when that's bigger than the maximum covered size. I
suspect that it'd be better to use 1GB entries for some and smaller entries
for the rest of the memory.

Greetings,

Andres Freund




Re: [PATCH] Add support for choosing huge page size

2020-06-21 Thread Odin Ugedal
upid implementation reasons).
>
> # echo never > /sys/kernel/mm/transparent_hugepage/enabled
> # echo 8500 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
> # echo 17 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
>
> shared_buffers=8GB
> dynamic_shared_memory_main_size=8GB
>
> create table t as select generate_series(1, 1)::int i;
> alter table t set (parallel_workers = 7);
> create extension pg_prewarm;
> select pg_prewarm('t');
> set max_parallel_workers_per_gather=7;
> set work_mem='1GB';
>
> select count(*) from t t1 join t t2 using (i);
>
> 4KB pages: 12.42 seconds
> 2MB pages:  9.12 seconds
> 1GB pages:  9.07 seconds
>
> Unfortunately I can't access the TLB miss counters on this system due
> to virtualisation restrictions, and the systems where I can don't have
> 1GB pages.  According to cpuid(1) this system has a fairly typical
> setup:
>
>cache and TLB information (2):
>   0x63: data TLB: 2M/4M pages, 4-way, 32 entries
> data TLB: 1G pages, 4-way, 4 entries
>   0x03: data TLB: 4K pages, 4-way, 64 entries
>
> This operation is touching about 8GB of data (scanning 3.5GB of table,
> building a 4.5GB hash table) so 4 x 1GB is not enough to do this without
> TLB misses.
>
> Let's try that again, except this time with shared_buffers=4GB,
> dynamic_shared_memory_main_size=4GB, and only half as many tuples in
> t, so it ought to fit:
>
> 4KB pages:  6.37 seconds
> 2MB pages:  4.96 seconds
> 1GB pages:  5.07 seconds
>
> Well that's disappointing.  I wondered if this was something to do
> with NUMA effects on this two node box, so I tried running that again
> with postgres under numactl --cpunodebind 0 --membind 0 and I got:
>
> 4KB pages:  5.43 seconds
> 2MB pages:  4.05 seconds
> 1GB pages:  4.00 seconds
>
> From this I can't really conclude that it's terribly useful to use
> larger page sizes, but it's certainly useful to have the ability to do
> further testing using the proposed GUC.
>
> [1] 
> https://www.postgresql.org/message-id/flat/CA%2BhUKGLAE2QBv-WgGp%2BD9P_J-%3Dyne3zof9nfMaqq1h3EGHFXYQ%40mail.gmail.com
From fa3b30a32032bf38c8dc72de9656526a5d5e8daa Mon Sep 17 00:00:00 2001
From: Odin Ugedal 
Date: Sun, 7 Jun 2020 21:04:57 +0200
Subject: [PATCH v4] Add support for choosing huge page size

This adds support for using non-default huge page sizes for shared
memory. This is achieved via the new "huge_page_size" config entry.
The config value defaults to 0, meaning it will use the system default.
---
 configure | 26 +++
 configure.in  |  4 ++
 doc/src/sgml/config.sgml  | 27 
 doc/src/sgml/runtime.sgml | 41 ++-
 src/backend/port/sysv_shmem.c | 69 ++-
 src/backend/utils/misc/guc.c  | 25 +++
 src/backend/utils/misc/postgresql.conf.sample |  2 +
 src/include/pg_config.h.in|  8 +++
 src/include/pg_config_manual.h|  6 ++
 src/include/storage/pg_shmem.h|  1 +
 src/tools/msvc/Solution.pm|  2 +
 11 files changed, 179 insertions(+), 32 deletions(-)

diff --git a/configure b/configure
index 2feff37fe3..11e3112ee4 100755
--- a/configure
+++ b/configure
@@ -15488,6 +15488,32 @@ _ACEOF
 
 fi # fi
 
+# Check if system supports mmap flags for allocating huge page memory with page sizes
+# other than the default
+ac_fn_c_check_decl "$LINENO" "MAP_HUGE_MASK" "ac_cv_have_decl_MAP_HUGE_MASK" "#include <sys/mman.h>
+"
+if test "x$ac_cv_have_decl_MAP_HUGE_MASK" = xyes; then :
+  ac_have_decl=1
+else
+  ac_have_decl=0
+fi
+
+cat >>confdefs.h <<_ACEOF
+#define HAVE_DECL_MAP_HUGE_MASK $ac_have_decl
+_ACEOF
+ac_fn_c_check_decl "$LINENO" "MAP_HUGE_SHIFT" "ac_cv_have_decl_MAP_HUGE_SHIFT" "#include <sys/mman.h>
+"
+if test "x$ac_cv_have_decl_MAP_HUGE_SHIFT" = xyes; then :
+  ac_have_decl=1
+else
+  ac_have_decl=0
+fi
+
+cat >>confdefs.h <<_ACEOF
+#define HAVE_DECL_MAP_HUGE_SHIFT $ac_have_decl
+_ACEOF
+
+
ac_fn_c_check_decl "$LINENO" "fdatasync" "ac_cv_have_decl_fdatasync" "#include <unistd.h>
 "
 if test "x$ac_cv_have_decl_fdatasync" = xyes; then :
diff --git a/configure.in b/configure.in
index 0188c6ff07..f56c06eb3d 100644
--- a/configure.in
+++ b/configure.in
@@ -1687,6 +1687,10 @@ AC_CHECK_FUNCS(posix_fadvise)
AC_CHECK_DECLS(posix_fadvise, [], [], [#include <fcntl.h>])
 ]) # fi
 
+# Check if system supports mmap flags for allocating huge page memory with page sizes
+# other than the default
+AC_CHECK_DECLS([MAP_HUGE_MASK, MAP_HUGE_SHIFT], [], [], [#include <sys/mman.h>])
+
AC_CHECK_DECLS(fdatasync, [], [], [#include <unistd.h>])
 AC_C

Re: [PATCH] Add support for choosing huge page size

2020-06-17 Thread Thomas Munro
Hi Odin,

Documentation syntax error "2MB" shows up as:

config.sgml:1605: parser error : Opening and ending tag mismatch:
literal line 1602 and para
   
  ^

Please install the documentation tools
https://www.postgresql.org/docs/devel/docguide-toolsets.html, rerun
configure and "make docs" to see these kinds of errors.

The build is currently failing on Windows:

undefined symbol: HAVE_DECL_MAP_HUGE_MASK at src/include/pg_config.h
line 143 at src/tools/msvc/Mkvcbuild.pm line 851.

I think that's telling us that you need to add this stuff into
src/tools/msvc/Solution.pm, so that we can say it doesn't have it.  I
don't have Windows but whenever you post a new version we'll see if
Windows likes it here:

http://cfbot.cputube.org/odin-ugedal.html

When using huge_pages=on, huge_page_size=1GB, but default
shared_buffers, I noticed that the error message reports the wrong
(unrounded) size in this message:

2020-06-18 02:06:30.407 UTC [73552] HINT:  This error usually means
that PostgreSQL's request for a shared memory segment exceeded
available memory, swap space, or huge pages. To reduce the request
size (currently 149069824 bytes), reduce PostgreSQL's shared memory
usage, perhaps by reducing shared_buffers or max_connections.

The request size was actually:

mmap(NULL, 1073741824, PROT_READ|PROT_WRITE,
MAP_SHARED|MAP_ANONYMOUS|MAP_HUGETLB|30<<MAP_HUGE_SHIFT, [...]

# echo never > /sys/kernel/mm/transparent_hugepage/enabled
# echo 8500 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# echo 17 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

shared_buffers=8GB
dynamic_shared_memory_main_size=8GB

create table t as select generate_series(1, 1)::int i;
alter table t set (parallel_workers = 7);
create extension pg_prewarm;
select pg_prewarm('t');
set max_parallel_workers_per_gather=7;
set work_mem='1GB';

select count(*) from t t1 join t t2 using (i);

4KB pages: 12.42 seconds
2MB pages:  9.12 seconds
1GB pages:  9.07 seconds

Unfortunately I can't access the TLB miss counters on this system due
to virtualisation restrictions, and the systems where I can don't have
1GB pages.  According to cpuid(1) this system has a fairly typical
setup:

   cache and TLB information (2):
  0x63: data TLB: 2M/4M pages, 4-way, 32 entries
data TLB: 1G pages, 4-way, 4 entries
  0x03: data TLB: 4K pages, 4-way, 64 entries

This operation is touching about 8GB of data (scanning 3.5GB of table,
building a 4.5GB hash table) so 4 x 1GB is not enough to do this without
TLB misses.

Let's try that again, except this time with shared_buffers=4GB,
dynamic_shared_memory_main_size=4GB, and only half as many tuples in
t, so it ought to fit:

4KB pages:  6.37 seconds
2MB pages:  4.96 seconds
1GB pages:  5.07 seconds

Well that's disappointing.  I wondered if this was something to do
with NUMA effects on this two node box, so I tried running that again
with postgres under numactl --cpunodebind 0 --membind 0 and I got:

4KB pages:  5.43 seconds
2MB pages:  4.05 seconds
1GB pages:  4.00 seconds

From this I can't really conclude that it's terribly useful to use
larger page sizes, but it's certainly useful to have the ability to do
further testing using the proposed GUC.

[1] 
https://www.postgresql.org/message-id/flat/CA%2BhUKGLAE2QBv-WgGp%2BD9P_J-%3Dyne3zof9nfMaqq1h3EGHFXYQ%40mail.gmail.com




Re: [PATCH] Add support for choosing huge page size

2020-06-10 Thread Odin Ugedal
Thanks again Thomas,

> Oh, so maybe we need a configure test for them?  And if you don't have
> it, a runtime error if you try to set the page size to something other
> than 0 (like we do for effective_io_concurrency if you don't have a
> posix_fadvise() function).

Ahh, yes, that sounds reasonable. Did some fiddling with the configure
script to add a check, and I think I got it right (though I'm not 100%
sure). Added new v3 patch.

> If you set it to an unsupported size, that seems reasonable to me.  If
> you set it to an unsupported size and have huge_pages=try, do we fall
> back to using no huge pages?

Yes, the "fallback" with huge_pages=try is the same for both
huge_page_size=0 and huge_page_size=nMB, and is the same as without
this patch.

> For what it's worth, here's what I know about this on other operating systems:

Thanks for all the background info!

> 1.  AIX can do huge pages, but only if you use System V shared memory
> (not for mmap() anonymous shared).  In
> https://commitfest.postgresql.org/25/1960/ we got as far as adding
> support for shared_memory_type=sysv, but to go further we'll need
> someone willing to hack on the patch on an AIX system, preferably with
> root access so they can grant the postgres user wired memory
> privileges (or whatever they call that over there).  But at a glance,
> they don't have a way to ask for a specific page size, just "large".

Interesting. I might get access to some AIX systems at university this fall,
so maybe I will get some time to dive into the patch.


Odin
From 8cb876bf73258646044a6a99d72e7c12d1d03e3a Mon Sep 17 00:00:00 2001
From: Odin Ugedal 
Date: Sun, 7 Jun 2020 21:04:57 +0200
Subject: [PATCH v3] Add support for choosing huge page size

This adds support for using non-default huge page sizes for shared
memory. This is achieved via the new "huge_page_size" config entry.
The config value defaults to 0, meaning it will use the system default.
---
 configure | 26 +++
 configure.in  |  4 ++
 doc/src/sgml/config.sgml  | 27 
 doc/src/sgml/runtime.sgml | 41 +++-
 src/backend/port/sysv_shmem.c | 67 ++-
 src/backend/utils/misc/guc.c  | 25 +++
 src/backend/utils/misc/postgresql.conf.sample |  2 +
 src/include/pg_config.h.in|  8 +++
 src/include/pg_config_manual.h|  6 ++
 src/include/storage/pg_shmem.h|  1 +
 10 files changed, 176 insertions(+), 31 deletions(-)

diff --git a/configure b/configure
index 2feff37fe3..11e3112ee4 100755
--- a/configure
+++ b/configure
@@ -15488,6 +15488,32 @@ _ACEOF
 
 fi # fi
 
+# Check if system supports mmap flags for allocating huge page memory with page sizes
+# other than the default
+ac_fn_c_check_decl "$LINENO" "MAP_HUGE_MASK" "ac_cv_have_decl_MAP_HUGE_MASK" "#include <sys/mman.h>
+"
+if test "x$ac_cv_have_decl_MAP_HUGE_MASK" = xyes; then :
+  ac_have_decl=1
+else
+  ac_have_decl=0
+fi
+
+cat >>confdefs.h <<_ACEOF
+#define HAVE_DECL_MAP_HUGE_MASK $ac_have_decl
+_ACEOF
+ac_fn_c_check_decl "$LINENO" "MAP_HUGE_SHIFT" "ac_cv_have_decl_MAP_HUGE_SHIFT" "#include <sys/mman.h>
+"
+if test "x$ac_cv_have_decl_MAP_HUGE_SHIFT" = xyes; then :
+  ac_have_decl=1
+else
+  ac_have_decl=0
+fi
+
+cat >>confdefs.h <<_ACEOF
+#define HAVE_DECL_MAP_HUGE_SHIFT $ac_have_decl
+_ACEOF
+
+
ac_fn_c_check_decl "$LINENO" "fdatasync" "ac_cv_have_decl_fdatasync" "#include <unistd.h>
 "
 if test "x$ac_cv_have_decl_fdatasync" = xyes; then :
diff --git a/configure.in b/configure.in
index 0188c6ff07..f56c06eb3d 100644
--- a/configure.in
+++ b/configure.in
@@ -1687,6 +1687,10 @@ AC_CHECK_FUNCS(posix_fadvise)
AC_CHECK_DECLS(posix_fadvise, [], [], [#include <fcntl.h>])
 ]) # fi
 
+# Check if system supports mmap flags for allocating huge page memory with page sizes
+# other than the default
+AC_CHECK_DECLS([MAP_HUGE_MASK, MAP_HUGE_SHIFT], [], [], [#include <sys/mman.h>])
+
AC_CHECK_DECLS(fdatasync, [], [], [#include <unistd.h>])
 AC_CHECK_DECLS([strlcat, strlcpy, strnlen])
 # This is probably only present on macOS, but may as well check always
diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index aca8f73a50..42f06a41cb 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1582,6 +1582,33 @@ include_dir 'conf.d'
   
  
 
+ 
+  huge_page_size (integer)
+  
+   huge_page_size configuration parameter
+  
+  
+  
+   
+Controls what size of huge pages is used in conjunction with
+<xref linkend="guc-huge-pages"/>.
+The default is zero (0).
+When set to 0, the default huge page size on the system will
+be used.
+   
+   
+Some c

Re: [PATCH] Add support for choosing huge page size

2020-06-09 Thread Thomas Munro
On Wed, Jun 10, 2020 at 5:11 PM Thomas Munro  wrote:
> 3.  Last time I checked, Solaris and illumos seemed to have the same
> philosophy as FreeBSD and not give you explicit control; my info could
> be out of date, and I have no clue beyond that.

Ah, I was wrong about that one: memcntl(2) looks highly relevant, but
I'm not planning to look into that myself.




Re: [PATCH] Add support for choosing huge page size

2020-06-09 Thread Thomas Munro
On Wed, Jun 10, 2020 at 2:24 AM Odin Ugedal  wrote:
> Attached v2 of patch, updated with the comments from Thomas (again,
> thanks). I also changed the mmap flags to only set size if the
> selected huge page size is not the default one (on Linux). The support
> for this functionality was added in Linux 3.8, and therefore it was
> not supported before then. Should we add that to the docs, or what do
> you think? The definitions of MAP_HUGE_MASK and MAP_HUGE_SHIFT were
> added in Linux 3.8 too, but since they are a part of libc/musl, and
> are "used" at compile time, that shouldn't be a problem, or?

Oh, so maybe we need a configure test for them?  And if you don't have
it, a runtime error if you try to set the page size to something other
than 0 (like we do for effective_io_concurrency if you don't have a
posix_fadvise() function).

> If a huge page size that is not supported on the system is chosen via
> huge_page_size (and huge_pages = on), it will result in "FATAL:  could
> not map anonymous shared memory: Invalid argument". This is the same
> that happens today when huge pages aren't supported at all, so I guess
> it is ok for now (and then we can consider verifying that it is
> supported at a later stage).

If you set it to an unsupported size, that seems reasonable to me.  If
you set it to an unsupported size and have huge_pages=try, do we fall
back to using no huge pages?

> Also, thanks for the information about the Windows. Have been
> searching about info on huge pages in windows and "superpages" in bsd,
> without that much luck. I only have experience on linux, so I think we
> can do as you said, to let someone else look at it. :)

For what it's worth, here's what I know about this on other operating systems:

1.  AIX can do huge pages, but only if you use System V shared memory
(not for mmap() anonymous shared).  In
https://commitfest.postgresql.org/25/1960/ we got as far as adding
support for shared_memory_type=sysv, but to go further we'll need
someone willing to hack on the patch on an AIX system, preferably with
root access so they can grant the postgres user wired memory
privileges (or whatever they call that over there).  But at a glance,
they don't have a way to ask for a specific page size, just "large".

2.  FreeBSD doesn't currently have a way to ask for super pages
explicitly at all; it does something like Linux Transparent Huge
Pages, except that it's transparent.  It does seem to do a pretty good
job of putting PostgreSQL text/code, shared memory and heap memory
into super pages automatically on my systems.  One small detail is
that there is a flag MAP_ALIGNED_SUPER that might help get better
alignment; it'd be bad if the lower pages of our shared memory
happened to be the location of lock arrays, proc array, buffer mapping
or other largish and very hot stuff and also happened to be on 4kb
pages due to misalignment stuff, but I wonder if the flag is really
needed to avoid that on current FreeBSD or not.  I should probably go
and check some time!  (I have no clue for other BSDs.)

3.  Last time I checked, Solaris and illumos seemed to have the same
philosophy as FreeBSD and not give you explicit control; my info could
be out of date, and I have no clue beyond that.

4.  What I said above about Windows; the explicit page size thing
seems to be bleeding edge and barely documented.

5.  macOS does have flags to ask for super pages with various sizes,
but apparently such mappings are not inherited by child processes.  So
that's useless for us.

As for the relevance of all this to your patch, I think we just need a
check callback for the GUC, that says "ERROR: huge_page_size must be
set to 0 on this platform".




Re: [PATCH] Add support for choosing huge page size

2020-06-09 Thread Odin Ugedal
Hi,

Thank you so much for the feedback David and Thomas!

Attached v2 of patch, updated with the comments from Thomas (again,
thanks). I also changed the mmap flags to only set size if the
selected huge page size is not the default one (on Linux). The support
for this functionality was added in Linux 3.8, and therefore it was
not supported before then. Should we add that to the docs, or what do
you think? The definitions of MAP_HUGE_MASK and MAP_HUGE_SHIFT were
added in Linux 3.8 too, but since they are a part of libc/musl, and
are "used" at compile time, that shouldn't be a problem, or?

If a huge page size that is not supported on the system is chosen via
huge_page_size (and huge_pages = on), it will result in "FATAL:  could
not map anonymous shared memory: Invalid argument". This is the same
that happens today when huge pages aren't supported at all, so I guess
it is ok for now (and then we can consider verifying that it is
supported at a later stage).

Also, thanks for the information about the Windows. Have been
searching about info on huge pages in windows and "superpages" in bsd,
without that much luck. I only have experience on linux, so I think we
can do as you said, to let someone else look at it. :)

Odin
From 5cf1af94337523c2dcd6427d70ca5c589942a64c Mon Sep 17 00:00:00 2001
From: Odin Ugedal 
Date: Sun, 7 Jun 2020 21:04:57 +0200
Subject: [PATCH v2] Add support for choosing huge page size

This adds support for using non-default huge page sizes for shared
memory. This is achieved via the new "huge_page_size" config entry.
The config value defaults to 0, meaning it will use the system default.
---
 doc/src/sgml/config.sgml  | 27 
 doc/src/sgml/runtime.sgml | 41 +++-
 src/backend/port/sysv_shmem.c | 67 ++-
 src/backend/utils/misc/guc.c  | 11 +++
 src/backend/utils/misc/postgresql.conf.sample |  2 +
 src/include/storage/pg_shmem.h|  1 +
 6 files changed, 118 insertions(+), 31 deletions(-)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index aca8f73a50..42f06a41cb 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1582,6 +1582,33 @@ include_dir 'conf.d'
   
  
 
+ 
+  huge_page_size (integer)
+  
+   huge_page_size configuration parameter
+  
+  
+  
+   
+Controls what size of huge pages is used in conjunction with
+<xref linkend="guc-huge-pages"/>.
+The default is zero (0).
+When set to 0, the default huge page size on the system will
+be used.
+   
+   
+Some commonly available page sizes on modern 64 bit server architectures include:
+2MB and 1GB (Intel and AMD), 16MB and
+16GB (IBM POWER), and 64kB, 2MB,
+32MB and 1GB (ARM). For more information
+about usage and support, see <xref linkend="linux-huge-pages"/>.
+   
+   
+Controlling huge page size is currently not supported on Windows.
+   
+  
+ 
+
  
   temp_buffers (integer)
   
diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml
index 88210c4a5d..cbdbcb4fdf 100644
--- a/doc/src/sgml/runtime.sgml
+++ b/doc/src/sgml/runtime.sgml
@@ -1391,41 +1391,50 @@ export PG_OOM_ADJUST_VALUE=0
 using large values of .  To use this
 feature in PostgreSQL you need a kernel
 with CONFIG_HUGETLBFS=y and
-CONFIG_HUGETLB_PAGE=y. You will also have to adjust
-the kernel setting vm.nr_hugepages. To estimate the
-number of huge pages needed, start PostgreSQL
-without huge pages enabled and check the
-postmaster's anonymous shared memory segment size, as well as the system's
-huge page size, using the /proc file system.  This might
-look like:
+CONFIG_HUGETLB_PAGE=y. You will also have to pre-allocate
+huge pages with the desired huge page size. To estimate the number of
+huge pages needed, start PostgreSQL without huge
+pages enabled and check the postmaster's anonymous shared memory segment size,
+as well as the system's supported huge page sizes, using the
+/sys file system.  This might look like:
 
 $ head -1 $PGDATA/postmaster.pid
 4170
$ pmap 4170 | awk '/rw-s/ && /zero/ {print $2}'
 6490428K
+$ ls /sys/kernel/mm/hugepages
+hugepages-1048576kB  hugepages-2048kB
+
+
+ You can now choose between the supported sizes, 2MiB and 1GiB in this case.
+ By default PostgreSQL will use the default huge
+ page size on the system, but that can be configured via
+ <xref linkend="guc-huge-page-size"/>.
+ The default huge page size can be found with:
+
 $ grep ^Hugepagesize /proc/meminfo
 Hugepagesize:   2048 kB
 
+
+ For 2MiB,
  6490428 / 2048 gives approximately
  3169.154, so in this example we need at
  least 3170 huge pages, which we can set with:
 
-$ sysctl -w vm.nr_hugepages=3170
+$ echo 3170 | tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
 
 A larger settin

Re: [PATCH] Add support for choosing huge page size

2020-06-08 Thread Thomas Munro
On Tue, Jun 9, 2020 at 4:13 AM Odin Ugedal  wrote:
> This adds support for using non-default huge page sizes for shared
> memory. This is achieved via the new "huge_page_size" config entry.
> The config value defaults to 0, meaning it will use the system default.
> ---
>
> This would be very helpful when running in kubernetes since nodes may
> support multiple huge page sizes, and have pre-allocated huge page memory
> for each size. This lets the user select huge page size without having
> to change the default huge page size on the node. This will also be
> useful when doing benchmarking with different huge page sizes, since it
> wouldn't require a full system reboot.

+1

> Since the default value of the new config is 0 (resulting in using the
> default huge page size) this should be backwards compatible with old
> configs.

+1

> Feel free to comment on the phrasing (both in docs and code) and on the
> overall change.

This change seems good to me, because it will make testing easier and
certain mixed page size configurations possible.  I haven't tried your
patch yet; I'll take it for a spin when I'm benchmarking some other
relevant stuff soon.

Minor comments on wording:

> +   
> +Most modern linux systems support 2MB and 1GB
> +huge pages, and some architectures supports other sizes as well. For more information
> +on how to check for support and usage, see <xref linkend="linux-huge-pages"/>.

Linux with a capital L.  Hmm, I don't especially like saying "Most
modern Linux systems" as code for Intel.  I wonder if we should
instead say something like: "Some commonly available page sizes on
modern 64 bit server architectures include: 2MB and
1GB (Intel and AMD), 16MB and
16GB (IBM POWER), and ... (ARM)."

> +   
> +   
> +Controling huge page size is not supported on Windows.

Controlling

Just by the way, some googling is telling me that very recent versions
of Windows *could* support this (search keywords:
"NtAllocateVirtualMemoryEx 1GB"), so that could be a project for
someone who understands Windows to look into later.




[PATCH] Add support for choosing huge page size

2020-06-08 Thread Odin Ugedal
This adds support for using non-default huge page sizes for shared
memory. This is achieved via the new "huge_page_size" config entry.
The config value defaults to 0, meaning it will use the system default.
---

This would be very helpful when running in kubernetes since nodes may
support multiple huge page sizes, and have pre-allocated huge page memory
for each size. This lets the user select huge page size without having
to change the default huge page size on the node. This will also be
useful when doing benchmarking with different huge page sizes, since it
wouldn't require a full system reboot.

Since the default value of the new config is 0 (resulting in using the
default huge page size) this should be backwards compatible with old
configs.

Feel free to comment on the phrasing (both in docs and code) and on the
overall change.

 doc/src/sgml/config.sgml  | 25 ++
 doc/src/sgml/runtime.sgml | 41 +
 src/backend/port/sysv_shmem.c | 88 ---
 src/backend/utils/misc/guc.c  | 11 +++
 src/backend/utils/misc/postgresql.conf.sample |  2 +
 src/include/storage/pg_shmem.h|  1 +
 6 files changed, 120 insertions(+), 48 deletions(-)

diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml
index aca8f73a50..6177b819ce 100644
--- a/doc/src/sgml/config.sgml
+++ b/doc/src/sgml/config.sgml
@@ -1582,6 +1582,31 @@ include_dir 'conf.d'
   
  
 
+ 
+  huge_page_size (integer)
+  
+   huge_page_size configuration 
parameter
+  
+  
+  
+   
+Controls what size of huge pages is used in conjunction with
+<xref linkend="guc-huge-pages"/>.
+The default is zero (0).
+When set to 0, the default huge page size on the system will
+be used.
+   
+   
+Most modern linux systems support 2MB and 1GB
+huge pages, and some architectures supports other sizes as well. For more information
+on how to check for support and usage, see <xref linkend="linux-huge-pages"/>.
+   
+   
+Controling huge page size is not supported on Windows.  
+   
+  
+ 
+
  
   temp_buffers (integer)
   
diff --git a/doc/src/sgml/runtime.sgml b/doc/src/sgml/runtime.sgml
index 88210c4a5d..cbdbcb4fdf 100644
--- a/doc/src/sgml/runtime.sgml
+++ b/doc/src/sgml/runtime.sgml
@@ -1391,41 +1391,50 @@ export PG_OOM_ADJUST_VALUE=0
 using large values of .  To use this
 feature in PostgreSQL you need a kernel
 with CONFIG_HUGETLBFS=y and
-CONFIG_HUGETLB_PAGE=y. You will also have to adjust
-the kernel setting vm.nr_hugepages. To estimate the
-number of huge pages needed, start PostgreSQL
-without huge pages enabled and check the
-postmaster's anonymous shared memory segment size, as well as the system's
-huge page size, using the /proc file system.  This might
-look like:
+CONFIG_HUGETLB_PAGE=y. You will also have to pre-allocate
+huge pages with the desired huge page size. To estimate the number of
+huge pages needed, start PostgreSQL without huge
+pages enabled and check the postmaster's anonymous shared memory segment size,
+as well as the system's supported huge page sizes, using the
+/sys file system.  This might look like:
 
 $ head -1 $PGDATA/postmaster.pid
 4170
$ pmap 4170 | awk '/rw-s/ && /zero/ {print $2}'
 6490428K
+$ ls /sys/kernel/mm/hugepages
+hugepages-1048576kB  hugepages-2048kB
+
+
+ You can now choose between the supported sizes, 2MiB and 1GiB in this case.
+ By default PostgreSQL will use the default huge
+ page size on the system, but that can be configured via
+ <xref linkend="guc-huge-page-size"/>.
+ The default huge page size can be found with:
+
 $ grep ^Hugepagesize /proc/meminfo
 Hugepagesize:   2048 kB
 
+
+ For 2MiB,
  6490428 / 2048 gives approximately
  3169.154, so in this example we need at
  least 3170 huge pages, which we can set with:
 
-$ sysctl -w vm.nr_hugepages=3170
+$ echo 3170 | tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
 
 A larger setting would be appropriate if other programs on the machine
-also need huge pages.  Don't forget to add this setting
-to /etc/sysctl.conf so that it will be reapplied
-after reboots.
+also need huge pages. It is also possible to pre-allocate huge pages on boot
+by adding the kernel parameters hugepagesz=2M hugepages=3170.

 

 Sometimes the kernel is not able to allocate the desired number of huge
-pages immediately, so it might be necessary to repeat the command or to
-reboot.  (Immediately after a reboot, most of the machine's memory
-should be available to convert into huge pages.)  To verify the huge
-page allocation situation, use:
+pages immediately due to external fragmentation, so it might be necessary to
+repeat the command or to reboot. To verify the huge page allocation situation
+for a given size, use:
 
-$ grep Huge