On 2019-Sep-26, at 13:29, Mark Johnston <markj at FreeBSD.org> wrote:

> On Wed, Sep 25, 2019 at 10:03:14PM -0700, Mark Millard wrote:
>>
>> On 2019-Sep-25, at 20:27, Mark Millard <marklmi at yahoo.com> wrote:
>>
>>> On 2019-Sep-25, at 19:26, Mark Millard <marklmi at yahoo.com> wrote:
>>>
>>>> On 2019-Sep-25, at 10:02, Mark Johnston <markj at FreeBSD.org> wrote:
>>>>
>>>>> On Mon, Sep 23, 2019 at 01:28:15PM -0700, Mark Millard via freebsd-amd64 wrote:
>>>>>> Note: I have access to only one FreeBSD amd64 context, and it is also my only access to a NUMA context: 2 memory domains. A Threadripper 1950X context. Also: I have only a head FreeBSD context on any architecture, not 12.x or before. So I have limited compare/contrast material.
>>>>>>
>>>>>> I present the below basically to ask if the NUMA handling has been validated, or if it is going to be, at least for contexts that might apply to ThreadRipper 1950X and analogous contexts. My results suggest it has not been (or libc++'s now() times get messed up such that it looks like NUMA mishandling, since this is based on odd benchmark results that involve mean time for laps, using a median of such across multiple trials).
>>>>>>
>>>>>> I ran a benchmark on both Fedora 30 and FreeBSD 13 on this 1950X and got expected results on Fedora but odd ones on FreeBSD. The benchmark is a variation on the old HINT benchmark, spanning the old multi-threading variation. I later tried Fedora because the FreeBSD results looked odd. The other architectures I tried FreeBSD benchmarking with did not look odd like this. (powerpc64 on an old PowerMac with 2 sockets and 2 cores per socket, aarch64 Cortex-A57 Overdrive 1000, Cortex-A53 Pine64+ 2GB, armv7 Cortex-A7 Orange Pi+ 2nd Ed. For these I used 4 threads, not more.)
>>>>>>
>>>>>> I tend to write in terms of plots made from the data instead of the raw benchmark data.
>>>>>>
>>>>>> FreeBSD testing based on:
>>>>>> cpuset -l0-15  -n prefer:1
>>>>>> cpuset -l16-31 -n prefer:1
>>>>>>
>>>>>> Fedora 30 testing based on:
>>>>>> numactl --preferred 1 --cpunodebind 0
>>>>>> numactl --preferred 1 --cpunodebind 1
>>>>>>
>>>>>> While I have more results, I reference primarily DSIZE and ISIZE being unsigned long long and also both being unsigned long as examples. Variations in results are not from the type differences for any LP64 architectures. (But they give an idea of benchmark variability in the test context.)
>>>>>>
>>>>>> The Fedora results solidly show the bandwidth limitation of using one memory controller. They also show the latency consequences for the remote memory domain case vs. the local memory domain case. There is not a lot of variability between the examples of the 2 type-pairs used for Fedora.
>>>>>>
>>>>>> Not true for FreeBSD on the 1950X:
>>>>>>
>>>>>> A) The latency-constrained part of the graph looks to normally be using the local memory domain when -l0-15 is in use for 8 threads.
>>>>>>
>>>>>> B) Both the -l0-15 and the -l16-31 parts of the graph for 8 threads that should be bandwidth limited show mostly examples that would have to involve both memory controllers for the bandwidth to get the results shown, as far as I can tell. There is also wide variability, ranging between the expected 1-controller result and, say, what a 2-controller round-robin would be expected to produce.
>>>>>>
>>>>>> C) Even the single-threaded result shows a higher result for larger total bytes for the kernel vectors. Fedora does not.
>>>>>>
>>>>>> I think that (B) is the most solid evidence for something being odd.
>>>>>
>>>>> The implication seems to be that your benchmark program is using pages from both domains despite a policy which preferentially allocates pages from domain 1, so you would first want to determine if this is actually what's happening. As far as I know we currently don't have a good way of characterizing per-domain memory usage within a process.
>>>>>
>>>>> If your benchmark uses a large fraction of the system's memory, you could use the vm.phys_free sysctl to get a sense of how much memory from each domain is free.
>>>>
>>>> The ThreadRipper 1950X has 96 GiBytes of ECC RAM, so 48 GiBytes per memory domain. I've never configured the benchmark such that it even reaches 10 GiBytes on this hardware. (It stops for a time constraint first, based on the values in use for the "adjustable" items.)
>>>>
>>>> . . . (much omitted material) . . .
>>>>
>>>>> Another possibility is to use DTrace to trace the requested domain in vm_page_alloc_domain_after(). For example, the following DTrace one-liner counts the number of pages allocated per domain by ls(1):
>>>>>
>>>>> # dtrace -n 'fbt::vm_page_alloc_domain_after:entry /progenyof($target)/{@[args[2]] = count();}' -c "cpuset -n rr ls"
>>>>> ...
>>>>>        0               71
>>>>>        1               72
>>>>> # dtrace -n 'fbt::vm_page_alloc_domain_after:entry /progenyof($target)/{@[args[2]] = count();}' -c "cpuset -n prefer:1 ls"
>>>>> ...
>>>>>        1              143
>>>>> # dtrace -n 'fbt::vm_page_alloc_domain_after:entry /progenyof($target)/{@[args[2]] = count();}' -c "cpuset -n prefer:0 ls"
>>>>> ...
>>>>>        0              143
>>>>
>>>> I'll think about this, although it would give no information about which CPUs are executing the threads that are allocating or accessing the vectors for the integration kernel. So, for example, if the threads migrate to or start out on CPUs they should not be on, this would not report such.
>>>>
>>>> For such "which CPUs" questions one stab would be simply to watch with top while the benchmark is running and see which CPUs end up being busy vs. which do not. I think I'll try this.
>>>
>>> Using top did not show evidence of the wrong CPUs being actively in use.
>>>
>>> My variation of top is unusual in that it also tracks some maximum observed figures and reports them, here being:
>>>
>>> 8804M MaxObsActive, 4228M MaxObsWired, 13G MaxObs(Act+Wir)
>>>
>>> (No swap use was reported.) This gives a system-level view of about how much RAM was put to use during the monitoring of the 2 benchmark runs (-l0-15 and -l16-31). Nowhere near enough was used to require both memory domains to be in use.
>>>
>>> Thus, it would appear to be just where the allocations are made for -n prefer:1 that matters, at least when a (temporary) thread does the allocations.
>>>
>>>>> This approach might not work for various reasons depending on how exactly your benchmark program works.
>>>
>>> I've not tried dtrace yet.
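(The per-domain counts quoted just below were gathered with a one-liner of roughly this form, adapted from the ls(1) example quoted above; the probe is unchanged and only the -c command differs. The benchmark invocation shown here is illustrative, not the literal command line:)

# dtrace -n 'fbt::vm_page_alloc_domain_after:entry /progenyof($target)/{@[args[2]] = count();}' -c "cpuset -l0-15 -n prefer:1 ./benchmark_binary"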
>> Well, for an example -l0-15 -n prefer:1 run for just the 8 threads benchmark case . . .
>>
>> dtrace: pid 10997 has exited
>>
>>        0              712
>>        1          6737529
>>
>> Something is leading to domain 0 allocations, despite -n prefer:1.
>
> You can get a sense of where these allocations are occurring by changing the probe to capture kernel stacks for domain 0 page allocations:
>
> fbt::vm_page_alloc_domain_after:entry /progenyof($target) && args[2] == 0/{@[stack()] = count();}
>
> One possibility is that these are kernel memory allocations occurring in the context of the benchmark threads. Such allocations may not respect the configured policy since they are not private to the allocating thread. For instance, upon opening a file, the kernel may allocate a vnode structure for that file. That vnode may be accessed by threads from many processes over its lifetime, and may be recycled many times before its memory is released back to the allocator.

For -l0-15 -n prefer:1 :

Looks like this reports sys_thr_new activity, sys_cpuset activity, and 0xffffffff80bc09bd activity (whatever that is). Mostly sys_thr_new activity, over 1300 of them . . .

dtrace: pid 13553 has exited

              kernel`uma_small_alloc+0x61
              kernel`keg_alloc_slab+0x10b
              kernel`zone_import+0x1d2
              kernel`uma_zalloc_arg+0x62b
              kernel`thread_init+0x22
              kernel`keg_alloc_slab+0x259
              kernel`zone_import+0x1d2
              kernel`uma_zalloc_arg+0x62b
              kernel`thread_alloc+0x23
              kernel`thread_create+0x13a
              kernel`sys_thr_new+0xd2
              kernel`amd64_syscall+0x3ae
              kernel`0xffffffff811b7600
                2

              kernel`uma_small_alloc+0x61
              kernel`keg_alloc_slab+0x10b
              kernel`zone_import+0x1d2
              kernel`uma_zalloc_arg+0x62b
              kernel`cpuset_setproc+0x65
              kernel`sys_cpuset+0x123
              kernel`amd64_syscall+0x3ae
              kernel`0xffffffff811b7600
                2

              kernel`uma_small_alloc+0x61
              kernel`keg_alloc_slab+0x10b
              kernel`zone_import+0x1d2
              kernel`uma_zalloc_arg+0x62b
              kernel`uma_zfree_arg+0x36a
              kernel`thread_reap+0x106
              kernel`thread_alloc+0xf
              kernel`thread_create+0x13a
              kernel`sys_thr_new+0xd2
              kernel`amd64_syscall+0x3ae
              kernel`0xffffffff811b7600
                6

              kernel`uma_small_alloc+0x61
              kernel`keg_alloc_slab+0x10b
              kernel`zone_import+0x1d2
              kernel`uma_zalloc_arg+0x62b
              kernel`uma_zfree_arg+0x36a
              kernel`vm_map_process_deferred+0x8c
              kernel`vm_map_remove+0x11d
              kernel`vmspace_exit+0xd3
              kernel`exit1+0x5a9
              kernel`0xffffffff80bc09bd
              kernel`amd64_syscall+0x3ae
              kernel`0xffffffff811b7600
                6

              kernel`uma_small_alloc+0x61
              kernel`keg_alloc_slab+0x10b
              kernel`zone_import+0x1d2
              kernel`uma_zalloc_arg+0x62b
              kernel`thread_alloc+0x23
              kernel`thread_create+0x13a
              kernel`sys_thr_new+0xd2
              kernel`amd64_syscall+0x3ae
              kernel`0xffffffff811b7600
               22

              kernel`vm_page_grab_pages+0x1b4
              kernel`vm_thread_stack_create+0xc0
              kernel`kstack_import+0x52
              kernel`uma_zalloc_arg+0x62b
              kernel`vm_thread_new+0x4d
              kernel`thread_alloc+0x31
              kernel`thread_create+0x13a
              kernel`sys_thr_new+0xd2
              kernel`amd64_syscall+0x3ae
              kernel`0xffffffff811b7600
             1324

For -l16-31 -n prefer:1 :

Again, exactly 2. Both being sys_cpuset . . .

dtrace: pid 13594 has exited

              kernel`uma_small_alloc+0x61
              kernel`keg_alloc_slab+0x10b
              kernel`zone_import+0x1d2
              kernel`uma_zalloc_arg+0x62b
              kernel`cpuset_setproc+0x65
              kernel`sys_cpuset+0x123
              kernel`amd64_syscall+0x3ae
              kernel`0xffffffff811b7600
                2

> Given the low number of domain 0 allocations I am skeptical that they are responsible for the variability in your results.
>
>> So I tried -l16-31 -n prefer:1 and it got:
>>
>> dtrace: pid 11037 has exited
>>
>>        0                2
>>        1          8055389
>>
>> (The larger number of allocations is not a surprise: more work done in about the same overall time based on faster memory access generally.)
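For completeness, the stack captures above used Mark's domain 0 probe, invoked with a command of roughly this form (again, the benchmark invocation shown is only illustrative; for the -l16-31 run the CPU list differs accordingly):

# dtrace -n 'fbt::vm_page_alloc_domain_after:entry /progenyof($target) && args[2] == 0/{@[stack()] = count();}' -c "cpuset -l0-15 -n prefer:1 ./benchmark_binary"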
===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went away in early 2018-Mar)
Re: head -r352341 example context on ThreadRipper 1950X: cpuset -n prefer:1 with -l 0-15 vs. -l 16-31 odd performance?
Mark Millard via freebsd-amd64 Thu, 26 Sep 2019 17:06:23 -0700