Good point about the NUMA part. My dd example may well end up allocating 
memory for the file in /tmp on one socket or another, causing all the reads 
to hit that one socket's memory channels.

For a good spread, a C program will probably be best: allocate memory from 
the local NUMA node, and run multiple threads reading that memory to clock 
the speed. Then run one copy of this program bound to each socket (with 
numactl -N) and sum up the numbers, for an idealized max memory bandwidth 
figure that involves no cross-socket access.
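
Something along these lines would be a starting point (an untested sketch: 
it assumes libnuma and its headers are installed, and the thread count, 
buffer size, and pass count are placeholders to tune for the machine):

/* membw.c - per-socket streaming-read bandwidth sketch.
 * Build:  gcc -O2 -pthread membw.c -lnuma
 * Run one copy bound to each socket and add the results, e.g.:
 *   numactl -N 0 -m 0 ./membw
 *   numactl -N 1 -m 1 ./membw
 */
#include <numa.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

#define NTHREADS  8                     /* threads per socket - tune */
#define BUF_BYTES (1UL << 30)           /* 1 GiB per thread - tune */
#define PASSES    16

static void *reader(void *arg)
{
    volatile uint64_t *buf = arg;
    size_t words = BUF_BYTES / sizeof(uint64_t);
    uint64_t sum = 0;
    for (int p = 0; p < PASSES; p++)
        for (size_t i = 0; i < words; i++)
            sum += buf[i];              /* sequential reads, prefetch-friendly */
    return (void *)(uintptr_t)sum;      /* keep the compiler from eliding the loop */
}

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "libnuma not available\n");
        return 1;
    }
    pthread_t tid[NTHREADS];
    void *bufs[NTHREADS];
    for (int i = 0; i < NTHREADS; i++) {
        bufs[i] = numa_alloc_local(BUF_BYTES);  /* memory on this node only */
        memset(bufs[i], 1, BUF_BYTES);          /* fault pages in before timing */
    }
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, reader, bufs[i]);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double gb = (double)NTHREADS * BUF_BYTES * PASSES / 1e9;
    printf("%.1f GB/s\n", gb / secs);
    for (int i = 0; i < NTHREADS; i++)
        numa_free(bufs[i], BUF_BYTES);
    return 0;
}

The -m flag pins the allocations to the matching node as well, so even if the 
local-allocation policy were overridden somewhere, the reads stay on-socket.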

On Tuesday, January 16, 2018 at 10:54:22 AM UTC-8, Chet L wrote:
>
> Agree with Gil (w.r.t. DIMM slots, channels, etc.) but maybe not on the 'dd' 
> script. Here's why. Some BIOSes have the 'memory interleaving' option turned 
> OFF. And if the OS-wide memory policy is non-interleaving too, then unless 
> your application explicitly binds memory to a remote socket you cannot 
> interleave memory. Or you would need to use NUMA tools (to set the memory 
> policy etc.) while launching your application.
>
> Bandwidth or latency monitoring also depends on the workload you are 
> running. If the workload a) runs atomics and b) is running on Node-0 only 
> (memory is local or striped), then the snoop responses are going to take 
> longer (uncore frequency etc.) because socket-1 might be in power-saving 
> states. So ideally you would need a 'snoozer' thread on the remote socket 
> (Node-1) to prevent that socket from entering one of the 'C' (or whatever) 
> states (or you can disable hardware power-saving modes - but you may need 
> to line up all the options, because the kernel may have power-saving 
> options too). If you use industry-standard tools like 'stream' (as others 
> mentioned) they will do all of this for you (via dummy/snoozer threads and 
> so on).
>
> If you want to write this all by yourself then you should know the NUMA 
> APIs (http://man7.org/linux/man-pages/man3/numa.3.html) and the NUMA tools 
> (numactl, taskset). For latency measurements you should also disable 
> prefetching, or else it will give you super awesome (but misleading) 
> numbers.
>
> Note: in the future, if you use a device driver that does the allocation 
> for your app, then you should make sure the driver knows about NUMA 
> allocation too and aligns everything for you. I found that out the hard way 
> back in 2010 and realized that the Linux kernel had no guidance for PCIe 
> drivers (back then ... it's OK now). I fixed it locally at the time.
>
> Hope this helps.
>
> Chetan Loke
>
>
> On Monday, January 15, 2018 at 11:19:53 AM UTC-5, Kevin Bowling wrote:
>>
>> lmbench works well (http://www.bitmover.com/lmbench/man_lmbench.html), 
>> and Larry seems happy to answer questions on building/using it. 
>>
>> Unless you've explicitly built an application to work with NUMA, or are 
>> able to run two copies of an application pinned to each domain, you will 
>> really only get about one package's worth of BW, and latency is a bigger 
>> deal (which lmbench can also measure in cooperation with numactl). 
>>
>> Regards, 
>>
>> On Sun, Jan 14, 2018 at 11:44 AM, Peter Veentjer <alarm...@gmail.com> 
>> wrote: 
>> > I'm working on some very simple aggregations over huge chunks of 
>> > off-heap memory (500GB+) for a hackathon. This is done using a very 
>> > simple stride; every iteration the address increases by 20 bytes, so 
>> > the prefetcher should not have any problems with it. 
>> > 
>> > According to my calculations I'm currently processing 35 GB/s. However 
>> > I'm not sure if I'm close to the maximum bandwidth of this machine. 
>> > Specs: 
>> > 2133 MHz, 24x HP 32GiB 4Rx4 PC4-2133P 
>> > 2x Intel(R) Xeon(R) CPU E5-2687W v3, 3.10GHz, 10 cores per socket 
>> > 
>> > What is the best tool to determine the maximum bandwidth of a machine 
>> > running Linux (RHEL 7)? 
>> > 
>>
>
