From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
on behalf of Szilárd Páll
Sent: Friday, April 24, 2020 6:06 PM
To: Discussion list for GROMACS users
Cc: gromacs.org_gmx-users@maillist.sys.kth.se
Subject: Re: [gmx-users] GROMACS performance issues on POWER9/V100 node
Hi,
Affinity settings on the Talos II with Ubuntu 18.04 (kernel 5.0) work fine.
I get threads pinned where they should be (confirmed with hwloc) and consistent
results. I also get reasonable thread placement even without pinning (i.e.
the kernel scatters first until #threads <= #hwthreads). I see only a [...]
> The following lines are found in md.log for the POWER9/V100 run:
>
> Overriding thread affinity set outside gmx mdrun
> Pinning threads with an auto-selected logical core stride of 128
> NOTE: Thread affinity was not set.
>
> The full md.log is available here:
> https://github.com/jdh4/running_gr
> >> different values of -pinoffset for 2019.6.
> >>
> >> I know a group at NIST is having the same or similar problems with
> >> POWER9/V100.
> >>
> >> Jon
> >>
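
For reference, a minimal sketch of letting mdrun handle its own pinning on a
4-way SMT POWER9 node (the run name, thread count, and stride below are
illustrative assumptions, not values taken from this thread):

    # one thread-MPI rank, 32 OpenMP threads, one thread per physical core (SMT4)
    gmx mdrun -deffnm rnase -ntmpi 1 -ntomp 32 -pin on -pinstride 4 -pinoffset 0

    # confirm where the threads actually landed
    hwloc-ps -t

When an affinity mask has already been set outside mdrun (e.g. by the batch
system), mdrun reports that it is overriding it, as in the md.log lines quoted
above.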

From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
on behalf of Szilárd Páll
Sent: Friday, April 24, 2020 10:23 AM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] GROMACS performance issues on POWER9/V100 node
Using a single thread per GPU, as the linked log files show, is not
sufficient for GROMACS (and any modern [...]
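
As an illustration only (the thread count and run name are assumptions, not a
recommendation from this thread), a launch that gives the GPU a group of CPU
cores rather than a single thread could look like:

    # one rank driving one V100 with 16 OpenMP threads, non-bonded and PME offloaded
    gmx mdrun -deffnm rnase -ntmpi 1 -ntomp 16 -nb gpu -pme gpu -pin on
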
From: gromacs.org_gmx-users-boun...@maillist.sys.kth.se
on behalf of Kevin Boyd
Sent: Thursday, April 23, 2020 9:08 PM
To: gmx-us...@gromacs.org
Subject: Re: [gmx-users] GROMACS performance issues on POWER9/V100 node
Hi,
Can you post the full log for the Intel system? I typically find the real
cycle and time accounting section a better place to start debugging
performance issues.
A couple of quick notes, but I need a side-by-side comparison for more useful
analysis, and these points may apply to both systems, so [...]
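
If it helps with that side-by-side comparison: the cycle and time accounting
table and the performance summary are printed at the end of md.log, so
something as simple as the following pulls them out of both logs (the file
paths are assumptions):

    # accounting table and ns/day summary sit at the end of each run log
    tail -n 60 power9_v100/md.log
    tail -n 60 broadwell_p100/md.log

    # or compare just the headline ns/day lines
    grep "Performance:" power9_v100/md.log broadwell_p100/md.log
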
We are finding that GROMACS (2018.x, 2019.x, 2020.x) performs worse on an IBM
POWER9/V100 node than on an Intel Broadwell/P100 node. Both are running RHEL 7.7
and Slurm 19.05.5. We have no concerns about GROMACS on our Intel nodes.
Everything below is about the POWER9/V100 node.
We ran the RNASE b[...]