Hi! ... with the usual caveat that I know much more about OpenACC than OpenMP, and I know (at least a bit) more about nvptx than GCN... ;-)
On 2022-03-02T15:12:30+0000, "Stubbs, Andrew" <andrew_stu...@mentor.com> wrote:
> Has anyone ever considered having GCC add the "simd" clause to offload (or
> regular) loop nests automatically?
>
> For example, something like "-fomp-auto-simd" would transform "distribute
> parallel" to "distribute parallel simd" automatically. Loop nests that
> already contain "simd" clauses or directives would remain unchanged, most
> likely.
>
> The reason I ask is that other toolchains have chosen to use a "SIMT" model
> for GPUs, which means that OpenMP threads map to individual vector lanes
> and are therefore strictly scalar. The result is that the "simd" directive
> is irrelevant and lots of code out there isn't using it at all (so I'm
> told). Meanwhile, in GCC we map OpenMP threads to Nvidia warps and AMD GCN
> wavefronts, so it is impossible to get full performance without explicitly
> specifying the "simd" directive. We therefore suffer in direct comparisons.
>
> I'm of the opinion that GCC is the one implementing OpenMP as intended

I'm curious: how does one arrive at this conclusion?

For example, in addition to intra-warp thread parallelism, nvptx also has a
few SIMD instructions: data transfer (combining two adjacent 32-bit transfers
into one 64-bit transfer), and also some basic arithmetic; I'd have to look
up the details. It's not much, but it's something that GCC's SLP vectorizer
can use. (Tom worked on that, years ago.) Using that to implement OpenMP's
SIMD (quite likely via default (SLP) auto-vectorization), you'd then indeed
get for actual OpenMP threads what you described as the "SIMT" model above.

Why not change GCC to do the same, if that's the common understanding of how
OpenMP for GPUs should be done, as implemented by other compilers?

Regards
Thomas

> but all the same I need to explore our options here, figure out what the
> consequences would be, and plan a project to do what we can.
> I've thought of simply enabling "-ftree-vectorize" on AMD GCN (this doesn't
> help NVPTX), but I think that is sub-optimal because things like the OpenMP
> scheduler really need to be aware of the vector size, and there are
> probably other ways in which parallel regions can be better formed with
> regard to the vectorizer. If these features don't exist right now, then I
> have an opportunity to include them in our upcoming project.
>
> Any info/suggestions/advice would be appreciated.
>
> Thanks
>
> Andrew

-----------------
Siemens Electronic Design Automation GmbH; address: Arnulfstraße 201, 80634 Munich; limited liability company; managing directors: Thomas Heurung, Frank Thürauf; registered office: Munich; commercial register Munich, HRB 106955