Hi Nathan, > This is an interesting design. It appears similar to how we'd > envisioned implementing openacc support -- namely leverage the LTO > machinery to communicate from the host compiler to the device > compiler. Your design looks more detailed, which is good. Thanks, do you have a similar description of your design? It would be pretty interesting to take a look at it.
> Are you envisioning the device compilers to be stand alone > compilers, built separately. Or are you envisioning extending the > configuration machinery by adding something like > --enable-acclerator=<list> so that: > .../configure --target=x86_64-linux --enable-accelerator=foo,baz > causes > * a build of an x86_64 compiler aware of the foo and baz accelerators > * build of an appropriate runtime support library > * a build of a foo lto accelerator backend, assembler (and linker?) > * (if needed) build of a foo support library > * a build of a baz lto accelerator backend > * (if needed) build of a baz support library, assembler (and linker?) > > or are you expecting something more like 3 separate configures & build? > .../configure --target=x86_64-linux --enable-accelerator=foo,baz > .../configure --target=foo --enable-languages=lto-accelerator > .../configure --target=baz --enable-languages=lto-accelerator > > I'd been imagining the former scheme, as it provides for a more > integrated build, fwiw. That's an open question, and we'd like to clarify it too. We'd appreciate any inputs on this. Personally, I see actually one more option. Similar of how libgomp figures out which runtimes are available (by looking for the corresponding plugins), we could look for available target compilers at compile-time and produce as many target images as number of compilers we have. Thus, we won't need to rebuild host compiler to support more targets - we'd just need to place the corresponding target compiler somewhere. That looks more like your second option, but differs a bit from it in that we don't need to specify enabled accelerators. Michael > nathan