sammccall added a comment.

Thanks for putting this together, I'm going to study it carefully and try it 
out!

That said, there are two large issues that I think should be addressed in the 
design (though not necessarily *implemented* now).
I'll be upfront: these are things without which $EMPLOYER will not be able to 
use this at all.
Since there isn't really room for multiple implementations of modules support in 
clangd, I'd like to make sure these fit into the design, or can be bolted on.

In the past, we've been reasonably successful at finding extension points that 
let clangd scale to unreasonable codebase sizes, while doing the right thing 
for smaller projects too. (Index, build system integration, VFS support, etc). 
To quote from the bug:

> you will need an alternative for them (luckily most of them already are 
> willing to invest in custom tooling), but scanning all project files should 
> still be fine for the vast majority of clangd users

For historical context: clangd **was** that custom tooling, designed to scale to 
huge monorepos. This isn't a use case we can simply give up on.

---

**support for clang header modules**

Google has a large deployment of clangd, serving a codebase that builds 
significant parts as clang header modules for performance reasons. There is no 
near-term plan to adopt C++20 modules (for multiple reasons, which I probably 
can't represent well).
We've been disabling header modules for clangd & other tools for a long time. 
But it's come to a head, and we expect to get someone working on enabling 
modules in clangd in ~2 months.
As I understand it, Meta are in a similar situation (though their solution is 
to version-lock clangd with the toolchain, keep modules enabled, and accept 
that some things are broken).

Our specific build-system setup is that all module-build actions and inputs are 
explicit (-fmodule-file=, -fmodule-map-file=). The build system will not 
produce PCMs that clangd can use, due to version skew.
I know that Meta folks *would* like to make use of available PCMs from the 
build system.
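
To make that concrete, here's a rough sketch of the shape of compile command 
involved (the flags are real clang flags, but the module name and paths are 
made up for illustration):

  clang++ -fmodules -fno-implicit-modules -fno-implicit-module-maps \
    -fmodule-map-file=foo/module.modulemap \
    -fmodule-file=prebuilt/foo.pcm \
    -c bar.cc -o bar.o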

---

**support for large projects**

We have multiple codebases large enough that touching every file to discover 
dependencies isn't feasible. 
The largest internal one had 2B LOC 7 years ago and is now much larger; even 
Chrome, for example, has ~20M. Such projects are prone to adopt modules (in 
some form) for build scalability.
Apart from the concrete scanning of deps, keeping the full project module graph 
in memory won't always be possible. (It's a perfectly reasonable default 
implementation though).
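
To illustrate the kind of extension point I have in mind (everything below is a 
hypothetical sketch, not a proposal of concrete names for this patch): the 
default implementation scans the project and keeps the graph in memory, while a 
build-system-backed provider could answer per-file queries lazily.

  // Hypothetical sketch only; all names are illustrative.
  #include <string>
  #include <vector>

  struct ModuleDependency {
    std::string Name;    // logical module name
    std::string PCMPath; // empty if clangd must build the PCM itself
  };

  class ModuleDependencyProvider {
  public:
    virtual ~ModuleDependencyProvider() = default;
    // Dependencies needed to compile SourceFile, computed on demand rather
    // than read from a whole-project scan held in memory.
    virtual std::vector<ModuleDependency>
    getDependencies(const std::string &SourceFile) = 0;
  };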

