================
@@ -1817,27 +1804,29 @@ But with optimizations, things are different:
                 │                                       │
                 └---------------------------------------┘
 
-It would be very unfortunate if we end up with worse performance after using modules.
-The main concern is that when we compile a source file, the compiler needs to see the function body
-of imported module units so that it can perform IPO (InterProcedural Optimization, primarily inlining
-in practice) to optimize functions in current source file with the help of the information provided by
-the imported module units.
-In other words, the imported code would be processed again and again in importee units
-by optimizations (including IPO itself).
-The optimizations before IPO and the IPO itself are the most time-consuming part in whole compilation process.
-So from this perspective, we might not be able to get the improvements described in the theory.
-But we could still save the time for optimizations after IPO and the whole backend.
-
-Overall, at ``O0`` the implementations of functions defined in a module will not impact module users,
-but at higher optimization levels the definitions of such functions are provided to user compilations for the
-purposes of optimization (but definitions of these functions are still not included in the use's object file)-
-this means the build speedup at higher optimization levels may be lower than expected given ``O0`` experience,
-but does provide by more optimization opportunities.
+It would be very unfortunate if we end up with worse performance when using
+modules. The main concern is that when a source file is compiled, the compiler
+needs to see the body of imported module units so that it can perform IPO
+(InterProcedural Optimization, primarily inlining in practice) to optimize
+functions in the current source file with the help of the information provided
+by the imported module units. In other words, the imported code would be
+processed again and again in importee units by optimizations (including IPO
itself). The optimizations before IPO and IPO itself are the most time-consuming
+part of the whole compilation process. So from this perspective, it might not be
+possible to get the compile time improvements described in the theory, but
+there could be time savings for optimizations after IPO and the whole backend.
+
+Overall, at ``-O0`` the implementations of functions defined in a module will
+not impact module users, but at higher optimization levels the definitions of
+such functions are provided to user compilations for the purposes of
+optimization (but definitions of these functions are still not included in the
+use's object file) -- this means the build speedup at higher optimization
----------------
Endilll wrote:

```suggestion
use's object file). This means the build speedup at higher optimization
```
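For context, the ``-O0`` vs. higher-optimization behavior the patch describes can be sketched with a minimal named module. The file names and function are illustrative, not from the patch; the build commands follow Clang's two-phase compilation style and may vary by toolchain version:

```cpp
// m.cppm -- hypothetical module interface unit.
export module m;

// At -O0, importers only need this function's declaration; its body is
// ignored when compiling them. At -O2, the body is made available to
// importers so the inliner (IPO) can use it -- which is why changing it
// can cause importers to be re-optimized, even though the body is still
// not emitted into the importer's object file.
export int square(int x) { return x * x; }

// use.cpp -- an importer.
//   import m;
//   int f() { return square(21); }  // inlining candidate at -O2
```

A typical two-phase build would be along the lines of ``clang++ -std=c++20 m.cppm --precompile -o m.pcm``, then compiling importers with ``-fmodule-file=m=m.pcm``.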

https://github.com/llvm/llvm-project/pull/90237
_______________________________________________
cfe-commits mailing list
cfe-commits@lists.llvm.org
https://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-commits