On 2023/05/03 16:32, Gabriele Svelto wrote:
> On 03/05/23 01:49, ISHIKAWA,chiaki wrote:
>> Anyway, just relinking C-C TB takes time.
>> Before (with 16 GB assigned to the Linux guest inside VirtualBox):
>> almost 10 minutes, or slightly less; 550 seconds or so.
>> After (with 24 GB assigned to the Linux guest): 4 min 38 seconds.
>
> That's a long time just for linking! What linker are you using? I
> think either gold or lld should be able to link Thunderbird much
> faster than that.
>
> Gabriele
I stand corrected.
The number was not the linking time per se.
Let me explain how I obtained the numbers.
BTW, I use GNU gold and mold.
In order to obtain a ballpark figure for the typical build time after
downloading the daily changes, I tried the following: I clobbered the
tree, reconfigured it, and then rebuilt. By that point, the compiled
object files from the previous sessions are in the caches of ccache and
sccache (I am not sure whether I am still using the latter).
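Concretely, the sequence was the standard mach one:

    $ ./mach clobber
    $ ./mach configure
    $ ./mach build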
So basically the time is for rebuilding the C-C TB binary: traversing
the new source tree, invoking ccache wherever a compilation is needed
(i.e. fetching the cached object file when possible), and then linking
the binaries.
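One can check how much of such a rebuild is actually served from the
cache, since ccache keeps statistics (a minimal check, using current
ccache option names, which may differ in older versions):

    $ ccache --zero-stats    # reset the counters before the rebuild
    $ ./mach build
    $ ccache --show-stats    # hit/miss counts show how many compilations were cached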
It is a bit of a strange number, but it is the baseline figure for the
case where many files remain identical modulo whitespace changes, so
ccache can save the recompilation time. That holds for maybe more than
half of the files after a daily source tree update.
Of course, depending on which header files changed, I sometimes have to
recompile almost all of the files.
Halving the elapsed time of this workflow was, by itself, the merit of
adding an extra 8 GB to my Linux image (from 16 GB to 24 GB).
Anyway, the typical link time alone is about a minute (76 seconds,
including the various chores of my bash script that sets environment
variables, etc.).
I found this out by running mach build again right after the above
clobber/configure/build scenario. In that case the household chores
diminish, and the time drops from about 450 seconds to 76 seconds.
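That is, measured like this (the ~76 s figure is from my machine, of
course):

    $ time ./mach build    # run again immediately after a successful build;
                           # mostly script chores plus the final relink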
I think the initial build process records some file system update
information (or the previous compilation updates the object file
timestamps), so that the second build no longer has to invoke ccache as
often, and maybe even skips some subdirectories entirely.
Like I said, if I need to compile many files, that would be CPU-bound
and the build time would be much longer.
In any case, I reviewed the page fault rates during today's build, and
the cargo (Rust) library processing seems to generate many page faults.
Even with 24 GB of memory I see sustained page faults in the xosview
window.
With 16 GB of memory I used to see really long periods of sustained page
faults and wondered what they were.
(This is what makes me wonder whether I am actually using GNU gold or
mold for the cargo linking.)
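To put numbers on this instead of eyeballing xosview, standard Linux
tools can report the fault counts (a sketch; nothing here is specific
to mach):

    $ vmstat 1                       # watch the si/so columns for swap traffic during the build
    $ /usr/bin/time -v ./mach build  # prints major/minor page fault totals at the end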
So the number I gave and its description were a bit misleading, but the
point stands: more memory, at least up to 24 GB, is a plus for the C/C++
edit/compile/build cycle in M-C and C-C development.
Actually, I usually create C-C patches, but the build time includes the
M-C tree recompilation. Thus I believe the merit of the added memory
holds true for FF developers, too.
How can I find out whether I am using GNU gold or mold for the Rust
library linking (the cargo processing, that is)?
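One thing I plan to try: linkers usually leave an identifying mark in
the binaries they produce, so readelf on the final library should tell
(a sketch; the obj-* path is just where my object directory happens to
be, and the exact section contents depend on the linker version):

    $ readelf -p .comment obj-*/dist/bin/libxul.so
      # mold and lld record an identifying string in the .comment section
    $ readelf -S obj-*/dist/bin/libxul.so | grep gold
      # GNU gold adds a .note.gnu.gold-version section instead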
Chiaki