How big does a project in Nim have to get before re-compilation after small 
modifications slows down to a crawl?

I know C barely slows down at all, because it separates out a linking phase: 
object files can be prepared ahead of time and then re-linked quickly without 
recompiling from source. But Nim has situations where the underlying C source 
changes even though the Nim module itself stays the same, because the way the 
module is used changed in a different module. New templates could be 
instantiated, new procedures called, stuff like that. So the absolute dumbest, 
least clever solution would be to recompile the entire source of the project 
from .nim to .c to .o every time. That's roughly what happens in practice with 
header-heavy C++, and that language is notorious for slow compilation times. 
But does Nim do something smarter?
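
Here's a toy illustration of the kind of thing I mean (module names made up; 
I'm also not sure exactly which generated C file a new instantiation lands in, 
but either way some generated C changes without math_utils.nim changing):

    # math_utils.nim -- a hypothetical module that never changes
    proc tripled*[T](x: T): T =
      ## C code only gets generated for the Ts this is actually used with.
      x + x + x

    # main.nim -- the importer
    import math_utils

    echo tripled(7)     # int instantiation
    echo tripled(2.5)   # adding this call later forces a new float
                        # instantiation, so the C output changes even though
                        # math_utils.nim itself stayed the same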

Like, the ideal solution I would imagine is this. The compiler starts at a root 
.nim file and forms a tree of dependencies from there. Each branch would have 
to keep a record of what code was generated from the module being imported, 
through imports, template instantiations, and so on. Each time a module is 
imported, the compiler checks whether that record differs from the current 
situation, and only if so does it recompile that module (and check its 
dependencies).
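
Roughly, the check I'm picturing would look something like this sketch (purely 
hypothetical names, nothing to do with the actual compiler internals):

    # A sketch of the record-and-check scheme I'm imagining, not how the
    # real Nim compiler works. All names here are invented.
    import std/[tables, sets]

    type
      ModuleRecord = object
        sourceHash: string            # hash of the module's .nim source
        usedSymbols: HashSet[string]  # templates/generics/procs importers pull in

    var cache: Table[string, ModuleRecord]   # persisted between builds

    proc needsRecompile(name: string; current: ModuleRecord): bool =
      ## Recompile only if the source changed, or if importers now need a
      ## different set of instantiations than they did last time.
      if name notin cache:
        return true
      let old = cache[name]
      result = old.sourceHash != current.sourceHash or
               old.usedSymbols != current.usedSymbols

The catch, which I get at below, is that computing something like usedSymbols 
in the first place might already be most of the work.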

The problem I could see with that is: having gathered all the information 
needed to know whether a module should be recompiled, does skipping the 
recompile actually save any time, or is the module practically already 
recompiled at that point?

I'm no expert here, and maybe it is saving time? Maybe there's a clever way 
around that I don't know about? All I know is that my _subjective_ experience 
has been that compiling the first time is slow, and compiling the second time 
is fast. But I don't know what's going on to make that happen. I don't know 
how well it would scale up to large projects. I don't know what the cost 
savings of dividing a project up into smaller packages would be.
