On Mon, Apr 26, 2010 at 11:45 PM, Mike Shal <mar...@gmail.com> wrote:
> Just curious, what kind of performance increase do you actually see
> here?  If I understand your solution correctly, instead of including N
> dependency files with M dependencies each, you want to include 1
> dependency file with N*M dependencies (per directory).  I would've
> thought most of make's time is spent in parsing out that info into the
> DAG, rather than just in opening and closing the files.  Is the time
> difference all in the system time (I think user time should be about
> the same)?

As a general rule of thumb, unless you're working on a pig-slow
processor or a machine with high-end solid state storage these days,
finding the bytes and dragging them off the disk is the expensive part.
A single file means two things: the disk doesn't have to do as much
seeking (assuming non-pathological filesystem fragmentation, of
course), and the OS can be clever about readahead.

Readahead is potentially a big one.  If you read part of a file, a lot
of OSs will assume you'll probably want the rest and speculatively read
part or all of it before you ask for it.  Even for very large projects,
dependency files don't get very big (at least from the filesystem's
point of view), so there's a good chance the whole file will be read in
quickly.  The OS typically won't speculatively open new files, though,
so if your dependencies are scattered across multiple files you lose
some of the benefit of readahead.

All of that said, I've never really noticed significant overhead from
make itself.  Dependency generation is often expensive, but at least on
my projects (even the large ones) all of the time is spent inside the
dependency generator itself.  So it's entirely possible that this will
all be moot.

                                                          Todd.

--
 Todd Showalter, President, Electron Jump Games, Inc.
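P.S.  In case it helps to make the comparison concrete, here's a rough
sketch of the two layouts being discussed, assuming GCC-style -MMD
dependency output.  The deps.mk name and the cat rule are made up for
illustration, not taken from the proposal under discussion:

    OBJS := $(patsubst %.c,%.o,$(wildcard *.c))

    # Dependencies are emitted as a side effect of compilation.
    %.o: %.c
            $(CC) $(CFLAGS) -MMD -MP -c $< -o $@

    # Layout A: one .d file per object; make opens and reads N
    # small files.
    -include $(OBJS:.o=.d)

    # Layout B: fold the per-object .d files into a single
    # per-directory file, so make opens one file and the OS can
    # read it ahead in one go.
    deps.mk: $(OBJS:.o=.d)
            cat $^ > $@

    -include deps.mk

(Recipe lines need a leading tab, as usual; use one layout or the
other, not both at once.)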