Josh Simmons wrote:
> As a general rule I think, most things don't scale linearly, they
> scale considerably worse.

Let's try something:

===
import std.file;
import std.conv;
import std.string;

void main() {
        string m;
        // generate 1,000 modules of 1,000 trivial declarations
        // each: 5,000,000 lines in total
        foreach(i; 0 .. 1000) {
                string code;
                foreach(a; 0 .. 1000) {
                        code ~= format(q{
                                int someGlobal%d;
                                void someFunction%d() {
                                        someGlobal%d++;
                                }
                        }, a, a, a);
                }

                std.file.write("file" ~ to!string(i) ~ ".d", code);

                m ~= "import file" ~ to!string(i) ~ ";";
        }

        std.file.write("main.d", m ~ " void main() {} ");
}
===
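
(I saved that as generate.d and built it with a plain
dmd generate.d, which gives the ./generate binary used below.)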

$ ./generate
$ wc *.d
 5000000  7001004 83684906 total
$ time dmd *.d
Segmentation fault
real    0m29.915s
user    0m26.208s
sys     0m3.684s


Holy shit, 3.6 GB of memory used! Aaaand segmentation fault.


OK, something's not good here :-P

Let's try the 64-bit dmd instead:

$ time ../dmd2/linux/bin64/dmd *.d
real    0m45.363s
user    0m26.009s
sys     0m14.193s

About 6 GB of total memory used at peak. Wow. I never thought
I'd actually use that much RAM (but it was cheap).

$ ls -lh *.o
-rw-r--r-- 1 me users 371M 2011-09-17 12:40 file0.o

For some reason, there's no executable, so let's link the
object file manually...

$ time ld file0.o ../dmd2/linux/lib64/libphobos2.a -lm -lpthread
ld: warning: cannot find entry symbol _start; defaulting to 0000000000400f20

real    0m10.439s
user    0m9.304s
sys     0m0.915s

$ ls -lh ./a.out
-rwxr-xr-x 1 me users 102M 2011-09-17 12:43 ./a.out


Wow.


Anyway, one minute to compile 5,000,000 lines of (bullshit) code
isn't really bad - that's over 80,000 lines a second. It took
a lot of memory, but that's not a dealbreaker - I got this 8 GB
dirt cheap, and the price has gone down even more since then.
Worst case, we can just throw more hardware at it.

This code is nothing fancy, of course.

Now, what about an incremental build?

$ echo 'rm *.o; for i in *.d; do dmd -c $i; done; dmd *.o' > build.sh
$ time bash build.sh

waiting... lots of hard drive activity here. (BTW, I realize a
real build could do a lot of this in parallel, so this isn't
really a fair scenario - see the sketch after this paragraph.
I'm just curious how it will turn out.)
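
A parallel version could be pretty simple, by the way. Here's an
untested sketch using std.parallelism and today's std.process -
it assumes dmd is on the PATH, and that the modules are all
independent like they are here:

===
import std.array : array;
import std.file : dirEntries, SpanMode;
import std.parallelism : parallel;
import std.process : execute;
import std.stdio : writeln;

void main() {
        // grab every D source in the directory up front
        auto sources = dirEntries(".", "*.d", SpanMode.shallow).array;

        // compile each module on its own core
        foreach(entry; parallel(sources, 1)) {
                auto result = execute(["dmd", "-c", entry.name]);
                if(result.status != 0)
                        writeln("failed: ", entry.name, "\n", result.output);
        }

        // the final link (dmd *.o) still has to run serially afterward
}
===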

ld is running now. I guess dmd did its thing in about one
minute.

Anyway, it's complete:

$ time bash build.sh
rm: cannot remove `*.o': No such file or directory

real    1m44.632s
user    1m17.358s
sys     0m10.275s


Two minutes for compile+link incrementally. The memory usage
never became significant.

$ ls -lh file0
-rwxr-xr-x 1 me users 214M 2011-09-17 12:50 file0


This is probably doubly unrealistic since none of the modules
import any other modules.
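
Chaining the modules together would only take a small change to
the generator, though - something like this (untested) at the
top of the outer loop:

===
foreach(i; 0 .. 1000) {
        string code;
        // hypothetical tweak: each module imports its predecessor,
        // so the compiler also has to chase cross-module imports
        if(i > 0)
                code ~= "import file" ~ to!string(i - 1) ~ ";\n";
        // ... rest of the loop body as before ...
}
===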



But I did feed 5,000,000 lines of code spread over 1,000 modules
to the D compiler, and it managed to work in a fairly reasonable
time - one minute is decent.

Of course, if you bring in fancier things than this trivial example,
who knows.
