Jason House wrote:
Walter Bright Wrote:

Robert Clipsham wrote:
Leandro Lucarella wrote:
Thanks for finally going this way, Walter =)

http://www.dsource.org/projects/dmd/timeline
Now that DMD is under version control it should be fairly easy
for me to adapt the automated build system used for ldc to dmd.
I can set it up to automatically build dmd after each commit,
run dstress, build popular projects and libraries, even package
up dmd for nightly builds, and maybe even post the results to
the D newsgroups/IRC channels.

If you'd be interested in this, Walter, let me know exactly what
you want automated, and how and where you want the results, and
I'll see about setting it up for you.
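The trigger logic behind such a setup is simple: poll the repository for a new revision and kick off a build when one appears. A minimal sketch, assuming nothing about the real ldc buildbot; `get_revision` and `run_build` are hypothetical stand-ins injected as callables so the loop stays testable without a real checkout:

```python
# Sketch of commit-triggered automation: poll for a new revision,
# build only when the revision changes. Hypothetical, for illustration.
def watch(get_revision, run_build, last_seen=None, polls=1):
    """Run `polls` polling cycles; build whenever the revision changes."""
    built = []
    for _ in range(polls):
        rev = get_revision()
        if rev != last_seen:
            run_build(rev)       # e.g. build dmd, run dstress, package
            built.append(rev)
            last_seen = rev
    return built, last_seen

# Toy usage: revisions arrive as 1, 1, 2 across three polls.
revs = iter([1, 1, 2])
log = []
built, last = watch(lambda: next(revs), log.append, polls=3)
# only the two revision changes trigger builds: built == [1, 2]
```

In a real deployment `get_revision` would query the repository and `run_build` would shell out to the build and test steps, but the dedup-on-revision logic is the same.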
The problem is that if some package fails, I'm left with a large
debugging problem: figuring out unfamiliar code.

With small commits to dmd, it should be trivial to know which small
change in dmd caused a user-observable change in behavior. If things
look good on the dmd side, I'd really hope the code authors would
help with debugging their own code.
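With small, frequent commits the offending change can even be pinpointed mechanically. A sketch of the bisection idea (what tools like `git bisect` automate), assuming the test goes from passing to failing once within the revision range; the `fails` predicate is a hypothetical stand-in for "build this revision and run the test":

```python
# Binary-search an ordered revision list for the first failing revision,
# assuming `fails` is False up to some point and True from then on.
def first_failing(revisions, fails):
    lo, hi = 0, len(revisions) - 1
    if not fails(revisions[hi]):
        return None          # nothing fails in this range
    while lo < hi:
        mid = (lo + hi) // 2
        if fails(revisions[mid]):
            hi = mid         # failure introduced at or before mid
        else:
            lo = mid + 1     # failure introduced after mid
    return revisions[lo]

# Toy usage: revisions 100..109, regression landed in 106.
revs = list(range(100, 110))
assert first_failing(revs, lambda r: r >= 106) == 106
```

Each probe costs one build-and-test cycle, so narrowing 100 commits to the culprit takes about seven cycles rather than a hundred.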

Knowing of a failure within an hour is way better than finding out a
month later.

BTW, such regression tests work much better when all failing tests
are identified, since that helps with spotting patterns. Stopping
at the first failure is limiting, especially if that failure will
stick around for a week.
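The point above can be sketched as a test runner that never aborts early: run every test, record all failures, and group them so patterns stand out. The test names and the grouping key (the prefix before the first `.`) are invented for illustration:

```python
# Run all tests instead of stopping at the first failure, then group
# failures by area so patterns are visible. Names are hypothetical.
def run_all(tests):
    """tests: dict of name -> zero-arg callable that raises on failure."""
    failures = []
    for name, test in sorted(tests.items()):
        try:
            test()
        except Exception as e:
            failures.append((name, str(e)))
    return failures

def group_by_area(failures):
    groups = {}
    for name, _msg in failures:
        area = name.split(".", 1)[0]
        groups.setdefault(area, []).append(name)
    return groups

# Toy usage: one passing test, two failing ones in different areas.
failures = run_all({
    "codegen.t1": lambda: None,
    "codegen.t2": lambda: 1 / 0,
    "lexer.t1": lambda: 1 / 0,
})
# both failures are reported and grouped, not just the first one hit
```

A fail-fast runner would have reported only `codegen.t2`; collecting everything shows at a glance whether a commit broke one area or several.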

We clearly can't define the language around a best-effort kind of flow analysis. I consider Walter's extra checks during optimization a nice perk, but definitely not something we can consider part of the language. The language definition must work without them.

Andrei
