On Thu, 11 Aug 2011 14:19:35 -0400, Jacob Carlborg <d...@me.com> wrote:

On 2011-08-11 19:07, Steven Schveighoffer wrote:
On Thu, 11 Aug 2011 12:24:48 -0400, Andrew Wiley <wiley.andre...@gmail.com> wrote:

On Thu, Aug 11, 2011 at 5:52 AM, Steven Schveighoffer <schvei...@yahoo.com> wrote:
I think the benefit of this approach over a build tool which wraps the compiler is that the compiler already has the information needed for dependencies, etc. To a certain extent, the wrapping build tool has to re-implement some of the compiler's pieces.


This last bit doesn't really come into play here, because you can already ask the compiler to output all that information and easily use it in a separate program. That much is already done.
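
For example, dmd can dump the import list without generating any code (a sketch using its -deps and -o- switches):

dmd -deps=deps.txt -o- main.d

deps.txt then lists the modules main.d pulls in, and a separate tool can parse that file and take it from there.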

Yes, but then you have to restart the compiler to figure out what's next. Let's say a source file needs a.d, and a.d needs b.d, and both a.d and b.d are on the network. You potentially need to run the compiler three times just to make sure you have all the files, then run it a fourth time to compile.
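
To make the chain concrete, a made-up example:

// main.d -- compiler run 1 stops at this import: a.d isn't local yet
import a;

// a.d, fetched after run 1 -- run 2 stops here: b.d isn't local yet
import b;

// b.d, fetched after run 2 -- run 3 verifies nothing else is missing,
// and run 4 finally compiles main.d, a.d, and b.d together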

So how would that be different if the compiler drives everything? Say you begin with a few local files. The compiler scans through them looking for URL imports, then asks a tool to download the dependencies it found, and starts all over again.

Forgive my compiler ignorance (not a compiler writer), but why does the compiler have to start over? It's no different than importing a file, is it?

This is how my package manager will work. You have a local file that lists all the direct dependencies needed to build your project. When invoked, the package manager fetches, from the repository, a file listing all packages and their dependencies. It then resolves the full dependency set, both direct and indirect, and downloads everything. It does all this before the compiler is even invoked once.
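
The resolution step could look roughly like this (a minimal sketch in D; the package names and index format here are made up):

import std.stdio;

// collect the full (direct + indirect) dependency set from an index
// mapping each package to its direct dependencies
void resolve(string pkg, string[][string] index, ref bool[string] seen)
{
    if (pkg in seen)
        return;
    seen[pkg] = true;
    foreach (dep; index.get(pkg, null))
        resolve(dep, index, seen);
}

void main()
{
    // stand-in for the file fetched from the repository
    string[][string] index = ["myproject": ["a"], "a": ["b"], "b": []];
    bool[string] seen;
    resolve("myproject", index, seen);
    writeln(seen.keys); // everything to download before the compiler runs
}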

Then, preferably but optionally, it hands over to a build tool that builds everything. The build tool would need to invoke the compiler twice: first to get all the dependencies of the local files in the project being built, and then a final time to actually build everything.
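
In build-tool terms, that's roughly the following (a sketch, assuming dmd's -deps and -o- switches):

import std.process;

void main()
{
    // pass 1: ask the compiler for the dependency list, generating no code
    execute(["dmd", "-deps=deps.txt", "-o-", "main.d"]);

    // ...parse deps.txt here, fetch anything that isn't local yet...

    // pass 2: actually build, now that every source file is on disk
    execute(["dmd", "main.d" /* plus the discovered modules */]);
}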

The benefit of using the source is that the code is already written with import statements, so there is no need to write an external build file (all you need is a command line that configures the compiler). Essentially, the import statements become your "build file". I think dsss worked like this, but I don't remember completely.
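
In that scheme, the imports themselves carry the build information (module names made up):

// blah.d -- these two lines are, in effect, the build file:
import mylib.net.http;     // the tool must locate and build mylib/net/http.d
import mylib.util.strings; // likewise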

My ideal solution, no matter how it's implemented, is: I get a file blah.d, and I do:

xyz blah.d

and xyz handles all the dirty work of figuring out what to build along with blah.d as well as where to get those resources. Whether xyz == dmd, I don't know. It sure sounds like it could be...


-Steve
