On 15.06.2011 17:33, Steven Schveighoffer wrote:
On Tue, 14 Jun 2011 09:53:16 -0400, Andrei Alexandrescu
<seewebsiteforem...@erdani.org> wrote:
http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11
Destroy.
I put this as replies in several threads, but I'll throw it out there
as its own thread:
* You already agree that having the fetching done by a separate
program (possibly written in D) makes the solution cleaner (i.e. the
code that does the compiling is not infiltrated with code that does
network fetching).
* I think specifying the entire URL in the pragma is akin to
specifying the full path of a given module on your local disk. It's
not the right place for it: the person who is building the code
should be responsible for where the modules come from, and import
should continue to specify the module relative to the include path.
* A perfect (IMO) way to configure the fetch tool is to use the same
mechanism that tells dmd how to find modules -- the include path. For
instance, -Ihttp://xxx.yyy.zzz/package can be passed to the compiler
or put into dmd.conf (see the sketch after this list).
* DMD already has a good mechanism to specify configuration and you
would barely have to change anything internally.
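To make the dmd.conf option concrete, here is what an entry could look
like (a sketch: the DFLAGS line mirrors the stock Linux dmd.conf, and
the URL is the same placeholder as above):

[Environment]
DFLAGS=-I%@P%/../src/phobos -Ihttp://xxx.yyy.zzz/package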
Here's how it would work; I'll trace it from the command line to the
final compile (note the http path is not a valid path, it's just an
example):
dmd -Ihttp://www.dsource.org/projects/dcollections/import testproj.d
Now it's abundantly clear that dmd should have rdmd's 'make'
functionality built in. Otherwise you'd have to specify TreeMap.d (or
a library) on the command line, as shown below.
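For illustration, without the built-in 'make' step the invocation
would have to name the dependency explicitly, something like:

dmd -Ihttp://www.dsource.org/projects/dcollections/import testproj.d
dcollections/TreeMap.d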
1. dmd recognizes the URL pattern and stores this as an 'external' path
2. dmd reads the file testproj.d and sees that it imports
dcollections.TreeMap
3. Using its non-external paths, it cannot find the module.
4. It calls:
dget -Ihttp://www.dsource.org/projects/dcollections/import
dcollections.TreeMap
5. dget checks its internal cache to see if the file
dcollections/TreeMap.[d|di] already exists -- not found
6. dget uses internal logic to generate a request to download either
a. an entire package that contains the requested import (preferred)
b. just the specific file dcollections/TreeMap.d
7. Using the URL as a key, it stores the TreeMap.d file in a cache so
it doesn't have to download it again (the cache can be global or
local to the user/project)
8. dget pipes the file to stdout, dmd reads it, and dget returns 0
for success
9. dmd finishes compiling.
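To make steps 5-8 concrete, here is a minimal sketch of dget in D.
Everything in it is an assumption of mine (the cache location,
deriving the file path from the module name, fetching a single file
rather than a package); it illustrates the flow, not a real
implementation:

import std.stdio : stdout;
import std.file : exists, read, mkdirRecurse, writeFile = write;
import std.path : buildPath, dirName;
import std.net.curl : get;
import std.array : replace;

int main(string[] args)
{
    // Expected call (step 4): dget -I<url> <module.name>
    if (args.length < 3 || args[1].length <= 2 || args[1][0 .. 2] != "-I")
        return 1;
    string url = args[1][2 .. $];
    string relPath = args[2].replace(".", "/") ~ ".d";

    // Cache keyed by the URL (step 7); the location is an assumption.
    string cached = buildPath("/tmp/dget-cache",
                              url.replace("/", "_"), relPath);

    if (!exists(cached))                        // step 5: not in cache
    {
        auto src = get(url ~ "/" ~ relPath);    // step 6b: fetch one file
        mkdirRecurse(dirName(cached));
        writeFile(cached, src);                 // step 7: fill the cache
    }
    stdout.write(cast(string) read(cached));    // step 8: pipe to stdout
    return 0;                                   // 0 tells dmd we succeeded
}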
On a second run of dmd, it would go through the same process, but
dget would succeed at step 5, finding the file in the cache, and pipe
it to stdout.
Some issues with this scheme:
1. Dependency checking would be difficult for a build tool (like make)
doing incremental builds. However, traditionally one does not
specify standard library files as dependencies, and downloaded files
would probably fall under the same category. I.e., if you need to
rebuild, you'd have to clear the cache and do a make clean (or
equivalent). Another option is to have dget check whether the file
on the server has been modified (see the sketch after this list).
2. It's possible that dget fetches files one at a time, which might be
very slow (on the first build). However, one can trigger whole
package downloads easily enough (for example, by making the include
path entry point at a zip file or tarball). dget should be smart
enough to handle extracting packages.
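On the "has the file on the server been modified" option from issue
1, a conditional request would do it. A sketch, assuming HTTP
transport and std.net.curl (neither is mandated by the proposal),
where lastModified is the Last-Modified value saved at the first
fetch:

import std.net.curl : HTTP;

// True if the server's copy is newer than the cached one.
bool serverCopyNewer(string url, string lastModified)
{
    auto http = HTTP(url);
    http.method = HTTP.Method.head;          // headers only, no body
    http.addRequestHeader("If-Modified-Since", lastModified);
    http.perform();
    return http.statusLine.code != 304;      // 304 = not modified
}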
I can't really think of any other issues.
-Steve
dmd should be able to run multiple instances of dget without any
conflicts (think parallel builds, etc.).
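One way to get that (a sketch of mine, not something from the DIP):
have each dget instance take an exclusive lock on a lock file inside
the shared cache, so concurrent downloads of the same module
serialize:

import std.stdio : File, LockType;

// Runs action while holding an exclusive lock on lockPath.
void withCacheLock(string lockPath, void delegate() action)
{
    auto f = File(lockPath, "a");   // "a" creates the file if missing
    f.lock(LockType.readWrite);     // blocks until the lock is held
    scope (exit) f.unlock();
    action();
}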
Other than that, it looks quite good to me.
P.S. It seems like dget is, in fact, dcache :)
--
Dmitry Olshansky