Re: DIP11: Automatic downloading of libraries

2011-06-23 Thread Jonathan M Davis
On 2011-06-21 07:17, Andrei Alexandrescu wrote:
 On 6/21/11 9:14 AM, Lars T. Kyllingstad wrote:
  On Tue, 21 Jun 2011 08:21:57 -0500, Andrei Alexandrescu wrote:
  On 6/21/11 1:58 AM, Lars T. Kyllingstad wrote:
  On Mon, 20 Jun 2011 17:32:32 -0500, Andrei Alexandrescu wrote:
  On 6/20/11 4:28 PM, Jacob Carlborg wrote:
  BTW has std.benchmark gone through the regular review process?
  
  I was sure someone will ask that at some point :o). The planned change
  was to add a couple of functions, but then it got separated into its
  own module. If several people think it's worth putting std.benchmark
  through the review queue, let's do so. I'm sure the quality of the
  module will be gained.
  
  I think we should.  Also, now that TempAlloc isn't up for review
  anymore, and both std.log and std.path have to be postponed a few
  weeks, the queue is open. :)
  
  -Lars
  
  Perfect. Anyone would want to be the review manager? Lars? :o)
  
  I would, but in two weeks I am going away on vacation, and that will be
  in the middle of the review process.  Any other volunteers?
  
  -Lars
 
 BTW if libcurl is ready for review that should be the more urgent item.

It looks like libcurl needs more bake time first, so if we're going to review 
std.benchmark, it can go first. Since no one else has stepped forward to do 
it, I can be the review manager. Given the relative simplicity of 
std.benchmark and the fact that something like half of it was in std.datetime 
to begin with, do you think that reviewing until July 1st (a little over a 
week) would be enough before voting on it, or do you think that it should go 
longer? We can always extend the time if it turns out that it needs a longer 
period than that, but if you think that it's likely to need more review and 
update, then we might as well select a longer time to begin with.

- Jonathan M Davis


Re: DIP11: Automatic downloading of libraries

2011-06-22 Thread Jacob Carlborg

On 2011-06-21 20:11, Jimmy Cao wrote:

On Tue, Jun 21, 2011 at 1:01 PM, Jacob Carlborg d...@me.com wrote:

On 2011-06-21 19:36, Jonathan M Davis wrote:

On 2011-06-21 10:17, Jacob Carlborg wrote:

Maybe I was a bit too harsh saying that std.benchmark maybe
wasn't worth
adding. On the other hand isn't this what the review process
is about
(or maybe this is before the review process)? We can't include
EVERYTHING in Phobos or it will become like the Java/C# standard
library, I assume we don't want that.


Why not? Granted, we want quality code, and we only have so many
people
working on Phobos and only so many people to help vet code, but
assuming that
it can be written at the appropriate level of quality and that the
functionality is generally useful, I don't see why we wouldn't
want a large
standard library like Java and C# have. Given our level of
manpower, I don't
expect that we'll ever have a standard library that large, but I
don't see why
having a large standard library would be a bad thing as long as
it's of high
quality and its functionality is generally useful.

- Jonathan M Davis


I just got that impression: that we want a relatively small standard
library and have other libraries available as well.

--
/Jacob Carlborg


What's wrong with having a standard library like C#'s?  It's one of the
greatest advantages of .NET programming.


I'm not saying there's anything wrong with having a standard library like 
C#'s or Java's. Again, I just got that impression. Emphasis on impression.


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-22 Thread Jacob Carlborg

On 2011-06-21 23:27, Byakkun wrote:

On Tue, 21 Jun 2011 21:01:07 +0300, Jacob Carlborg d...@me.com wrote:


On 2011-06-21 19:36, Jonathan M Davis wrote:

On 2011-06-21 10:17, Jacob Carlborg wrote:

Maybe I was a bit too harsh saying that std.benchmark maybe wasn't
worth
adding. On the other hand isn't this what the review process is about
(or maybe this is before the review process)? We can't include
EVERYTHING in Phobos or it will become like the Java/C# standard
library, I assume we don't want that.


Why not? Granted, we want quality code, and we only have so many people
working on Phobos and only so many people to help vet code, but
assuming that
it can be written at the appropriate level of quality and that the
functionality is generally useful, I don't see why we wouldn't want a
large
standard library like Java and C# have. Given our level of manpower,
I don't
expect that we'll ever have a standard library that large, but I
don't see why
having a large standard library would be a bad thing as long as it's
of high
quality and its functionality is generally useful.

- Jonathan M Davis


I just got that impression. That we want a relative small standard
library and have other libraries available as well.



I see only one perspective from which you might not want standard
libraries as large as C#'s and Java's, provided the quality of the code is
good: you can't realistically hope to have the IDEs those languages have,
which integrate easy access to the documentation and auto-completion
(which also gives Java and C# the luxury of very explicit and
straightforward naming). That is worth considering for Phobos (the fact
that D doesn't come bundled with an IDE the way C# does). Otherwise it is
good to have as large and useful a standard library as possible. My only
concern (apart from bugs and holes in Phobos) is that the packages are not
grouped at all, which increases the time it takes (at least for a newbie)
to search through the documentation and the code. There is also some
ambiguity regarding the place of some functionality, as with std.array and
std.string (I have found myself surprised in other areas too, but I can't
remember them right now), which I imagine could be fixed simply by using
the D module system intelligently. But maybe there are reasons for doing
it this way that I don't get.


Again, I'm NOT saying I don't want a standard library like Java's/C#'s.

--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-22 Thread Jacob Carlborg

On 2011-06-22 06:13, Nick Sabalausky wrote:

Jacob Carlborg d...@me.com wrote in message
news:itpn8m$1c1i$1...@digitalmars.com...


target works like this:

1. You call target passing in the name of the target and a block

2. target then call the block passing in an instance of a Target class
(or similar)

3. In the block you then specify all the necessary settings you need for
this particular target.

You should only call target once for each target. So, if you pass in
"name2" instead of "name" you would create a new target. I haven't figured
out what should happen if you call target twice with the same name.

Also note that this would be sufficient:

target "name" do
 flags "-l-lz"
end

In that case you wouldn't even have to care about t or that there even
exists an instance behind the scenes. It would just be syntax.

You can have a look at how Rake and Rubygems do this:

If you look at the Rake examples:
http://en.wikipedia.org/wiki/Rake_%28software%29 then a target would work
the same as a Rake task.

Have a look at the top example of:
http://rubygems.rubyforge.org/rubygems-update/Gem/Specification.html



FWIW, I've been using Rake heavily on a non-D project for about a year or
so, and the more I use it the more I keep wishing I could just use D instead
of Ruby. That may have a lot to do with why I'm so interested in seeing
Dake use D. Of course, I realize that Dake isn't Rake and isn't going to be
exactly the same, but it's still Ruby instead of D and that's proven to be
the #1 issue that I have with Rake.


Too bad you feel that way about Ruby; I think it's a great language. Maybe 
you don't have a choice about using Rake, but the reason I see why 
anyone would choose Rake is that rakefiles are written in Ruby.


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Lars T. Kyllingstad
On Mon, 20 Jun 2011 17:32:32 -0500, Andrei Alexandrescu wrote:
 On 6/20/11 4:28 PM, Jacob Carlborg wrote:
 BTW has std.benchmark gone through the regular review process?
 
 I was sure someone will ask that at some point :o). The planned change
 was to add a couple of functions, but then it got separated into its own
 module. If several people think it's worth putting std.benchmark through
 the review queue, let's do so. I'm sure the quality of the module will
 be gained.

I think we should.  Also, now that TempAlloc isn't up for review anymore, 
and both std.log and std.path have to be postponed a few weeks, the queue 
is open. :)

-Lars


Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Jacob Carlborg

On 2011-06-21 00:30, Dmitry Olshansky wrote:

On 21.06.2011 1:36, Jacob Carlborg wrote:

On 2011-06-20 22:45, Dmitry Olshansky wrote:

On 20.06.2011 23:39, Nick Sabalausky wrote:

Dmitry Olshansky dmitry.o...@gmail.com wrote in message
news:itn2el$2t2v$1...@digitalmars.com...

On 20.06.2011 12:25, Jacob Carlborg wrote:

On 2011-06-19 22:28, Dmitry Olshansky wrote:


Why have name as a run-time parameter? I'd expect something more like
(given there is a Target struct or class):
//somewhere at top
Target cool_lib, ...;

then:
with(cool_lib) {
flags = "-L-lz";
}

I'd even expect special types like Executable, Library and so on.

The user shouldn't have to create the necessary object. If it
does, how
would the tool get it then?


If we settle on effectively evaluating orbspec like this:
//first module
module orb_orange;
mixin(import("orange.orbspec"));
//

// builder entry point
void main()
{
foreach(member; __traits(allMembers, orb_orange))
{
static if(typeof(member) == Target){
//do necessary actions, sort out priority and construct a
worklist
}
else //static if (...) //...could be others I mentioned
{
}
}
//all the work goes there
}

Should be straightforward? Alternatively with local imports we can
pack it
in a struct instead of separate module, though errors in script
would be
harder to report (but at least static constructors would be
controlled!).
More adequate would be, of course, to pump it to dmd from stdin...


Target would be part of Orb. Why not just make Target's ctor register
itself
with the rest of Orb?



Nice thinking, but default constructors for structs?
Of course, it could be a class... Then probably there could be useful
derived things like Executable, Library, etc.


I really don't like that the user needs to create the targets. The
good thing about Ruby is that the user can just call a function and
pass a block to the function. Then the tool can evaluate the block in
the context of an instance. The user would never have to care about
instances.


I'm not getting what's wrong with it. Your magical block is still
getting some _name_ as a string, right? I suspect it's even an advantage if
you can't pass arbitrary strings to a block, only proper instances;
e.g. it's harder to mistype a name thanks to type checking.


This is starting to get confusing. You're supposed to be passing an 
arbitrary string to the function and then _receive_ an instance in the 
block.



What's so good about having to type all these names over and over again
without keeping track of how many you inadvertently referenced?


You shouldn't have to repeat the name.


Taking your example, what if I typed "name2" instead of "name" here; what
would be the tool's actions?
target "name" do |t|
t.flags = "-L-lz"
end


target works like this:

1. You call target passing in the name of the target and a block

2. target then call the block passing in an instance of a Target class 
(or similar)


3. In the block you then specify all the necessary settings you need for 
this particular target.


You should only call target once for each target. So, if you pass in 
"name2" instead of "name" you would create a new target. I haven't 
figured out what should happen if you call target twice with the same 
name.


Also note that this would be sufficient:

target "name" do
flags "-l-lz"
end

In that case you wouldn't even have to care about t or that there even 
exists an instance behind the scenes. It would just be syntax.


You can have a look at how Rake and Rubygems do this:

If you look at the Rake examples: 
http://en.wikipedia.org/wiki/Rake_%28software%29 then a target would 
work the same as a Rake task.


Have a look at the top example of: 
http://rubygems.rubyforge.org/rubygems-update/Gem/Specification.html
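For what it's worth, the mechanism described in steps 1-3 can be sketched in plain Ruby with instance_eval; everything below (the Target class, the TARGETS registry) is a hypothetical illustration, not Dake's actual implementation:

```ruby
# Hypothetical sketch: `target` registers a Target instance under the
# given name and evaluates the block in the instance's context, so the
# build script never handles the instance directly.
class Target
  def initialize(name)
    @name = name
    @flags = []
  end

  # With no arguments this reads the flags; with arguments it appends
  # them. Inside the block it is called without an explicit receiver.
  def flags(*values)
    values.empty? ? @flags : @flags.concat(values)
  end
end

TARGETS = {}

def target(name, &block)
  t = TARGETS[name] ||= Target.new(name)   # the same name reuses the instance
  t.instance_eval(&block) if block
  t
end

# No |t| parameter and no visible instance, as in the example above:
target "name" do
  flags "-L-lz"
end
```

In this sketch, calling target with "name2" instead of "name" simply registers a second target, which matches the behavior described above; whether a repeated name should reuse the instance or be an error is exactly the open question mentioned.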



Create a new target and set its flags? I can't see any reasonable error
checking to disambiguate it at all.
More than that, now I'm not sure what it was supposed to do in the first
place - update the flags of an existing Target instance named "name"?
Right now I think it could be much better to initialize them in the
first place.

IMHO, every time I create a build script I care about the number of
targets and their names.

P.S. Also, about D as a config language: take version statements into
account; here they make a lot of sense.


Yes, version statements will be available as well.

--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Jacob Carlborg

On 2011-06-21 00:32, Andrei Alexandrescu wrote:

On 6/20/11 4:28 PM, Jacob Carlborg wrote:

See my reply to Dmitry.


I see this as a dogfood issue. If there are things that should be in
Phobos and aren't, it would gain everybody to add them to Phobos.


None of these are missing. For some of these things I just like doing 
it differently than Phobos does.



Anyhow, it all depends on what you want to do with the tool. If it's
written in D1, we won't be able to put it on the github
D-programming-language/tools (which doesn't mean it won't become
widespread).


So now suddenly D1 is banned? It seems like you are trying to destroy all 
traces of D1. I think it would be better for everyone if you instead 
encouraged people to use any version of D, not just D2.



BTW has std.benchmark gone through the regular review process?


I was sure someone will ask that at some point :o). The planned change
was to add a couple of functions, but then it got separated into its own
module. If several people think it's worth putting std.benchmark through
the review queue, let's do so. I'm sure the quality of the module will
be gained.


Andrei


Why would std.benchmark be an exception? Shouldn't all new modules and 
big refactorings of existing ones go through the review process? If no 
one thinks it's worth putting std.benchmark through the review process, 
then it seems to me that people don't think it's worth adding to Phobos.


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Dmitry Olshansky

On 21.06.2011 13:07, Jacob Carlborg wrote:

On 2011-06-21 00:30, Dmitry Olshansky wrote:

On 21.06.2011 1:36, Jacob Carlborg wrote:

On 2011-06-20 22:45, Dmitry Olshansky wrote:

On 20.06.2011 23:39, Nick Sabalausky wrote:

Dmitry Olshansky dmitry.o...@gmail.com wrote in message
news:itn2el$2t2v$1...@digitalmars.com...

On 20.06.2011 12:25, Jacob Carlborg wrote:

On 2011-06-19 22:28, Dmitry Olshansky wrote:


Why have name as a run-time parameter? I'd expect something more like
(given there is a Target struct or class):
//somewhere at top
Target cool_lib, ...;

then:
with(cool_lib) {
flags = "-L-lz";
}

I'd even expect special types like Executable, Library and so on.

The user shouldn't have to create the necessary object. If it
does, how
would the tool get it then?


If we settle on effectively evaluating orbspec like this:
//first module
module orb_orange;
mixin(import("orange.orbspec"));
//

// builder entry point
void main()
{
foreach(member; __traits(allMembers, orb_orange))
{
static if(typeof(member) == Target){
//do necessary actions, sort out priority and construct a
worklist
}
else //static if (...) //...could be others I mentioned
{
}
}
//all the work goes there
}

Should be straightforward? Alternatively with local imports we can
pack it
in a struct instead of separate module, though errors in script
would be
harder to report (but at least static constructors would be
controlled!).
More adequate would be, of course, to pump it to dmd from stdin...


Target would be part of Orb. Why not just make Target's ctor register
itself
with the rest of Orb?



Nice thinking, but default constructors for structs?
Of course, it could be a class... Then probably there could be useful
derived things like Executable, Library, etc.


I really don't like that the user needs to create the targets. The
good thing about Ruby is that the user can just call a function and
pass a block to the function. Then the tool can evaluate the block in
the context of an instance. The user would never have to care about
instances.


I'm not getting what's wrong with it. Your magical block is still
getting some _name_ as a string, right? I suspect it's even an advantage if
you can't pass arbitrary strings to a block, only proper instances;
e.g. it's harder to mistype a name thanks to type checking.


This is starting to get confusing. You're supposed to be passing an 
arbitrary string to the function and then _receive_ an instance in 
the block.



What's so good about having to type all these names over and over again
without keeping track of how many you inadvertently referenced?


You shouldn't have to repeat the name.


Taking your example, what if I typed "name2" instead of "name" here; what
would be the tool's actions?
target "name" do |t|
t.flags = "-L-lz"
end


target works like this:

1. You call target passing in the name of the target and a block

2. target then call the block passing in an instance of a Target 
class (or similar)


3. In the block you then specify all the necessary settings you need 
for this particular target.


You should only call target once for each target. So, if you pass in 
"name2" instead of "name" you would create a new target. I haven't 
figured out what should happen if you call target twice with the 
same name.


Also note that this would be sufficient:

target "name" do
flags "-l-lz"
end

So it's a way to _create_ instances. I suspected there might be a need to 
add some extra options to an existing one. Imagine creating a special 
version of a package; IMO it's better when all these extras are kept in 
one place, not spread across every block.


BTW this doesn't look any better than a possible D version:

spec = Gem::Specification.new do |s|
  s.name = 'example'
  s.version = '1.0'
  s.summary = 'Example gem specification'
  ...
end

In any case there is now an instance named spec, right? So the user still 
has to manage some variables...
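The point of disagreement here is the difference between yielding the instance to the block (which is what the Gem::Specification example does, and which leaves a spec variable around) and evaluating the block in the instance's context, where no variable is needed. A minimal Ruby sketch of the two calling conventions, with hypothetical names:

```ruby
# Hypothetical Spec with one field, used to show both conventions.
class Spec
  attr_accessor :name
end

# Convention 1: yield the instance; the block needs a |s| parameter,
# and the caller typically keeps the returned instance in a variable.
def spec_with_yield(&block)
  s = Spec.new
  block.call(s)            # the block sees the instance as an argument
  s
end

# Convention 2: instance_eval; no parameter and no visible variable,
# because the block runs with the instance as self.
def spec_with_instance_eval(&block)
  s = Spec.new
  s.instance_eval(&block)
  s
end

a = spec_with_yield { |s| s.name = 'example' }
b = spec_with_instance_eval { self.name = 'example' }
```

In the second convention a tool can keep the created instance in its own internal registry, so the user-facing script never binds a variable at all.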



In that case you wouldn't even have to care about t or that there even 
exists an instance behind the scenes. It would just be syntax.


You can have a look at how Rake and Rubygems do this:

If you look at the Rake examples: 
http://en.wikipedia.org/wiki/Rake_%28software%29 then a target would 
work the same as a Rake task.




Have a look at the top example of: 
http://rubygems.rubyforge.org/rubygems-update/Gem/Specification.html



Create a new target and set its flags? I can't see any reasonable error
checking to disambiguate it at all.
More than that, now I'm not sure what it was supposed to do in the first
place - update the flags of an existing Target instance named "name"?
Right now I think it could be much better to initialize them in the
first place.

IMHO, every time I create a build script I care about the number of
targets and their names.

P.S. Also, about D as a config language: take version statements into
account; here they make a lot of sense.


Yes, version statements will be available as well.




--
Dmitry Olshansky



Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Jacob Carlborg

On 2011-06-21 12:04, Dmitry Olshansky wrote:

On 21.06.2011 13:07, Jacob Carlborg wrote:

target works like this:

1. You call target passing in the name of the target and a block

2. target then call the block passing in an instance of a Target
class (or similar)

3. In the block you then specify all the necessary settings you need
for this particular target.

You should only call target once for each target. So, if you pass in
name2 instead of name you would create a new target. I haven't
figured out what should happen if you call target twice with the
same name.

Also note that this would be sufficient:

target "name" do
flags "-l-lz"
end


So it's a way to _create_ instances. I suspected there might be a need to
add some extra options to an existing one. Imagine creating a special
version of a package; IMO it's better when all these extras are kept in
one place, not spread across every block.

BTW this doesn't look any better than a possible D version:

spec = Gem::Specification.new do |s|
s.name = 'example'
s.version = '1.0'
s.summary = 'Example gem specification'
...
end

In any case there is now an instance named spec, right? So the user still
has to manage some variables...


No, no, no. Have you read my previous messages and the wiki? The above 
syntax is used by Rubygems; Rake uses a similar one, and Orbit and Dake 
will also use similar syntax, though still slightly different. The 
concepts are the same: calling a method and passing along a block.


The syntax used by Orbit doesn't actually need blocks at all because you 
can only have one package in one orbspec. The syntax will look like this:

name "example"
version "1.0"
summary "Example gem specification"

Dake will have blocks in the syntax for its config files; this is 
because multiple targets and tasks are supported within the same file. 
The syntax will look like this:

target "name" do
flags "-L-l"
product "foobar"
type :executable
end

In this case, "name" would refer to a single D file or a directory with 
multiple D files. If you want to have settings for multiple targets then 
you just put that code outside any of the blocks, at the global scope 
(or pass a block to a method named "global"; I haven't decided yet).


And similar for tasks:

task "foo" do
# do something
end

A task is just some code you can run from the command line via the tool:

dake foo

As you can see, there are no variables and no instances for the user to 
keep track of. It seems that I actually do need to write down a complete 
specification for these config/spec files.
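To make the two file styles concrete, here is a hypothetical Ruby sketch: the tool (not the user) evaluates an orbspec's flat declarations against a spec object, and dakefile-style tasks are just named blocks that a command like dake foo would look up and run. None of the names below are Orbit's or Dake's real API:

```ruby
# Orbspec style: the tool evaluates the file's contents in the context
# of a spec object, so the flat declarations are just method calls.
class OrbSpec
  attr_reader :fields

  def initialize
    @fields = {}
  end

  # One setter-style method per orbspec field.
  %w[name version summary].each do |field|
    define_method(field) { |value| @fields[field] = value }
  end

  def self.load(source)
    spec = new
    spec.instance_eval(source)   # run the orbspec text against the spec
    spec
  end
end

spec = OrbSpec.load(<<~ORBSPEC)
  name    "example"
  version "1.0"
  summary "Example gem specification"
ORBSPEC

# Dakefile style: tasks are blocks registered by name; `dake foo`
# would look the task up in this registry and call it.
TASKS = {}

def task(task_name, &block)
  TASKS[task_name] = block
end

task "foo" do
  # do something
  "done"
end
```

Either way the user writes only declarations and blocks; any variables live inside the tool.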


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Dmitry Olshansky

On 21.06.2011 15:53, Jacob Carlborg wrote:

On 2011-06-21 12:04, Dmitry Olshansky wrote:

On 21.06.2011 13:07, Jacob Carlborg wrote:

target works like this:

1. You call target passing in the name of the target and a block

2. target then call the block passing in an instance of a Target
class (or similar)

3. In the block you then specify all the necessary settings you need
for this particular target.

You should only call target once for each target. So, if you pass in
name2 instead of name you would create a new target. I haven't
figured out what should happen if you call target twice with the
same name.

Also note that this would be sufficient:

target "name" do
flags "-l-lz"
end


So it's a way to _create_ instances. I suspected there could be need to
add some extra options to existing. Imagine creating special version of
package, IMO it's better when all this extra is packaged at one place
not in every block.

BTW this doesn't look any better than a possible D version:

spec = Gem::Specification.new do |s|
s.name = 'example'
s.version = '1.0'
s.summary = 'Example gem specification'
...
end

In any case there is now an instance named spec, right? So the user still
has to manage some variables...


No, no, no. Have you read my previous messages and the wiki? The 
above syntax is used by Rubygems; Rake uses a similar one, and Orbit and 
Dake will also use similar syntax, though still slightly 
different. The concepts are the same: calling a method and 
passing along a block.


The syntax used by Orbit doesn't actually need blocks at all because 
you can only have one package in one orbspec. The syntax will look 
like this:

name "example"
version "1.0"
summary "Example gem specification"


Very sensible, no arguments.



Dake will have blocks in the syntax for its config files; this is 
because multiple targets and tasks are supported within the same file. 
The syntax will look like this:

target "name" do
flags "-L-l"
product "foobar"
type :executable
end



Yes, this is the one I'm not entirely happy with, admittedly because 
it's a Ruby script (and its respective magic).
I just hope you can embed the Ruby interpreter inside dake so that the 
user need not care about having a proper Ruby installation (and its 
correct version too).


In this case, "name" would refer to a single D file or a directory 
with multiple D files. If you want to have settings for multiple 
targets then you just put that code outside any of the blocks, at the 
global scope (or pass a block to a method named "global"; I haven't 
decided yet).


Good so far.



And similar for tasks:

task "foo" do
# do something
end

A task is just some code you can run from the command line via the tool:

dake foo

As you can see, there are no variables and no instances for the user to 
keep track of. It seems that I actually do need to write down a complete 
specification for these config/spec files.




I'm still looking for a clean way to do version statements... but I'll 
just have to wait till you have the full spec, I guess.


--
Dmitry Olshansky



Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Jacob Carlborg

On 2011-06-21 14:12, Dmitry Olshansky wrote:

On 21.06.2011 15:53, Jacob Carlborg wrote:

On 2011-06-21 12:04, Dmitry Olshansky wrote:

On 21.06.2011 13:07, Jacob Carlborg wrote:

target works like this:

1. You call target passing in the name of the target and a block

2. target then call the block passing in an instance of a Target
class (or similar)

3. In the block you then specify all the necessary settings you need
for this particular target.

You should only call target once for each target. So, if you pass in
name2 instead of name you would create a new target. I haven't
figured out what should happen if you call target twice with the
same name.

Also note that this would be sufficient:

target "name" do
flags "-l-lz"
end


So it's a way to _create_ instances. I suspected there might be a need to
add some extra options to an existing one. Imagine creating a special
version of a package; IMO it's better when all these extras are kept in
one place, not spread across every block.

BTW this doesn't look any better than a possible D version:

spec = Gem::Specification.new do |s|
s.name = 'example'
s.version = '1.0'
s.summary = 'Example gem specification'
...
end

In any case there is now an instance named spec, right? So the user still
has to manage some variables...


No, no, no. Have you read my previous messages and the wiki? The
above syntax is used by Rubygems; Rake uses a similar one, and Orbit and
Dake will also use similar syntax, though still slightly
different. The concepts are the same: calling a method and
passing along a block.

The syntax used by Orbit doesn't actually need blocks at all because
you can only have one package in one orbspec. The syntax will look
like this:

name "example"
version "1.0"
summary "Example gem specification"


Very sensible, no arguments.



Dake will have blocks in the syntax for its config files; this is
because multiple targets and tasks are supported within the same file.
The syntax will look like this:

target "name" do
flags "-L-l"
product "foobar"
type :executable
end



Yes, this is the one I'm not entirely happy with, admittedly because
it's a Ruby script (and its respective magic).
I just hope you can embed the Ruby interpreter inside dake so that the user
need not care about having a proper Ruby installation (and its correct
version too).


Of course, Ruby will be embedded. I already have this working in Orbit. 
I'm very careful when choosing the dependencies my applications/tools 
depend on. That's another reason I'm not happy about switching to D2: 
then it would depend on libcurl.



In this case, "name" would refer to a single D file or a directory
with multiple D files. If you want to have settings for multiple
targets then you just put that code outside any of the blocks, at the
global scope (or pass a block to a method named "global"; I haven't
decided yet).


Good so far.



And similar for tasks:

task "foo" do
# do something
end

A task is just some code you can run from the command line via the tool:

dake foo

As you can see, there are no variables and no instances for the user to
keep track of. It seems that I actually do need to write down a complete
specification for these config/spec files.



I'm still looking for a clean way to do version statements... but I'll
just have to wait till you have the full spec, I guess.


if version.linux || version.osx
#
end

This allows versions to be used just as boolean values.
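A minimal sketch of how such a version object could expose platform flags as plain booleans in Ruby; detecting the host via RbConfig is my assumption here, not necessarily how Dake would implement it:

```ruby
require 'rbconfig'

# Hypothetical platform-flag object: each query is a plain boolean
# derived from the host OS string.
class VersionFlags
  HOST_OS = RbConfig::CONFIG['host_os']

  def linux
    HOST_OS.include?('linux')
  end

  def osx
    HOST_OS =~ /darwin/ ? true : false
  end
end

# Memoized accessor so config files can just write `version.linux`.
def version
  @version_flags ||= VersionFlags.new
end

flags = []
if version.linux || version.osx
  flags << '-L-lz'   # a POSIX-only setting, for illustration
end
```

Because the flags are ordinary booleans, they compose with || and && exactly as in the example above.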

--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Andrei Alexandrescu

On 6/21/11 1:58 AM, Lars T. Kyllingstad wrote:

On Mon, 20 Jun 2011 17:32:32 -0500, Andrei Alexandrescu wrote:

On 6/20/11 4:28 PM, Jacob Carlborg wrote:

BTW has std.benchmark gone through the regular review process?


I was sure someone will ask that at some point :o). The planned change
was to add a couple of functions, but then it got separated into its own
module. If several people think it's worth putting std.benchmark through
the review queue, let's do so. I'm sure the quality of the module will
be gained.


I think we should.  Also, now that TempAlloc isn't up for review anymore,
and both std.log and std.path have to be postponed a few weeks, the queue
is open. :)

-Lars


Perfect. Would anyone want to be the review manager? Lars? :o)

Andrei


Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread KennyTM~

On Jun 21, 11 21:03, Jacob Carlborg wrote:

On 2011-06-21 14:12, Dmitry Olshansky wrote:

[snip]


Of course, Ruby will be embedded. I already have this working in Orbit.
I'm very careful when choosing the dependencies my application/tools
depends on. Another reason I'm not happy about switching to D2, then it
would depend on libcurl.



It doesn't. You need libcurl only if you need to use the etc.c.curl 
interface.




Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Andrei Alexandrescu

On 6/21/11 4:18 AM, Jacob Carlborg wrote:

On 2011-06-21 00:32, Andrei Alexandrescu wrote:

On 6/20/11 4:28 PM, Jacob Carlborg wrote:

See my reply to Dmitry.


I see this as a dogfood issue. If there are things that should be in
Phobos and aren't, it would gain everybody to add them to Phobos.


None of these are missing. For some of these things I just like doing
it differently than Phobos does.


I understand.


Anyhow, it all depends on what you want to do with the tool. If it's
written in D1, we won't be able to put it on the github
D-programming-language/tools (which doesn't mean it won't become
widespread).


So now suddenly D1 is banned? It seems like you are trying to destroy all
traces of D1. I think it would be better for everyone if you instead
encouraged people to use any version of D, not just D2.


No need to politicize this - as I said, it's a matter of dogfooding, as 
well as one of focusing our efforts. You seem not to like the way D and 
its standard library work, which is entirely fine, except when it comes 
to adding an official tool.



BTW has std.benchmark gone through the regular review process?


I was sure someone will ask that at some point :o). The planned change
was to add a couple of functions, but then it got separated into its own
module. If several people think it's worth putting std.benchmark through
the review queue, let's do so. I'm sure the quality of the module will
be gained.


Andrei


Why would std.benchmark be an exception? Shouldn't all new modules and
big refactorings of existing ones go through the review process?


Again, the matter has been incidental - the module has grown out of the 
desire to reduce std.datetime. The new code only adds a couple of 
functions. Going through the review process will definitely be helpful.



If no one thinks it's worth putting std.benchmark through the review process,
then it seems to me that people don't think it's worth adding to Phobos.


I wrote these functions for two reasons. One, I want to add a collection 
of benchmarks to Phobos itself so we can keep tabs on performance. 
Second, few people know how to write a benchmark and these functions 
help to some extent, so the functions may be of interest beyond Phobos.


My perception is that there is an underlying matter making you look for 
every opportunity to pick a fight. Your posts as of late have been 
increasingly abrupt. Only in the post I'm replying to, you have attempted 
to ascribe political motives to me, to frame me as one who thinks he is 
above the rules, and to question the worthiness of my work. Instead of 
doing all that, it may be more productive to focus on the core matter 
and figure out a way to resolve it.



Thanks,

Andrei


Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Lars T. Kyllingstad
On Tue, 21 Jun 2011 08:21:57 -0500, Andrei Alexandrescu wrote:

 On 6/21/11 1:58 AM, Lars T. Kyllingstad wrote:
 On Mon, 20 Jun 2011 17:32:32 -0500, Andrei Alexandrescu wrote:
 On 6/20/11 4:28 PM, Jacob Carlborg wrote:
 BTW has std.benchmark gone through the regular review process?

 I was sure someone would ask that at some point :o). The planned change
 was to add a couple of functions, but then it got separated into its
 own module. If several people think it's worth putting std.benchmark
 through the review queue, let's do so. I'm sure the module's quality
 will gain from it.

 I think we should.  Also, now that TempAlloc isn't up for review
 anymore, and both std.log and std.path have to be postponed a few
 weeks, the queue is open. :)

 -Lars
 
 Perfect. Anyone would want to be the review manager? Lars? :o)

I would, but in two weeks I am going away on vacation, and that will be 
in the middle of the review process.  Any other volunteers?

-Lars


Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Andrei Alexandrescu

On 6/21/11 9:14 AM, Lars T. Kyllingstad wrote:

On Tue, 21 Jun 2011 08:21:57 -0500, Andrei Alexandrescu wrote:


On 6/21/11 1:58 AM, Lars T. Kyllingstad wrote:

On Mon, 20 Jun 2011 17:32:32 -0500, Andrei Alexandrescu wrote:

On 6/20/11 4:28 PM, Jacob Carlborg wrote:

BTW has std.benchmark gone through the regular review process?


I was sure someone would ask that at some point :o). The planned change
was to add a couple of functions, but then it got separated into its
own module. If several people think it's worth putting std.benchmark
through the review queue, let's do so. I'm sure the module's quality
will gain from it.


I think we should.  Also, now that TempAlloc isn't up for review
anymore, and both std.log and std.path have to be postponed a few
weeks, the queue is open. :)

-Lars


Perfect. Anyone would want to be the review manager? Lars? :o)


I would, but in two weeks I am going away on vacation, and that will be
in the middle of the review process.  Any other volunteers?

-Lars


BTW if libcurl is ready for review that should be the more urgent item.

Andrei


Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Jacob Carlborg

On 2011-06-21 15:26, KennyTM~ wrote:

On Jun 21, 11 21:03, Jacob Carlborg wrote:

On 2011-06-21 14:12, Dmitry Olshansky wrote:

[snip]


Of course, Ruby will be embedded. I already have this working in Orbit.
I'm very careful when choosing the dependencies my application/tools
depends on. Another reason I'm not happy about switching to D2 is that it
would then depend on libcurl.



It doesn't. You need libcurl only if you need to use the etc.c.curl
interface.


First, I need curl, or something similar. Second, as I've said in a previous 
post, I don't want to and will not use low-level sockets.


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Adam D. Ruppe
Jacob Carlborg wrote:
 First, I need curl, or similar.

If you like, you're free to use the http implementation from my
build2.d http://arsdnet.net/dcode/build2.d - look for HTTP
Implementation near the bottom.

The (commented out) Network Wrapper get() function a little above
shows how to use it for basic stuff.


Doesn't have https support or anything else fancy like that, but it
works.
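For reference, a minimal HTTP GET over a raw TCP socket can be sketched in D2 with std.socket alone. This is a generic illustration, not the build2.d code referenced above; the helper name buildGetRequest is made up here:

```d
import std.socket;
import std.stdio;

// Hypothetical helper: build a minimal HTTP/1.1 GET request.
string buildGetRequest(string host, string path)
{
    return "GET " ~ path ~ " HTTP/1.1\r\n"
         ~ "Host: " ~ host ~ "\r\n"
         ~ "Connection: close\r\n\r\n";
}

void main()
{
    auto host = "example.com";
    auto sock = new TcpSocket(new InternetAddress(host, 80));
    scope (exit) sock.close();

    sock.send(buildGetRequest(host, "/"));

    // Read until the server closes the connection.
    char[4096] buf;
    ptrdiff_t n;
    while ((n = sock.receive(buf[])) > 0)
        write(buf[0 .. n]);
}
```

No HTTPS here either, but it shows how little is needed for the basic case.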


Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Jacob Carlborg

On 2011-06-21 16:02, Andrei Alexandrescu wrote:

On 6/21/11 4:18 AM, Jacob Carlborg wrote:

On 2011-06-21 00:32, Andrei Alexandrescu wrote:

On 6/20/11 4:28 PM, Jacob Carlborg wrote:

See my reply to Dmitry.


I see this as a dogfood issue. If there are things that should be in
Phobos and aren't, it would benefit everybody to add them to Phobos.


None of these are missing. For some of the things I just like doing
it differently than how Phobos does it.


I understand.


Anyhow, it all depends on what you want to do with the tool. If it's
written in D1, we won't be able to put it on the github
D-programming-language/tools (which doesn't mean it won't become
widespread).


So now suddenly D1 is banned? It seems like you are trying to destroy all
traces of D1. I think it would be better for all if you instead
encouraged people to use D of any version, not just D2.


No need to politicize this - as I said, it's a matter of dogfood, as
well as one of focusing our efforts. You seem not to like the way D and
its standard library work, which is entirely fine, except when it comes
to adding an official tool.


I do like D1 and, in general, D2. What I have the most problems with is 
Phobos, and that D2 sometimes (too often for me) doesn't work.


If we're talking about making it an official tool, I can understand that you 
want it to be written in D2 using Phobos.


On the other hand I think that the D community should encourage all 
developers using D, regardless of which version or standard library they 
use. The community is too small for anything else.



BTW has std.benchmark gone through the regular review process?


I was sure someone would ask that at some point :o). The planned change
was to add a couple of functions, but then it got separated into its own
module. If several people think it's worth putting std.benchmark through
the review queue, let's do so. I'm sure the module's quality will gain
from it.


Andrei


Why would std.benchmark be an exception? Shouldn't all new modules and
big refactoring of existing ones go through the review process?


Again, the matter has been incidental - the module has grown from the
desire to reduce std.datetime. The new code only adds a couple of
functions. Going through the review process will definitely be helpful.


If no one thinks it's worth putting std.benchmark through the review process,
then it seems to me that people don't think it's worth adding to Phobos.


I wrote these functions for two reasons. One, I want to add a collection
of benchmarks to Phobos itself so we can keep tabs on performance.
Second, few people know how to write a benchmark and these functions
help to some extent, so the functions may be of interest beyond Phobos.

My perception is that there is an underlying matter making you look for
every opportunity to pick a fight. Your posts as of late have been
increasingly abrupt. Only in the post I'm replying to, you have attempted
to ascribe political motives to me, to frame me as one who thinks he is
above the rules, and to question the worthiness of my work. Instead of
doing all that, it may be more productive to focus on the core matter
and figure out a way to resolve it.


Thanks,

Andrei


I'm sorry if my posts are abrupt. I'm not very good at writing in the 
first place, and my native language not being English doesn't help. 
Sometimes I just want to reply with something brief, basically to indicate 
that I've read the post; that may look abrupt, I don't know.


I just want to say one more thing (hoping you don't think I'm being too 
offensive), and that is that you sometimes seem to want to pretend that 
there is no D1 and never has been.


Maybe I was a bit too harsh saying that std.benchmark maybe wasn't worth 
adding. On the other hand, isn't this what the review process is about 
(or maybe this comes before the review process)? We can't include 
EVERYTHING in Phobos, or it will become like the Java/C# standard 
libraries, and I assume we don't want that. I just saw a new module with 
almost 1k lines of code, plus some additional changes, and was 
wondering why it hadn't gone through the review process.


In the end I'm just trying to defend my code and ideas. Should I not have 
answered the feedback I got on my ideas?


Anyway, I have no problem dropping this discussion and focusing on the 
core matter and figuring out a way to resolve it.


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Jonathan M Davis
On 2011-06-21 10:17, Jacob Carlborg wrote:
 Maybe I was a bit too harsh saying that std.benchmark maybe wasn't worth
 adding. On the other hand isn't this what the review process is about
 (or maybe this is before the review process)? We can't include
 EVERYTHING in Phobos or it will become like the Java/C# standard
 library, I assume we don't want that.

Why not? Granted, we want quality code, and we only have so many people 
working on Phobos and only so many people to help vet code, but assuming that 
it can be written at the appropriate level of quality and that the 
functionality is generally useful, I don't see why we wouldn't want a large 
standard library like Java and C# have. Given our level of manpower, I don't 
expect that we'll ever have a standard library that large, but I don't see why 
having a large standard library would be a bad thing as long as it's of high 
quality and its functionality is generally useful.

- Jonathan M Davis


Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Jacob Carlborg

On 2011-06-21 19:36, Jonathan M Davis wrote:

On 2011-06-21 10:17, Jacob Carlborg wrote:

Maybe I was a bit too harsh saying that std.benchmark maybe wasn't worth
adding. On the other hand isn't this what the review process is about
(or maybe this is before the review process)? We can't include
EVERYTHING in Phobos or it will become like the Java/C# standard
library, I assume we don't want that.


Why not? Granted, we want quality code, and we only have so many people
working on Phobos and only so many people to help vet code, but assuming that
it can be written at the appropriate level of quality and that the
functionality is generally useful, I don't see why we wouldn't want a large
standard library like Java and C# have. Given our level of manpower, I don't
expect that we'll ever have a standard library that large, but I don't see why
having a large standard library would be a bad thing as long as it's of high
quality and its functionality is generally useful.

- Jonathan M Davis


I just got that impression. That we want a relatively small standard 
library and have other libraries available as well.


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Jimmy Cao
On Tue, Jun 21, 2011 at 1:01 PM, Jacob Carlborg d...@me.com wrote:

 On 2011-06-21 19:36, Jonathan M Davis wrote:

 On 2011-06-21 10:17, Jacob Carlborg wrote:

 Maybe I was a bit too harsh saying that std.benchmark maybe wasn't worth
 adding. On the other hand isn't this what the review process is about
 (or maybe this is before the review process)? We can't include
 EVERYTHING in Phobos or it will become like the Java/C# standard
 library, I assume we don't want that.


 Why not? Granted, we want quality code, and we only have so many people
 working on Phobos and only so many people to help vet code, but assuming
 that it can be written at the appropriate level of quality and that the
 functionality is generally useful, I don't see why we wouldn't want a
 large standard library like Java and C# have. Given our level of manpower,
 I don't expect that we'll ever have a standard library that large, but I
 don't see why having a large standard library would be a bad thing as long
 as it's of high quality and its functionality is generally useful.

 - Jonathan M Davis


 I just got that impression. That we want a relatively small standard
 library and have other libraries available as well.

 --
 /Jacob Carlborg


What's wrong with having a standard library like C#'s?  It's one of the
greatest advantages of .NET programming.


Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Jonathan M Davis
On 2011-06-21 11:01, Jacob Carlborg wrote:
 On 2011-06-21 19:36, Jonathan M Davis wrote:
  On 2011-06-21 10:17, Jacob Carlborg wrote:
  Maybe I was a bit too harsh saying that std.benchmark maybe wasn't worth
  adding. On the other hand isn't this what the review process is about
  (or maybe this is before the review process)? We can't include
  EVERYTHING in Phobos or it will become like the Java/C# standard
  library, I assume we don't want that.
  
  Why not? Granted, we want quality code, and we only have so many people
  working on Phobos and only so many people to help vet code, but assuming
  that it can be written at the appropriate level of quality and that the
  functionality is generally useful, I don't see why we wouldn't want a
  large standard library like Java and C# have. Given our level of
  manpower, I don't expect that we'll ever have a standard library that
  large, but I don't see why having a large standard library would be a
  bad thing as long as it's of high quality and its functionality is
  generally useful.
  
  - Jonathan M Davis
 
 I just got that impression. That we want a relatively small standard
 library and have other libraries available as well.

I don't know how everyone else feels about it, but I see no problem with 
having a large standard library as long as it's of high quality and its 
functionality is generally useful. Java and C#'s standard libraries are 
generally considered a valuable asset. One of the major advantages of Python 
which frequently gets touted is its large standard library. I definitely see a 
large standard library as advantageous. The trick is being able to develop it, 
have it be of high quality, and actually have everything in it be generally 
useful. We don't have a lot of people working on Phobos, so naturally, it's 
going to be smaller. If quality is a major focus, then the size is going to 
tend to be smaller as well. And if we try and avoid functionality which is 
overly-specific and not generally useful, then that's going to make the 
library smaller as well.

We have been pushing for both high quality and general usefulness in what is 
added to Phobos, so it hasn't exactly been growing by leaps and bounds, and 
with the limited resources that we have, it takes time to improve and enlarge 
it even if we want to be large. So, Phobos is naturally smaller than many 
standard libraries (particularly those backed by large companies) and will 
continue to be so. But I think that having a large, high quality, generally 
useful standard library is very much what we should be striving for, even if 
for now that's pretty much restricted to high quality and generally useful.

Now, maybe there are other folks on the Phobos dev team or on the newsgroup 
which want Phobos to be small, but I really think that experience has shown 
that large standard libraries are generally an asset to a language. The trick 
is ensuring that the functionality that they have is of high quality and 
appropriately general for a standard library.

- Jonathan M Davis


Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Byakkun

On Tue, 21 Jun 2011 21:01:07 +0300, Jacob Carlborg d...@me.com wrote:


On 2011-06-21 19:36, Jonathan M Davis wrote:

On 2011-06-21 10:17, Jacob Carlborg wrote:
Maybe I was a bit too harsh saying that std.benchmark maybe wasn't worth
adding. On the other hand isn't this what the review process is about
(or maybe this is before the review process)? We can't include
EVERYTHING in Phobos or it will become like the Java/C# standard
library, I assume we don't want that.


Why not? Granted, we want quality code, and we only have so many people
working on Phobos and only so many people to help vet code, but assuming
that it can be written at the appropriate level of quality and that the
functionality is generally useful, I don't see why we wouldn't want a
large standard library like Java and C# have. Given our level of manpower,
I don't expect that we'll ever have a standard library that large, but I
don't see why having a large standard library would be a bad thing as long
as it's of high quality and its functionality is generally useful.

- Jonathan M Davis


I just got that impression. That we want a relatively small standard
library and have other libraries available as well.




I see only one perspective from which you might not want standard
libraries as large as C#'s and Java's, provided the quality of the code is
good: you can't realistically hope to have the IDEs they have, which
integrate easy access to the documentation, or let you just rely on
auto-completion (which also gives Java and C# the luxury of very explicit
and straightforward naming). This is worth considering for Phobos (the fact
that D doesn't come bundled with an IDE like C# does). Otherwise it is good
to have as much standard library as possible and useful. My only concern
(apart from bugs and holes in Phobos) is that the packages are not grouped
at all, which increases the time it takes (at least for a newbie) to search
through the documentation and the code. There is also some ambiguity
regarding the place of some functionality, like std.array and std.string
(I've found myself surprised in other areas too, but I can't remember them
right now), which I imagine could be fixed simply by using the D module
system intelligently. But maybe there are reasons for doing it this way
that I don't get.
--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Jonathan M Davis
On 2011-06-21 14:27, Byakkun wrote:
 On Tue, 21 Jun 2011 21:01:07 +0300, Jacob Carlborg d...@me.com wrote:
  On 2011-06-21 19:36, Jonathan M Davis wrote:
  On 2011-06-21 10:17, Jacob Carlborg wrote:
  Maybe I was a bit too harsh saying that std.benchmark maybe wasn't
  worth
  adding. On the other hand isn't this what the review process is about
  (or maybe this is before the review process)? We can't include
  EVERYTHING in Phobos or it will become like the Java/C# standard
  library, I assume we don't want that.
  
  Why not? Granted, we want quality code, and we only have so many people
  working on Phobos and only so many people to help vet code, but
  assuming that
  it can be written at the appropriate level of quality and that the
  functionality is generally useful, I don't see why we wouldn't want a
  large
  standard library like Java and C# have. Given our level of manpower, I
  don't
  expect that we'll ever have a standard library that large, but I don't
  see why
  having a large standard library would be a bad thing as long as it's of
  high
  quality and its functionality is generally useful.
  
  - Jonathan M Davis
  
  I just got that impression. That we want a relative small standard
  library and have other libraries available as well.
 
 I see only one perspective from which you might not want standard
 libraries as large as C#'s and Java's, provided the quality of the code is
 good: you can't realistically hope to have the IDEs they have, which
 integrate easy access to the documentation, or let you just rely on
 auto-completion (which also gives Java and C# the luxury of very explicit
 and straightforward naming). This is worth considering for Phobos (the fact
 that D doesn't come bundled with an IDE like C# does). Otherwise it is good
 to have as much standard library as possible and useful. My only concern
 (apart from bugs and holes in Phobos) is that the packages are not grouped
 at all, which increases the time it takes (at least for a newbie) to search
 through the documentation and the code. There is also some ambiguity
 regarding the place of some functionality, like std.array and std.string
 (I've found myself surprised in other areas too, but I can't remember them
 right now), which I imagine could be fixed simply by using the D module
 system intelligently. But maybe there are reasons for doing it this way
 that I don't get.

As far as std.array vs std.string goes, functionality which generalizes to 
arrays is supposed to be in std.array, whereas functionality which only makes 
sense for strings belongs in std.string. For instance, toUpper makes sense in 
std.string but not in std.array, since it only makes sense to uppercase 
strings, not arrays in general, whereas a function like replicate makes sense 
for arrays in general, so it's in std.array. Where it's likely to be 
surprising is with functions like split where you would initially think that 
it applies only to strings, but it has an overload which is more generally 
applicable, so it's in std.array. And several functions were moved to 
std.array from std.string a couple of releases back, so if you were used to 
having them in std.string, it could throw you off. There are probably a few 
places where functions might be better moved to another module, and there are 
definitely cases where it's debatable whether they belong in one module or 
another, but overall things are fairly well organized. In some cases, we may 
eventually have to move to a deeper hierarchy, but with what we have at the 
moment, I don't think a deeper hierarchy would help us much. It's not like 
Java where everything is a class and every class is in its own module. In that 
kind of environment, you pretty much have to have a deep hierarchy. But that's 
not the case with D.

- Jonathan M Davis


Re: DIP11: Automatic downloading of libraries

2011-06-21 Thread Nick Sabalausky
Jacob Carlborg d...@me.com wrote in message 
news:itpn8m$1c1i$1...@digitalmars.com...

 target works like this:

 1. You call target passing in the name of the target and a block

 2. target then calls the block, passing in an instance of a Target class 
 (or similar)

 3. In the block you then specify all the necessary settings you need for 
 this particular target.

 You should only call target once for each target. So, if you pass in 
 name2 instead of name you would create a new target. I haven't figured 
 out what should happen if you call target twice with the same name.

 Also note that this would be sufficient:

 target name do
 flags -l-lz
 end

 In that case you wouldn't even have to care about t, or that an instance 
 even exists behind the scenes. It would just be syntax.

 You can have a look at how Rake and RubyGems do this:

 If you look at the Rake examples: 
 http://en.wikipedia.org/wiki/Rake_%28software%29 then a target would work 
 the same as a Rake task.

 Have a look at the top example of: 
 http://rubygems.rubyforge.org/rubygems-update/Gem/Specification.html


FWIW, I've been using Rake heavily on a non-D project for about a year or 
so, and the more I use it the more I keep wishing I could just use D instead 
of Ruby. That may have a lot to do with why I'm so interested in seeing 
Dake use D. Of course, I realize that Dake isn't Rake and isn't going to be 
exactly the same, but it's still Ruby instead of D, and that's proven to be 
the #1 issue that I have with Rake.
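For what it's worth, a delegate-based D2 equivalent of the Ruby target block could be sketched roughly like this. This is a hypothetical API, not Dake's actual design; Target, flag, and registry are made-up names:

```d
struct Target
{
    string name;
    string[] flags;

    void flag(string f) { flags ~= f; }
}

Target[string] registry; // the tool's view of all declared targets

// Mimics the Ruby `target "name" do |t| ... end` form: the tool
// creates the Target instance and hands it to the user's block,
// so the user never constructs it themselves.
void target(string name, void delegate(ref Target) block)
{
    auto t = Target(name);
    block(t);
    registry[name] = t;
}

void main()
{
    target("name", (ref Target t) {
        t.flag("-L-lz");
    });

    assert(registry["name"].flags == ["-L-lz"]);
}
```

The point of the design is the same as in Rake: the tool owns the object's lifecycle, and the user's block only configures it.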





Re: DIP11: Automatic downloading of libraries

2011-06-20 Thread Jacob Carlborg

On 2011-06-19 22:28, Dmitry Olshansky wrote:


Why have the name as a run-time parameter? I'd expect something more like (given there
is a Target struct or class):
//somewhere at top
Target cool_lib, ...;

then:
with(cool_lib) {
flags = "-L-lz";
}

I'd even expect special types like Executable, Library and so on.


The user shouldn't have to create the necessary object. If they do, how 
would the tool then get hold of it?


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-20 Thread Dmitry Olshansky

On 20.06.2011 12:25, Jacob Carlborg wrote:

On 2011-06-19 22:28, Dmitry Olshansky wrote:


Why having name as run-time parameter? I'd expect more like (given there
is Target struct or class):
//somewhere at top
Target cool_lib, ...;

then:
with(cool_lib) {
flags = "-L-lz";
}

I'd even expect special types like Executable, Library and so on.


The user shouldn't have to create the necessary object. If it does, 
how would the tool get it then?



If we settle on effectively evaluating the orbspec like this:
//first module
module orb_orange;
mixin(import("orange.orbspec"));
//

// builder entry point
void main()
{
foreach(member; __traits(allMembers, orb_orange))
{
static if(is(typeof(__traits(getMember, orb_orange, member)) == Target)){
//do necessary actions,  sort out priority and construct a 
worklist

}
else //static if (...) //...could be others I mentioned
{
}
}
//all the work goes there
}

Should be straightforward? Alternatively, with local imports we can pack 
it in a struct instead of a separate module, though errors in the script would 
be harder to report (but at least static constructors would be 
controlled!). More adequate would be, of course, to pipe it to dmd through 
stdin...
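As a self-contained sketch of the same technique: the module below inlines what the real tool would pull in via mixin(import("orange.orbspec")) and -J, and assumes an orbspec-defined Target type, then scans its own top-level symbols at compile time:

```d
module orbdemo;

struct Target
{
    string name;
    string[] flags;
}

// In the real tool, these declarations would come from
// mixin(import("orange.orbspec")) compiled with -J<dir>;
// here they are written inline for illustration.
Target cool_lib = Target("cool_lib", ["-L-lz"]);
int unrelated = 42; // ignored by the scan below

void main()
{
    string[] worklist;

    // Walk every module-level symbol and pick out the Targets.
    // mixin(__MODULE__) names the current module for __traits.
    foreach (member; __traits(allMembers, mixin(__MODULE__)))
    {
        static if (is(typeof(__traits(getMember, mixin(__MODULE__), member)) == Target))
            worklist ~= __traits(getMember, mixin(__MODULE__), member).name;
    }

    assert(worklist == ["cool_lib"]);
}
```

Non-Target symbols (functions, ints, the implicit object import) fall through the is(typeof(...)) guard, so only the Target declarations end up on the worklist.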


--
Dmitry Olshansky



Re: DIP11: Automatic downloading of libraries

2011-06-20 Thread Jacob Carlborg

On 2011-06-20 10:59, Dmitry Olshansky wrote:

On 20.06.2011 12:25, Jacob Carlborg wrote:

On 2011-06-19 22:28, Dmitry Olshansky wrote:


Why having name as run-time parameter? I'd expect more like (given there
is Target struct or class):
//somewhere at top
Target cool_lib, ...;

then:
with(cool_lib) {
flags = "-L-lz";
}

I'd even expect special types like Executable, Library and so on.


The user shouldn't have to create the necessary object. If it does,
how would the tool get it then?


If we settle on effectively evaluating orbspec like this:
//first module
module orb_orange;
mixin(import("orange.orbspec"));
//

// builder entry point
void main()
{
foreach(member; __traits(allMembers, orb_orange))
{
static if(is(typeof(__traits(getMember, orb_orange, member)) == Target)){
//do necessary actions, sort out priority and construct a worklist
}
else //static if (...) //...could be others I mentioned
{
}
}
//all the work goes there
}

Should be straightforward? Alternatively, with local imports we can pack
it in a struct instead of a separate module, though errors in the script would
be harder to report (but at least static constructors would be
controlled!). More adequate would be, of course, to pipe it to dmd through
stdin...


I had no idea that you could do that. It seems somewhat complicated and 
like a hack. Also note that Orbit is currently written in D1, which 
doesn't have __traits.


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-20 Thread Dmitry Olshansky

On 20.06.2011 15:35, Jacob Carlborg wrote:

On 2011-06-20 10:59, Dmitry Olshansky wrote:

On 20.06.2011 12:25, Jacob Carlborg wrote:

On 2011-06-19 22:28, Dmitry Olshansky wrote:

Why having name as run-time parameter? I'd expect more like (given 
there

is Target struct or class):
//somewhere at top
Target cool_lib, ...;

then:
with(cool_lib) {
flags = "-L-lz";
}

I'd even expect special types like Executable, Library and so on.


The user shouldn't have to create the necessary object. If it does,
how would the tool get it then?


If we settle on effectively evaluating orbspec like this:
//first module
module orb_orange;
mixin(import("orange.orbspec"));
//

// builder entry point
void main()
{
foreach(member; __traits(allMembers, orb_orange))
{
static if(is(typeof(__traits(getMember, orb_orange, member)) == Target)){
//do necessary actions, sort out priority and construct a worklist
}
else //static if (...) //...could be others I mentioned
{
}
}
//all the work goes there
}

Should be straightforward? Alternatively, with local imports we can pack
it in a struct instead of a separate module, though errors in the script would
be harder to report (but at least static constructors would be
controlled!). More adequate would be, of course, to pipe it to dmd through
stdin...


I had no idea that you could do that. It seems somewhat complicated 
and like a hack. Also note that Orbit is currently written in D1, 
which doesn't have __traits.




Well, everything about compile-time introspection could be labeled a 
hack. In fact I've just seen the aforementioned hack on a much grander 
scale being used in an upcoming std module, see std.benchmarking:

https://github.com/D-Programming-Language/phobos/pull/85/files#L1R577
And personally, I think hacks should look ugly, or else they are just 
features, or at best shortcuts ;)


Personal things aside, I still suggest you switch it to D2. I can 
understand if Phobos is just not up to snuff for you yet (btw, a cute curl 
wrapper is coming in a matter of days). But other than that... just look 
at all these candies (opDispatch, anyone?)   :)
And even if porting is a piece of work, I suspect there are a lot of people 
out there that would love to help this project.

(given the lofty goal that config would be written in D, and not Ruby)

--
Dmitry Olshansky



Re: DIP11: Automatic downloading of libraries

2011-06-20 Thread Dmitry Olshansky

On 20.06.2011 16:35, Dmitry Olshansky wrote:

On 20.06.2011 15:35, Jacob Carlborg wrote:

On 2011-06-20 10:59, Dmitry Olshansky wrote:

On 20.06.2011 12:25, Jacob Carlborg wrote:

On 2011-06-19 22:28, Dmitry Olshansky wrote:

Why having name as run-time parameter? I'd expect more like (given 
there

is Target struct or class):
//somewhere at top
Target cool_lib, ...;

then:
with(cool_lib) {
flags = "-L-lz";
}

I'd even expect special types like Executable, Library and so on.


The user shouldn't have to create the necessary object. If it does,
how would the tool get it then?


If we settle on effectively evaluating orbspec like this:
//first module
module orb_orange;
mixin(import("orange.orbspec"));
//

// builder entry point
void main()
{
foreach(member; __traits(allMembers, orb_orange))
{
static if(is(typeof(__traits(getMember, orb_orange, member)) == Target)){
//do necessary actions, sort out priority and construct a worklist
}
else //static if (...) //...could be others I mentioned
{
}
}
//all the work goes there
}

Should be straightforward? Alternatively, with local imports we can pack
it in a struct instead of a separate module, though errors in the script
would be harder to report (but at least static constructors would be
controlled!). More adequate would be, of course, to pipe it to dmd through
stdin...


I had no idea that you could do that. It seems somewhat complicated 
and like a hack. Also note that Orbit is currently written in D1, 
which doesn't have __traits.




Well, everything about compile-time introspection could be labeled 
a hack. In fact I've just seen the aforementioned hack on a much 
grander scale being used in an upcoming std module, see std.benchmarking:

https://github.com/D-Programming-Language/phobos/pull/85/files#L1R577
And personally, I think hacks should look ugly, or else they are just 
features, or at best shortcuts ;)


Personal things aside, I still suggest you switch it to D2. I can 
understand if Phobos is just not up to snuff for you yet (btw, a cute 
curl wrapper is coming in a matter of days). But other than that... 
just look at all these candies (opDispatch, anyone?)   :)
And even if porting is a piece of work, I suspect there are a lot of 
people out there that would love to help this project.

(given the lofty goal that config would be written in D, and not Ruby)

Just looked through the source , it seems like you are doing a lot of 
work that's already been done in Phobos, so it might be worth doing a 
port to D2. Some simple wrappers might be needed, but ultimately:

util.traits -- std.traits
core.array -- std.array + std.algorithm
io.path -- std.file  std.path
orgb.util.OptinoParser -- std.getopt

util.singleton should probably be pulled into Phobos, but as a
thread-safe shared version.
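A thread-safe shared variant along the lines suggested here could look roughly like this in D2 (a minimal sketch; the `MyService` class and the global-lock strategy are illustrative assumptions, not Orbit's actual code):

```d
module util.singleton;

// illustrative thread-safe singleton: __gshared storage plus a
// synchronized block that serializes first construction
class MyService
{
    private __gshared MyService instance_;

    static MyService instance()
    {
        synchronized // global lock; cheap enough for a build tool
        {
            if (instance_ is null)
                instance_ = new MyService;
        }
        return instance_;
    }

    private this() {} // force access through instance()
}
```

The lock on every access is conservative; a double-checked variant could avoid it once the instance exists, at the cost of trickier memory-ordering reasoning.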


--
Dmitry Olshansky



Re: DIP11: Automatic downloading of libraries

2011-06-20 Thread Adam Ruppe
Jacob Carlborg wrote:
 I had no idea that you could do that. It seems somewhat complicated
 and like a hack.

There's nothing really hacky about that - it's a defined and fairly
complete part of the language.

It's simpler than it looks too... the syntax is slightly long, but
conceptually, you're just looping over an array of members.

Combined with the stuff in std.traits to make it a little simpler,
there's lots of nice stuff you can do in there.
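A minimal, compilable D2 sketch of the pattern being discussed (the module and member names here are hypothetical stand-ins; note that `member` is a string, so the member's type has to be recovered with `__traits(getMember, ...)`):

```d
module builder;

// hypothetical stand-ins for the mixed-in orbspec declarations
struct Target
{
    string[] flags;
}

Target cool_lib;
int unrelated; // not a Target, so the static if below skips it

void main()
{
    // compile-time loop over every declaration in this module
    foreach (member; __traits(allMembers, builder))
    {
        // is(typeof(...)) is false for anything that is not a
        // value of type Target, so other members fall through
        static if (is(typeof(__traits(getMember, builder, member)) == Target))
        {
            pragma(msg, "found target: " ~ member);
        }
    }
}
```

The `is(typeof(...))` guard is what keeps the loop safe: members for which `getMember` yields a type or function simply fail the test instead of causing errors.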


Re: DIP11: Automatic downloading of libraries

2011-06-20 Thread Steven Schveighoffer
On Sat, 18 Jun 2011 23:33:29 -0400, Daniel Murphy  
yebbl...@nospamgmail.com wrote:



Jacob Carlborg d...@me.com wrote in message
news:iti35g$2r4r$2...@digitalmars.com...


That seems cool. But, you would want to write the plugin in D and that's
not possible yet on all platforms? Or should everything be done with
extern(C), does that work?



Yeah, it won't be possible to do it all in D until we have .so's working
on linux etc, which I think is a while off yet.  Although this could be
worked around by writing a small loader in C++ and using another process
(written in D) to do the actual work.  Maybe it would be easier to build
dmd as a shared lib (or a static lib) and just provide a different
front...

My point is that the compiler can quite easily be modified to allow it to
pass pretty much anything (missing imports, pragma(lib), etc) to a build
tool, and it should be fairly straightforward for the build tool to pass
things back in (adding objects to the linker etc).  This could allow
single-pass full compilation even when the libraries need to be fetched
off the internet.  It could also allow separate compilation of several
source files at once, without having to re-do parsing+semantic each time.
Can dmd currently do this?
Most importantly it keeps knowledge about urls and downloading files
outside the compiler, where IMO it does not belong.


Note the current proposal does exactly what you are looking for, but does
it via processes and command line instead of dlls.  This opens up numerous
avenues of implementation (including re-using already existing utilities),
plus keeps it actually separated (i.e. a dll/so can easily corrupt the
memory of the application, whereas a separate process cannot).


-Steve


Re: DIP11: Automatic downloading of libraries

2011-06-20 Thread Andrei Alexandrescu

On 6/20/11 6:35 AM, Jacob Carlborg wrote:

On 2011-06-20 10:59, Dmitry Olshansky wrote:

On 20.06.2011 12:25, Jacob Carlborg wrote:

On 2011-06-19 22:28, Dmitry Olshansky wrote:


Why having name as run-time parameter? I'd expect more like (given
there
is Target struct or class):
//somewhere at top
Target cool_lib, ...;

then:
with(cool_lib) {
flags = "-L-lz";
}

I'd even expect special types like Executable, Library and so on.


The user shouldn't have to create the necessary object. If it does,
how would the tool get it then?


If we settle on effectively evaluating orbspec like this:
//first module
module orb_orange;
mixin(import("orange.orbspec"));
//

// builder entry point
void main()
{
foreach(member; __traits(allMembers, orb_orange))
{
static if(typeof(member) == Target){
//do necessary actions, sort out priority and construct a worklist
}
else //static if (...) //...could be others I mentioned
{
}
}
//all the work goes there
}

Should be straightforward? Alternatively with local imports we can pack
it in a struct instead of separate module, though errors in script would
be harder to report (but at least static constructors would be
controlled!). More adequatly would be, of course, to pump it to dmd from
stdin...


I had no idea that you could do that. It seems somewhat complicated and
like a hack. Also note that Orbit is currently written in D1, which
doesn't have __traits.


std.benchmark (https://github.com/D-Programming-Language/phobos/pull/85) 
does that, too. Overall I believe porting Orbit to D2 and making it use 
D2 instead of Ruby in configuration would increase its chances to become 
popular and accepted in tools/.


Andrei


Re: DIP11: Automatic downloading of libraries

2011-06-20 Thread Nick Sabalausky
Dmitry Olshansky dmitry.o...@gmail.com wrote in message 
news:itn2el$2t2v$1...@digitalmars.com...
 On 20.06.2011 12:25, Jacob Carlborg wrote:
 On 2011-06-19 22:28, Dmitry Olshansky wrote:

 Why having name as run-time parameter? I'd expect more like (given there
 is Target struct or class):
 //somewhere at top
 Target cool_lib, ...;

 then:
 with(cool_lib) {
 flags = -L-lz;
 }

 I'd even expect special types like Executable, Library and so on.

 The user shouldn't have to create the necessary object. If it does, how 
 would the tool get it then?

 If we settle on effectively evaluating orbspec like this:
 //first module
 module orb_orange;
 mixin(import (orange.orbspec));
 //

 // builder entry point
 void main()
 {
 foreach(member; __traits(allMembers, orb_orange))
 {
 static if(typeof(member) == Target){
 //do necessary actions,  sort out priority and construct a 
 worklist
 }
 else //static if (...) //...could be others I mentioned
 {
 }
 }
 //all the work goes there
 }

 Should be straightforward? Alternatively with local imports we can pack it 
 in a struct instead of separate module, though errors in script would be 
 harder to report (but at least static constructors would be controlled!). 
 More adequatly would be, of course, to pump it to dmd from stdin...


Target would be part of Orb. Why not just make Target's ctor register itself 
with the rest of Orb?
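Nick's idea can be sketched in D2 along these lines (a hypothetical design, not Orb's actual API; the registry and the derived classes are assumptions, and using classes sidesteps the fact that D structs have no user-defined default constructor):

```d
module orb.target;

// hypothetical sketch: constructing a Target (or a derived
// Executable/Library) registers it with Orb automatically
class Target
{
    string name;
    string[] flags;

    static Target[] registry; // everything Orb needs to build

    this(string name)
    {
        this.name = name;
        registry ~= this; // self-register on construction
    }
}

class Executable : Target { this(string name) { super(name); } }
class Library : Target { this(string name) { super(name); } }

unittest
{
    auto lib = new Library("cool_lib");
    lib.flags ~= "-L-lz";
    assert(Target.registry.length == 1 && Target.registry[0] is lib);
}
```

With this shape, the orbspec just constructs targets at module scope (or in static constructors) and Orb walks `registry` afterwards, with no member-iteration needed.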




Re: DIP11: Automatic downloading of libraries

2011-06-20 Thread Jacob Carlborg

On 2011-06-20 14:35, Dmitry Olshansky wrote:

On 20.06.2011 15:35, Jacob Carlborg wrote:

On 2011-06-20 10:59, Dmitry Olshansky wrote:

On 20.06.2011 12:25, Jacob Carlborg wrote:

On 2011-06-19 22:28, Dmitry Olshansky wrote:


Why having name as run-time parameter? I'd expect more like (given
there
is Target struct or class):
//somewhere at top
Target cool_lib, ...;

then:
with(cool_lib) {
flags = -L-lz;
}

I'd even expect special types like Executable, Library and so on.


The user shouldn't have to create the necessary object. If it does,
how would the tool get it then?


If we settle on effectively evaluating orbspec like this:
//first module
module orb_orange;
mixin(import (orange.orbspec));
//

// builder entry point
void main()
{
foreach(member; __traits(allMembers, orb_orange))
{
static if(typeof(member) == Target){
//do necessary actions, sort out priority and construct a worklist
}
else //static if (...) //...could be others I mentioned
{
}
}
//all the work goes there
}

Should be straightforward? Alternatively with local imports we can pack
it in a struct instead of separate module, though errors in script would
be harder to report (but at least static constructors would be
controlled!). More adequatly would be, of course, to pump it to dmd from
stdin...


I had no idea that you could do that. It seems somewhat complicated
and like a hack. Also note that Orbit is currently written in D1,
which doesn't have __traits.



Well, everything about compile-time introspection could be labeled like
a hack. In fact I just seen the aforementioned hack on a much grander
scale being used in upcoming std module, see std.benchmarking:
https://github.com/D-Programming-Language/phobos/pull/85/files#L1R577
And personally hacks should look ugly or they are just features or at
best shortcuts ;)


I personally think that just because Phobos uses these features does not
make them less hackish.



Personal things aside I still suggest you to switch it to D2. I can
understand if Phobos is just not up to snuff for you yet (btw cute curl
wrapper is coming in a matter of days). But other then that... just look
at all these candies ( opDispatch anyone? ) :)
And even if porting is a piece of work, I suspect there a lot of people
out there that would love to help this project.
(given the lofty goal that config would be written in D, and not Ruby)


D2 has many cool new features and I would love to use some of them, but
every time I try, they don't work. I'm tired of using a language that's
not ready. I still think Tango is a better library and I like it better
than Phobos. Although Phobos is doing a great job of filling in the
feature gaps with every new release.


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-20 Thread Dmitry Olshansky

On 20.06.2011 23:39, Nick Sabalausky wrote:

Dmitry Olshanskydmitry.o...@gmail.com  wrote in message
news:itn2el$2t2v$1...@digitalmars.com...

On 20.06.2011 12:25, Jacob Carlborg wrote:

On 2011-06-19 22:28, Dmitry Olshansky wrote:


Why having name as run-time parameter? I'd expect more like (given there
is Target struct or class):
//somewhere at top
Target cool_lib, ...;

then:
with(cool_lib) {
flags = -L-lz;
}

I'd even expect special types like Executable, Library and so on.

The user shouldn't have to create the necessary object. If it does, how
would the tool get it then?


If we settle on effectively evaluating orbspec like this:
//first module
module orb_orange;
mixin(import (orange.orbspec));
//

// builder entry point
void main()
{
 foreach(member; __traits(allMembers, orb_orange))
 {
 static if(typeof(member) == Target){
 //do necessary actions,  sort out priority and construct a
worklist
 }
 else //static if (...) //...could be others I mentioned
 {
 }
 }
 //all the work goes there
}

Should be straightforward? Alternatively with local imports we can pack it
in a struct instead of separate module, though errors in script would be
harder to report (but at least static constructors would be controlled!).
More adequatly would be, of course, to pump it to dmd from stdin...


Target would be part of Orb. Why not just make Target's ctor register itself
with the rest of Orb?



Nice thinking, but default constructors for structs?
Of course, it could be a class... Then probably there could be useful
derived things like Executable, Library, etc.


--
Dmitry Olshansky



Re: DIP11: Automatic downloading of libraries

2011-06-20 Thread Jacob Carlborg

On 2011-06-20 14:49, Dmitry Olshansky wrote:

On 20.06.2011 16:35, Dmitry Olshansky wrote:

On 20.06.2011 15:35, Jacob Carlborg wrote:

On 2011-06-20 10:59, Dmitry Olshansky wrote:

On 20.06.2011 12:25, Jacob Carlborg wrote:

On 2011-06-19 22:28, Dmitry Olshansky wrote:


Why having name as run-time parameter? I'd expect more like (given
there
is Target struct or class):
//somewhere at top
Target cool_lib, ...;

then:
with(cool_lib) {
flags = -L-lz;
}

I'd even expect special types like Executable, Library and so on.


The user shouldn't have to create the necessary object. If it does,
how would the tool get it then?


If we settle on effectively evaluating orbspec like this:
//first module
module orb_orange;
mixin(import (orange.orbspec));
//

// builder entry point
void main()
{
foreach(member; __traits(allMembers, orb_orange))
{
static if(typeof(member) == Target){
//do necessary actions, sort out priority and construct a worklist
}
else //static if (...) //...could be others I mentioned
{
}
}
//all the work goes there
}

Should be straightforward? Alternatively with local imports we can pack
it in a struct instead of separate module, though errors in script
would
be harder to report (but at least static constructors would be
controlled!). More adequatly would be, of course, to pump it to dmd
from
stdin...


I had no idea that you could do that. It seems somewhat complicated
and like a hack. Also note that Orbit is currently written in D1,
which doesn't have __traits.



Well, everything about compile-time introspection could be labeled
like a hack. In fact I just seen the aforementioned hack on a much
grander scale being used in upcoming std module, see std.benchmarking:
https://github.com/D-Programming-Language/phobos/pull/85/files#L1R577
And personally hacks should look ugly or they are just features or at
best shortcuts ;)

Personal things aside I still suggest you to switch it to D2. I can
understand if Phobos is just not up to snuff for you yet (btw cute
curl wrapper is coming in a matter of days). But other then that...
just look at all these candies ( opDispatch anyone? ) :)
And even if porting is a piece of work, I suspect there a lot of
people out there that would love to help this project.
(given the lofty goal that config would be written in D, and not Ruby)


Just looked through the source , it seems like you are doing a lot of
work that's already been done in Phobos, so it might be worth doing a
port to D2. Some simple wrappers might be needed, but ultimately:


First I have to say that these simple modules are no reason to port to
D2. Second, here are a couple of other reasons:


* These modules (at least some of them) are quite old, pieces of some of 
them originate back from 2007 (before D2)


* These modules also started out as common API for Phobos and Tango 
functions


* Some of these modules also contains specific functions and names for 
easing Java and C++ porting


Overall I like the API of the modules, some functions are aliases for 
Tango/Phobos functions with names I like better and some are just 
wrappers with a new API.



util.traits -- std.traits


As far as I can see, most of these functions don't exist in std.traits.


core.array -- std.array + std.algorithm


When I work with arrays I want to work with arrays, not some other kind
of type like a range. I do understand the theoretical idea of having
containers and algorithms separated, but in practice I've never needed it.



io.path -- std.file & std.path


Some of these exist in std.file and some don't.


orb.util.OptionParser -- std.getopt


This is a wrapper for the Tango argument parser, because I like this API
better.



util.singleton should probably be pulled into Phobos, but a thread safe
shared version.


Yes, but it isn't in Phobos yet.

--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-20 Thread Jacob Carlborg

On 2011-06-20 15:28, Andrei Alexandrescu wrote:

On 6/20/11 6:35 AM, Jacob Carlborg wrote:

On 2011-06-20 10:59, Dmitry Olshansky wrote:

On 20.06.2011 12:25, Jacob Carlborg wrote:

On 2011-06-19 22:28, Dmitry Olshansky wrote:


Why having name as run-time parameter? I'd expect more like (given
there
is Target struct or class):
//somewhere at top
Target cool_lib, ...;

then:
with(cool_lib) {
flags = -L-lz;
}

I'd even expect special types like Executable, Library and so on.


The user shouldn't have to create the necessary object. If it does,
how would the tool get it then?


If we settle on effectively evaluating orbspec like this:
//first module
module orb_orange;
mixin(import (orange.orbspec));
//

// builder entry point
void main()
{
foreach(member; __traits(allMembers, orb_orange))
{
static if(typeof(member) == Target){
//do necessary actions, sort out priority and construct a worklist
}
else //static if (...) //...could be others I mentioned
{
}
}
//all the work goes there
}

Should be straightforward? Alternatively with local imports we can pack
it in a struct instead of separate module, though errors in script would
be harder to report (but at least static constructors would be
controlled!). More adequatly would be, of course, to pump it to dmd from
stdin...


I had no idea that you could do that. It seems somewhat complicated and
like a hack. Also note that Orbit is currently written in D1, which
doesn't have __traits.


std.benchmark (https://github.com/D-Programming-Language/phobos/pull/85)
does that, too. Overall I believe porting Orbit to D2 and making it use
D2 instead of Ruby in configuration would increase its chances to become
popular and accepted in tools/.

Andrei


See my reply to Dmitry. BTW has std.benchmark gone through the regular 
review process?


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-20 Thread Jacob Carlborg

On 2011-06-20 22:45, Dmitry Olshansky wrote:

On 20.06.2011 23:39, Nick Sabalausky wrote:

Dmitry Olshanskydmitry.o...@gmail.com wrote in message
news:itn2el$2t2v$1...@digitalmars.com...

On 20.06.2011 12:25, Jacob Carlborg wrote:

On 2011-06-19 22:28, Dmitry Olshansky wrote:


Why having name as run-time parameter? I'd expect more like (given
there
is Target struct or class):
//somewhere at top
Target cool_lib, ...;

then:
with(cool_lib) {
flags = -L-lz;
}

I'd even expect special types like Executable, Library and so on.

The user shouldn't have to create the necessary object. If it does, how
would the tool get it then?


If we settle on effectively evaluating orbspec like this:
//first module
module orb_orange;
mixin(import (orange.orbspec));
//

// builder entry point
void main()
{
foreach(member; __traits(allMembers, orb_orange))
{
static if(typeof(member) == Target){
//do necessary actions, sort out priority and construct a
worklist
}
else //static if (...) //...could be others I mentioned
{
}
}
//all the work goes there
}

Should be straightforward? Alternatively with local imports we can
pack it
in a struct instead of separate module, though errors in script would be
harder to report (but at least static constructors would be
controlled!).
More adequatly would be, of course, to pump it to dmd from stdin...


Target would be part of Orb. Why not just make Target's ctor register
itself
with the rest of Orb?



Nice thinking, but default constructors for structs?
Of course, it could be a class... Then probably there could be usefull
derived things like these Executable, Library, etc.


I really don't like that the user needs to create the targets. The good
thing about Ruby is that the user can just call a function and pass a
block to the function. Then the tool can evaluate the block in the
context of an instance. The user would never have to care about instances.
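For comparison, D can get fairly close to the Ruby block style with a free function taking a delegate, so the orbspec never creates objects itself (the `target` function and `Target` type here are hypothetical, not Orb's actual API):

```d
module orbspec;

// hypothetical Orb API: target() hands the named instance to the
// user's delegate, creating it on first mention
class Target
{
    string name;
    string[] flags;
}

Target[string] targets;

void target(string name, void delegate(Target) block)
{
    auto p = name in targets;
    auto t = p ? *p : (targets[name] = new Target);
    t.name = name;
    block(t); // evaluate the "block" against the instance
}

void main()
{
    // reads much like the Ruby `target "name" do |t| ... end`
    target("cool_lib", (Target t) { t.flags ~= "-L-lz"; });
    assert(targets["cool_lib"].flags == ["-L-lz"]);
}
```

Unlike Ruby's `instance_eval`, the user still receives an explicit parameter, but no instance is ever constructed by hand in the spec file.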


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-20 Thread Dmitry Olshansky

On 21.06.2011 1:36, Jacob Carlborg wrote:

On 2011-06-20 22:45, Dmitry Olshansky wrote:

On 20.06.2011 23:39, Nick Sabalausky wrote:

Dmitry Olshanskydmitry.o...@gmail.com wrote in message
news:itn2el$2t2v$1...@digitalmars.com...

On 20.06.2011 12:25, Jacob Carlborg wrote:

On 2011-06-19 22:28, Dmitry Olshansky wrote:


Why having name as run-time parameter? I'd expect more like (given
there
is Target struct or class):
//somewhere at top
Target cool_lib, ...;

then:
with(cool_lib) {
flags = -L-lz;
}

I'd even expect special types like Executable, Library and so on.
The user shouldn't have to create the necessary object. If it 
does, how

would the tool get it then?


If we settle on effectively evaluating orbspec like this:
//first module
module orb_orange;
mixin(import (orange.orbspec));
//

// builder entry point
void main()
{
foreach(member; __traits(allMembers, orb_orange))
{
static if(typeof(member) == Target){
//do necessary actions, sort out priority and construct a
worklist
}
else //static if (...) //...could be others I mentioned
{
}
}
//all the work goes there
}

Should be straightforward? Alternatively with local imports we can
pack it
in a struct instead of separate module, though errors in script 
would be

harder to report (but at least static constructors would be
controlled!).
More adequatly would be, of course, to pump it to dmd from stdin...


Target would be part of Orb. Why not just make Target's ctor register
itself
with the rest of Orb?



Nice thinking, but default constructors for structs?
Of course, it could be a class... Then probably there could be usefull
derived things like these Executable, Library, etc.


I really don't like that the users needs to create the targets. The 
good thing about Ruby is that the user can just call a function and 
pass a block to the function. Then the tool can evaluate the block in 
the context of an instance. The user would never have to care about 
instances.


I'm not getting what's wrong with it. Your magical block is still
getting some _name_ as a string, right? I suspect it's even an advantage
if you can't pass arbitrary strings to a block, only proper instances;
e.g. it's harder to mistype a name due to type checking.


What's so good about having to type all these names over and over again
without keeping track of how many you inadvertently referenced?


Taking your example, what if I typed "name2" instead of "name" here;
what would be the tool's actions:

target "name" do |t|
t.flags = "-L-lz"
end

Create a new target and set its flags? I can't see any reasonable error
checking to disambiguate it at all.
More than that, now I'm not sure what it was supposed to do in the first
place - update the flags of an existing Target instance with name "name"?
Right now I think it could be much better to initialize them in the
first place.


IMHO every time I create a build script I usually care about the number
of targets and their names.


P.S. Also, about D as a config language: take version statements into
account; here they make a lot of sense.
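To illustrate the point about version statements: a D-based orbspec could branch per platform directly (a hypothetical fragment; the field name and flag values are only illustrative):

```d
// hypothetical orbspec fragment: version statements pick
// platform-specific linker flags at compile time
version (Windows)
{
    enum zlibFlags = "zlib.lib"; // illustrative Windows library name
}
else version (linux)
{
    enum zlibFlags = "-L-lz";
}
else
{
    enum zlibFlags = "";
}
```

A Ruby-based spec would need its own platform-detection convention for the same effect, whereas in D this comes with the language.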


--
Dmitry Olshansky



Re: DIP11: Automatic downloading of libraries

2011-06-20 Thread Andrei Alexandrescu

On 6/20/11 4:28 PM, Jacob Carlborg wrote:

See my reply to Dmitry.


I see this as a dogfood issue. If there are things that should be in
Phobos and aren't, it would benefit everybody to add them to Phobos.

Anyhow, it all depends on what you want to do with the tool. If it's
written in D1, we won't be able to put it on the github
D-programming-language/tools (which doesn't mean it won't become
widespread).


BTW has std.benchmark gone through the regular review process?


I was sure someone would ask that at some point :o). The planned change
was to add a couple of functions, but then it got separated into its own
module. If several people think it's worth putting std.benchmark through
the review queue, let's do so. I'm sure the quality of the module will
benefit.


Andrei


Re: DIP11: Automatic downloading of libraries

2011-06-19 Thread Ary Manzana

On 6/18/11 6:38 PM, Jacob Carlborg wrote:

On 2011-06-18 07:00, Nick Sabalausky wrote:

Jacob Carlborgd...@me.com wrote in message
news:itgamg$2ggr$4...@digitalmars.com...

On 2011-06-17 18:45, Jose Armando Garcia wrote:

On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborgd...@me.com wrote:

On 2011-06-14 15:53, Andrei Alexandrescu wrote:
Instead of complaining about others ideas (I'll probably do that as
well
:)
), here's my idea:
https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D




From website:
Spec and Config Files
The dakefile and the orbspec file is written in Ruby. Why?


Why ruby and not D with mixin? I am willing to volunteer some time to
this if help is needed.

-Jose


As I stated just below "The dakefile and the orbspec file is written in
Ruby. Why?". D is too verbose for simple files like these. How would it
even work? Wrap everything in a main method, compile and then run?



That would be better than forcing Ruby on people.


So you prefer this, in favor of the Ruby syntax:

version_("1.0.0");
author("Jacob Carlborg");
type(Type.library);
imports(["a.d", "b.di"]); // an array of import files


Lol!

I was going to write exactly the same answer...


Re: DIP11: Automatic downloading of libraries

2011-06-19 Thread Jacob Carlborg

On 2011-06-18 21:35, Nick Sabalausky wrote:

Jacob Carlborgd...@me.com  wrote in message
news:iti310$2r4r$1...@digitalmars.com...

On 2011-06-18 07:00, Nick Sabalausky wrote:

Jacob Carlborgd...@me.com   wrote in message
news:itgamg$2ggr$4...@digitalmars.com...

On 2011-06-17 18:45, Jose Armando Garcia wrote:

On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborgd...@me.comwrote:

On 2011-06-14 15:53, Andrei Alexandrescu wrote:
Instead of complaining about others ideas (I'll probably do that as
well
:)
), here's my idea:
https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D



   From website:
Spec and Config Files
The dakefile and the orbspec file is written in Ruby. Why?


Why ruby and not D with mixin? I am willing to volunteer some time to
this if help is needed.

-Jose


As I stated just below The dakefile and the orbspec file is written in
Ruby. Why?. D is too verbose for simple files like these. How would it
even work? Wrap everything in a main method, compile and then run?



That would be better than forcing Ruby on people.


So you prefer this, in favor of the Ruby syntax:

version_("1.0.0");
author("Jacob Carlborg");
type(Type.library);
imports(["a.d", "b.di"]); // an array of import files

Or maybe it has to be:

Orb.create((Orb orb) {
 orb.version_("1.0.0");
 orb.author("Jacob Carlborg");
 orb.type(Type.library);
 orb.imports(["a.d", "b.di"]); // an array of import files
});

I think it's unnecessary verbose.


I'd probably consider something more like:

orb.ver = "1.0.0";
orb.author = "Jacob Carlborg";
orb.type = Type.library;
orb.imports = ["a.d", "b.di"]; // an array of import files


That would be doable in Ruby as well. I thought it would be better to not
have to write "orb." in front of every method. Note the following syntax
is not possible in Ruby:


ver = "1.0.0"
author = "Jacob Carlborg"
type = Type.library
imports = ["a.d", "b.di"]

The above syntax is what I would prefer, but it doesn't work in Ruby; it
would create local variables and not call instance methods. Because of
that I chose the syntax I chose, the least verbose syntax I could think of.



And yes, I think these would be better simply because they're in D. The user
doesn't have to switch languages.


BTW, DMD, Phobos and druntime are forcing makefiles on people (I hate
makefiles).


I hate makefiles too, but that's not an accurate comparison:

1. On windows, DMD comes with the make needed, and on linux everyone already
has GNU make. With Orb/Ruby, many people will have to go and download Ruby
and install it.


No need to download and install Ruby, it's embedded in the tool.


2. People who *use* DMD to build their software *never* have to read or
write a single line of makefile. *Only* people who modify the process of
building DMD/Phobos/druntime need to do that. But anyone (or at least most
people) who uses Orb to build their software will have to write Ruby. It
would only be comparable if Orb only used Ruby to build Orb itself.


Ok, fair enough.


DSSS forced an INI-similar syntax on people.



INI-syntax is trivial. Especially compared to Ruby (or D for that matter, to
be perfectly fair).


I was thinking that the Ruby syntax is as easy and trivial as the
INI-syntax if you just use the basics, like I have in the examples. No
need to use if-statements, loops or classes; that's just for packages
that need to do very special things.


To take the DSSS syntax again as an example:

# legal syntax
version (Windows) {
}

# illegal syntax
version (Windows)
{
}

I assume this is because the lexer/parser is very simple. You don't have 
this problem if you use a complete language for the config/spec files.



If the config/spec files should be in D then, as far as I know, the tool
needs to:

1. read the file
2. add a main method
3. write a new file
4. compile the new file
5. run the resulting binary



More like:

1. compile the pre-existing main-wrapper:

 // main_wrapper.d
 void main()
 {
 mixin(import("orbconf.d"));
 }

like this:

 $ dmd main_wrapper.d -J{path containing user's orbconf.d}

If the user specifies a filename other than the standard name, then it's
still not much more:

 // main_wrapperB.d
 void main()
 {
 mixin(import("std_orbconf.d"));
 }

Then write std_orbconf.d:

 // std_orbconf.d
 mixin(import("renamed_orbconf.d"));

$ dmd main_wrapperB.d -J{path containing user's renamed_orbconf.d}
-J{path containing std_orbconf.d}


Also, remember that this should only need to be rebuilt if renamed_orbconf.d
(or something it depends on) has changed. So you can do like rdmd does: call
dmd to *just* get the deps of the main_wrapper.d+orbconf.d combination
(which is much faster than an actual build) and only rebuild if they've
changed - which won't be often. And again even this is only needed when the
user isn't using the standard config file name.

2. run the resulting binary



This seems very unnecessary to me. Unnecessary IO, unnecessary
compilation, unnecessary 

Re: DIP11: Automatic downloading of libraries

2011-06-19 Thread Jacob Carlborg

On 2011-06-18 21:38, Nick Sabalausky wrote:

Jacob Carlborgd...@me.com  wrote in message
That doesn't address the matter of needing to install Ruby.


No need for installing, it's embedded in the tool.


It also throws away this stated benefit: "When the files are written in a
complete language you have great flexibility and can take full advantage of
[a full-fledged programming language]."


It's there if you need it, most people won't (I would guess).

--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-19 Thread Jacob Carlborg

On 2011-06-18 22:02, Andrei Alexandrescu wrote:

On 06/18/2011 02:35 PM, Nick Sabalausky wrote:

I'd probably consider something more like:

orb.ver = 1.0.0;
orb.author = Jacob Carlborg;
orb.type = Type.library;
orb.imports = [a.d, b.di]; // an array of import files

And yes, I think these would be better simply because they're in D.
The user
doesn't have to switch languages.


Just to add an opinion - I think doing this work in D would foster
creative uses of the language and be beneficial for improving the
language itself and its standard library.

Andrei


Fair point. But I'm using the tools I think are best fitted for the job.
If I don't think D is good enough for the job I won't use it. If it
shows that D is good enough I can use D instead. Note that the whole
tool is written in D; it's just the config/spec files that use Ruby.


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-19 Thread Lutger Blijdestijn
Jacob Carlborg wrote:

 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
 
 Instead of complaining about others ideas (I'll probably do that as well
 :) ), here's my idea:
 https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D
 
 I'm working on both of the tools mentioned in the above link. The ideas
 for the package manager are heavily based on Rubygems.
 

Looks good, and has a cool name too! I love the reference to the mars / 
phobos theme.

After 'cloning into orbit...', I think I'm missing a ruby ffi binding. Is it 
possible to build it already? Or is it too early for that?

If I'm not mistaken the dependency on ruby is nicely factored into a very 
small part of orbit and could easily be replaced if someone would be 
inclined to do so. I'd prefer this over ruby, but I prefer ruby over the 
dsss format. In the end, what matters is the value of the tool.


Re: DIP11: Automatic downloading of libraries

2011-06-19 Thread Jacob Carlborg

On 2011-06-19 15:32, Lutger Blijdestijn wrote:

Jacob Carlborg wrote:


On 2011-06-14 15:53, Andrei Alexandrescu wrote:

http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

Destroy.


Andrei


Instead of complaining about others ideas (I'll probably do that as well
:) ), here's my idea:
https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D

I'm working on both of the tools mentioned in the above link. The ideas
for the package manager are heavily based on Rubygems.



Looks good, and has a cool name too! I love the reference to the mars /
phobos theme.

After 'cloning into orbit...', I think I'm missing a ruby ffi binding. Is it
possible to build it already? Or is it too early for that?

If I'm not mistaken the dependency on ruby is nicely factored into a very
small part of orbit and could easily be replaced if someone would be
inclined to do so. I'd prefer this over ruby, but I prefer ruby over the
dsss format. In the end, what matters is the value of the tool.


Oh yeah. I have the Ruby bindings on my computer only. I'll upload the
bindings as well. The repository is not actually ready for the public
yet. I just created the repository so I could easily access the code on
all my computers, and now I had a use for the wiki as well.


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-19 Thread Johannes Pfau
Lutger Blijdestijn wrote:
Jacob Carlborg wrote:

 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
 
 Instead of complaining about others' ideas (I'll probably do that as
 well :) ), here's my idea:
 https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D
 
 I'm working on both of the tools mentioned in the above link. The
 ideas for the package manager are heavily based on Rubygems.
 

Looks good, and has a cool name too! I love the reference to the
mars / phobos theme.

After 'cloning into orbit...', I think I'm missing a ruby ffi binding.
Is it possible to build it already? Or is it too early for that?

If I'm not mistaken the dependency on ruby is nicely factored into a
very small part of orbit and could easily be replaced if someone would
be inclined to do so. I'd prefer this over ruby, but I prefer ruby
over the dsss format. In the end, what matters is the value of the
tool.

I personally think that ruby is a good choice for the config format
(lua, python, whatever would be fine too), as we definitely need a
programming language for advanced use cases (debian uses makefiles,
which are a pita, but together with bash and external tools they still
count as a programming language)

It should be noted though that replacing the config syntax later on will
be difficult: even if it's factored out nicely in the code, we
could have thousands of d packages using the old format. In order not
to break those, we'd have to deprecate the old format, but still leave
it available for some time, which leads to more dependencies and
problems... 

-- 
Johannes Pfau



Re: DIP11: Automatic downloading of libraries

2011-06-19 Thread Lutger Blijdestijn
Johannes Pfau wrote:

 Lutger Blijdestijn wrote:
Jacob Carlborg wrote:

 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

 Destroy.


 Andrei
 
 Instead of complaining about others' ideas (I'll probably do that as
 well :) ), here's my idea:
 https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D
 
 I'm working on both of the tools mentioned in the above link. The
 ideas for the package manager are heavily based on Rubygems.
 

Looks good, and has a cool name too! I love the reference to the
mars / phobos theme.

After 'cloning into orbit...', I think I'm missing a ruby ffi binding.
Is it possible to build it already? Or is it too early for that?

If I'm not mistaken the dependency on ruby is nicely factored into a
very small part of orbit and could easily be replaced if someone would
be inclined to do so. I'd prefer this over ruby, but I prefer ruby
over the dsss format. In the end, what matters is the value of the
tool.
 
 I personally think that ruby is a good choice for the config format
 (lua, python, whatever would be fine too), as we definitely need a
 programming language for advanced use cases (debian uses makefiles,
 which are a pita, but together with bash and external tools they still
 count as a programming language)
 
 It should be noted though that replacing the config syntax later on will
 be difficult: even if it's factored out nicely in the code, we
 could have thousands of d packages using the old format. In order not
 to break those, we'd have to deprecate the old format, but still leave
 it available for some time, which leads to more dependencies and
 problems...
 

For D programmers that need this kind of advanced functionality it means 
they have to learn ruby as well. Whereas it's pretty safe to assume they 
already know D :)

Another advantage of D is that build-related scripts and extensions can be 
distributed easily with orbit itself.

I'm thinking that maybe it is possible for dakefile.rb and dakefile.d to 
coexist in the same tool? I'm not sure if that creates problems, or if such 
extra complexity is worth it.

However, those that really want to use D could try to convince Jacob 
Carlborg that D is a good alternative by implementing it, if he is open to 
that.


Re: DIP11: Automatic downloading of libraries

2011-06-19 Thread Nick Sabalausky
Jacob Carlborg d...@me.com wrote in message 
news:itko61$1qdm$1...@digitalmars.com...
 On 2011-06-18 21:35, Nick Sabalausky wrote:

 I'd probably consider something more like:

 orb.ver = "1.0.0";
 orb.author = "Jacob Carlborg";
 orb.type = Type.library;
 orb.imports = ["a.d", "b.di"]; // an array of import files

 That would be doable in Ruby as well. I thought it would be better to not 
 have to write orb. in front of every method. Note the following syntax 
 is not possible in Ruby:

 ver = "1.0.0"
 author = "Jacob Carlborg"
 type = Type.library
 imports = ["a.d", "b.di"]

 The above syntax is what I would prefer, but it doesn't work in Ruby; it would 
 create local variables and not call instance methods. Because of that I 
 chose the syntax I chose, the least verbose syntax I could think of.


That syntax should be doable in D.
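A minimal sketch of what that could look like, assuming a hypothetical `Orb` type with plain fields (none of these names come from the actual tool):

```d
// Hypothetical sketch only: an orbspec written as plain D assignments.
// The tool would wrap this file in a main() and compile it; the Orb
// struct and its field names are assumptions, not Orbit's real API.
struct Orb
{
    string ver;
    string author;
    string type;
    string[] imports;
}

void main()
{
    Orb orb;
    orb.ver = "1.0.0";
    orb.author = "Jacob Carlborg";
    orb.type = "library";
    orb.imports = ["a.d", "b.di"]; // an array of import files
    // ...the tool would then read `orb` back, e.g. by serializing it.
}
```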


 DSSS forced an INI-similar syntax on people.


 INI-syntax is trivial. Especially compared to Ruby (or D for that matter, 
 to
 be perfectly fair).

 I was thinking that the Ruby syntax was as easy and trivial as the 
 INI-syntax if you just use the basics, like I have in the examples. No need 
 to use if-statements, loops or classes. That's just for packages that need 
 to do very special things.


But then the people who do such fancy things have to do it in Ruby instead 
of D.

 To take the DSSS syntax again as an example:

 # legal syntax
 version (Windows) {
 }

 # illegal syntax
 version (Windows)
 {
 }

 I assume this is because the lexer/parser is very simple. You don't have 
 this problem if you use a complete language for the config/spec files.


Right. And D is a complete language.


 1. The amount of extra stuff is fairly minimal. *Especially* in the 
 majority
 of cases where the user uses the standard name (orbconf.d or whatever 
 you
 want to call it).

 OK, I guess you can get away without the IO, but you still need the extra 
 processes.


That should be pretty quick.





Re: DIP11: Automatic downloading of libraries

2011-06-19 Thread Jacob Carlborg

On 2011-06-19 19:15, Johannes Pfau wrote:

Lutger Blijdestijn wrote:

Jacob Carlborg wrote:


On 2011-06-14 15:53, Andrei Alexandrescu wrote:

http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

Destroy.


Andrei


Instead of complaining about others' ideas (I'll probably do that as
well :) ), here's my idea:
https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D

I'm working on both of the tools mentioned in the above link. The
ideas for the package manager are heavily based on Rubygems.



Looks good, and has a cool name too! I love the reference to the
mars / phobos theme.

After 'cloning into orbit...', I think I'm missing a ruby ffi binding.
Is it possible to build it already? Or is it too early for that?

If I'm not mistaken the dependency on ruby is nicely factored into a
very small part of orbit and could easily be replaced if someone would
be inclined to do so. I'd prefer this over ruby, but I prefer ruby
over the dsss format. In the end, what matters is the value of the
tool.


I personally think that ruby is a good choice for the config format
(lua, python, whatever would be fine too), as we definitely need a
programming language for advanced use cases (debian uses makefiles,
which are a pita, but together with bash and external tools they still
count as a programming language)


I completely agree. A key reason why I chose Ruby is that it allows you 
to call a method without parentheses; I don't know about the other 
above-mentioned languages.



It should be noted though that replacing the config syntax later on will
be difficult: even if it's factored out nicely in the code, we
could have thousands of d packages using the old format. In order not
to break those, we'd have to deprecate the old format, but still leave
it available for some time, which leads to more dependencies and
problems...


Yes, that would be a big problem. But, the advantage we have is that we 
can change the language when developing the tool, if necessary. I mean 
before we get any more packages than just test packages.


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-19 Thread Jacob Carlborg

On 2011-06-19 19:37, Lutger Blijdestijn wrote:

Johannes Pfau wrote:


Lutger Blijdestijn wrote:

Jacob Carlborg wrote:


On 2011-06-14 15:53, Andrei Alexandrescu wrote:

http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

Destroy.


Andrei


Instead of complaining about others' ideas (I'll probably do that as
well :) ), here's my idea:
https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D

I'm working on both of the tools mentioned in the above link. The
ideas for the package manager are heavily based on Rubygems.



Looks good, and has a cool name too! I love the reference to the
mars / phobos theme.

After 'cloning into orbit...', I think I'm missing a ruby ffi binding.
Is it possible to build it already? Or is it too early for that?

If I'm not mistaken the dependency on ruby is nicely factored into a
very small part of orbit and could easily be replaced if someone would
be inclined to do so. I'd prefer this over ruby, but I prefer ruby
over the dsss format. In the end, what matters is the value of the
tool.


I personally think that ruby is a good choice for the config format
(lua, python, whatever would be fine too), as we definitely need a
programming language for advanced use cases (debian uses makefiles,
which are a pita, but together with bash and external tools they still
count as a programming language)

It should be noted though that replacing the config syntax later on will
be difficult: even if it's factored out nicely in the code, we
could have thousands of d packages using the old format. In order not
to break those, we'd have to deprecate the old format, but still leave
it available for some time, which leads to more dependencies and
problems...



For D programmers that need this kind of advanced functionality it means
they have to learn ruby as well. Whereas it's pretty safe to assume they
already know D :)

Another advantage of D is that build-related scripts and extensions can be
distributed easily with orbit itself.


That's true. It would be possible to write extensions in D even when the 
config language is Ruby, although it would be more complicated.



I'm thinking that maybe it is possible for dakefile.rb and dakefile.d to
coexist in the same tool? I'm not sure if that creates problems, or if such
extra complexity is worth it.


I don't think it's worth it. It also depends on how much the complexity 
increases.



However, those that really want to use D could try to convince Jacob
Carlborg that D is a good alternative by implementing it, if he is open to
that.


I'm always open to suggestions.

--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-19 Thread Jacob Carlborg

On 2011-06-19 20:41, Nick Sabalausky wrote:

Jacob Carlborgd...@me.com  wrote in message
news:itko61$1qdm$1...@digitalmars.com...

On 2011-06-18 21:35, Nick Sabalausky wrote:


I'd probably consider something more like:

orb.ver = "1.0.0";
orb.author = "Jacob Carlborg";
orb.type = Type.library;
orb.imports = ["a.d", "b.di"]; // an array of import files


That would be doable in Ruby as well. I thought it would be better to not
have to write orb. in front of every method. Note the following syntax
is not possible in Ruby:

ver = "1.0.0"
author = "Jacob Carlborg"
type = Type.library
imports = ["a.d", "b.di"]

The above syntax is what I would prefer, but it doesn't work in Ruby; it would
create local variables and not call instance methods. Because of that I
chose the syntax I chose, the least verbose syntax I could think of.



That syntax should be doable in D.



DSSS forced an INI-similar syntax on people.



INI-syntax is trivial. Especially compared to Ruby (or D for that matter,
to
be perfectly fair).


I was thinking that the Ruby syntax was as easy and trivial as the
INI-syntax if you just use the basics, like I have in the examples. No need
to use if-statements, loops or classes. That's just for packages that need
to do very special things.



But then the people who do such fancy things have to do it in Ruby instead
of D.


To take the DSSS syntax again as an example:

# legal syntax
version (Windows) {
}

# illegal syntax
version (Windows)
{
}

I assume this is because the lexer/parser is very simple. You don't have
this problem if you use a complete language for the config/spec files.



Right. And D is a complete language.



1. The amount of extra stuff is fairly minimal. *Especially* in the
majority
of cases where the user uses the standard name (orbconf.d or whatever
you
want to call it).


OK, I guess you can get away without the IO, but you still need the extra
processes.



That should be pretty quick.


Ok, for now I will continue with Ruby and see how it goes. One thing I 
do think looks really ugly in D is delegates. For the Dake config file, 
I'm thinking that it would allow several targets and tasks (like rake), 
which would look something like this (in Ruby):


target "name" do |t|
t.flags = "-L-lz"
end

In D this would look something like this:

target("name", (Target t) {
t.flags = "-L-lz";
});

Or with operator overload abuse:

target("name") in (Target t) {
t.flags = "-L-lz";
};

I would so love if this syntax (that's been suggested before) was supported:

target("name", Target t) {
t.flags = "-L-lz";
}

If anyone has better ideas for how this can be done, I'm listening.

One other thing: the syntax below can be thought of as a compile-time eval:

void main ()
{
mixin(import("file.d"));
}

Does anyone have an idea if it would be possible to do the equivalent of 
instance_eval, which is available in some scripting languages?
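For what it's worth, D's string mixins can get fairly close to instance_eval at compile time: mixing the user's file in inside a `with` block makes bare names resolve against a receiver. A hedged sketch (the `Orb` type and the inlined config string are invented for illustration; a real tool would load the file via import() and -J):

```d
// Sketch: approximating Ruby's instance_eval with a compile-time mixin.
struct Orb
{
    string ver;
    string author;
}

// Stands in for the user's config file; a token string keeps it as
// ordinary D statements.
enum userConfig = q{
    ver = "1.0.0";
    author = "Jacob Carlborg";
};

void main()
{
    Orb orb;
    with (orb)
    {
        mixin(userConfig); // bare ver/author resolve to orb's fields
    }
    assert(orb.ver == "1.0.0" && orb.author == "Jacob Carlborg");
}
```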


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-19 Thread Dmitry Olshansky

On 19.06.2011 23:57, Jacob Carlborg wrote:

On 2011-06-19 20:41, Nick Sabalausky wrote:

Jacob Carlborgd...@me.com  wrote in message
news:itko61$1qdm$1...@digitalmars.com...

On 2011-06-18 21:35, Nick Sabalausky wrote:


I'd probably consider something more like:

orb.ver = "1.0.0";
orb.author = "Jacob Carlborg";
orb.type = Type.library;
orb.imports = ["a.d", "b.di"]; // an array of import files


That would be doable in Ruby as well. I thought it would be better to 
not
have to write orb. in front of every method. Note the following 
syntax

is not possible in Ruby:

ver = "1.0.0"
author = "Jacob Carlborg"
type = Type.library
imports = ["a.d", "b.di"]

The above syntax is what I would prefer, but it doesn't work in Ruby; it 
would create local variables and not call instance methods. Because of that I
chose the syntax I chose, the least verbose syntax I could think of.



That syntax should be doable in D.



DSSS forced an INI-similar syntax on people.



INI-syntax is trivial. Especially compared to Ruby (or D for that 
matter,

to
be perfectly fair).


I was thinking that the Ruby syntax was as easy and trivial as the
INI-syntax if you just use the basics, like I have in the examples. 
No need
to use if-statements, loops or classes. That's just for packages 
that need

to do very special things.



But then the people who do such fancy things have to do it in Ruby 
instead

of D.


To take the DSSS syntax again as an example:

# legal syntax
version (Windows) {
}

# illegal syntax
version (Windows)
{
}

I assume this is because the lexer/parser is very simple. You don't 
have

this problem if you use a complete language for the config/spec files.



Right. And D is a complete language.



1. The amount of extra stuff is fairly minimal. *Especially* in the
majority
of cases where the user uses the standard name (orbconf.d or 
whatever

you
want to call it).


OK, I guess you can get away without the IO, but you still need the 
extra

processes.



That should be pretty quick.


Ok, for now I will continue with Ruby and see how it goes. One thing I 
do think looks really ugly in D is delegates. For the Dake config 
file, I'm thinking that it would allow several targets and tasks (like 
rake), which would look something like this (in Ruby):


target "name" do |t|
t.flags = "-L-lz"
end

In D this would look something like this:

target("name", (Target t) {
t.flags = "-L-lz";
});

Or with operator overload abuse:

target("name") in (Target t) {
t.flags = "-L-lz";
};

I would so love if this syntax (that's been suggested before) was 
supported:


target("name", Target t) {
t.flags = "-L-lz";
}


Why have name as a run-time parameter? I'd expect more like this (given there 
is Target struct or class):

//somewhere at top
Target cool_lib, ...;

then:
with(cool_lib) {
flags = "-L-lz";
}

I'd even expect special types like Executable, Library and so on.
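Dmitry's `with`-based shape works in D today; a small self-contained sketch (`Target` and its `flags` field are invented names, not from any real tool):

```d
// Sketch of the declare-then-configure style suggested above.
struct Target
{
    string flags;
}

// One declaration per build product; Executable/Library could be
// specialized wrappers around Target in a fuller design.
Target cool_lib;

void configure()
{
    with (cool_lib)
    {
        flags = "-L-lz"; // resolves to cool_lib.flags
    }
}

void main()
{
    configure();
    assert(cool_lib.flags == "-L-lz");
}
```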


If anyone has better ideas for how this can be done, I'm listening.

One other thing: the syntax below can be thought of as a compile-time eval:

void main ()
{
mixin(import("file.d"));
}

Does anyone have an idea if it would be possible to do the equivalent 
of instance_eval, which is available in some scripting 
languages?





--
Dmitry Olshansky



Re: DIP11: Automatic downloading of libraries

2011-06-18 Thread Daniel Murphy
One option that I see is to create a compiler plugin interface that lets a 
build tool/package manager hook the compiler's import resolution process.

A very (very) basic implementation: (windows only)
https://github.com/yebblies/dmd/tree/importresolve

For those who don't want to read the source code:
The user (or the build tool, or in sc.ini/dmd.conf) supplies a dll/so on the 
command line with:
dmd -ih=mylib.dll
Which exports a single function _importhandler that is called when a file 
is not found on the include path.  It passes the module name and the 
contents of the describing pragma, if any.
eg.
pragma(libver, "collection", "version", "hash")
import xpackage.xmodule;

calls
filename = importhandler("xpackage.xmodule", "collection", "version", 
"hash")

and lets the library download, update etc, and return the full filename of 
the required library. 
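From the description above (not from the actual patch), the plugin side of the interface would presumably look something like this in D, once writing shared libraries in D is practical; the signature is an assumption based on the prose:

```d
// Hedged sketch of an import-resolution plugin. The compiler would
// load the dll/so given via -ih= and call this for unresolved imports.
extern (C) const(char)* importhandler(
    const(char)* moduleName,  // e.g. "xpackage.xmodule"
    const(char)* collection,  // from pragma(libver, ...), may be null
    const(char)* ver,
    const(char)* hash)
{
    // Here the plugin would download or update the library, then
    // return the full filename of the fetched source (null = failure).
    return null;
}
```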




Re: DIP11: Automatic downloading of libraries

2011-06-18 Thread Ary Manzana

On 6/17/11 11:15 PM, Jacob Carlborg wrote:

On 2011-06-14 15:53, Andrei Alexandrescu wrote:

http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

Destroy.


Andrei


Instead of complaining about others' ideas (I'll probably do that as well
:) ), here's my idea:
https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D

I'm working on both of the tools mentioned in the above link. The ideas
for the package manager are heavily based on Rubygems.


Very nice :-)

I hope this wins, hehe...



Re: DIP11: Automatic downloading of libraries

2011-06-18 Thread Jacob Carlborg

On 2011-06-18 07:00, Nick Sabalausky wrote:

Jacob Carlborgd...@me.com  wrote in message
news:itgamg$2ggr$4...@digitalmars.com...

On 2011-06-17 18:45, Jose Armando Garcia wrote:

On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborgd...@me.com   wrote:

On 2011-06-14 15:53, Andrei Alexandrescu wrote:
Instead of complaining about others' ideas (I'll probably do that as well
:)
), here's my idea:
https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D



 From website:
Spec and Config Files
The dakefile and the orbspec file is written in Ruby. Why?


Why ruby and not D with mixin? I am willing to volunteer some time to
this if help is needed.

-Jose


As I stated just below "The dakefile and the orbspec file is written in
Ruby. Why?", D is too verbose for simple files like these. How would it
even work? Wrap everything in a main method, compile and then run?



That would be better than forcing Ruby on people.


So you prefer this, in favor of the Ruby syntax:

version_("1.0.0");
author("Jacob Carlborg");
type(Type.library);
imports(["a.d", "b.di"]); // an array of import files

Or maybe it has to be:

Orb.create((Orb orb) {
orb.version_("1.0.0");
orb.author("Jacob Carlborg");
orb.type(Type.library);
orb.imports(["a.d", "b.di"]); // an array of import files
});

I think it's unnecessarily verbose. BTW, DMD, Phobos and druntime are 
forcing makefiles on people (I hate makefiles). DSSS forced an 
INI-similar syntax on people.


If the config/spec files should be in D then, as far as I know, the tool 
needs to:


1. read the file
2. add a main method
3. write a new file
4. compile the new file
5. run the resulting binary

This seems very unnecessary to me. Unnecessary IO, unnecessary 
compilation, unnecessary processes (two new processes). The only thing 
this will do is slow everything down.


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-18 Thread Jacob Carlborg

On 2011-06-18 08:09, Daniel Murphy wrote:

One option that I see is to create a compiler plugin interface that lets a
build tool/package manager hook the compiler's import resolution process.

A very (very) basic implementation: (windows only)
https://github.com/yebblies/dmd/tree/importresolve

For those who don't want to read the source code:
The user (or the build tool, or in sc.ini/dmd.conf) supplies a dll/so on the
command line with:
dmd -ih=mylib.dll
Which exports a single function _importhandler that is called when a file
is not found on the include path.  It passes the module name and the
contents of the describing pragma, if any.
eg.
pragma(libver, "collection", "version", "hash")
import xpackage.xmodule;

calls
filename = importhandler("xpackage.xmodule", "collection", "version",
"hash")

and lets the library download, update etc, and return the full filename of
the required library.


That seems cool. But you would want to write the plugin in D, and that's 
not possible yet on all platforms? Or should everything be done with 
extern(C), does that work?


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-18 Thread Jacob Carlborg

On 2011-06-18 07:00, Nick Sabalausky wrote:

Jacob Carlborgd...@me.com  wrote in message
news:itgamg$2ggr$4...@digitalmars.com...

On 2011-06-17 18:45, Jose Armando Garcia wrote:

On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborgd...@me.com   wrote:

On 2011-06-14 15:53, Andrei Alexandrescu wrote:
Instead of complaining about others' ideas (I'll probably do that as well
:)
), here's my idea:
https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D



 From website:
Spec and Config Files
The dakefile and the orbspec file is written in Ruby. Why?


Why ruby and not D with mixin? I am willing to volunteer some time to
this if help is needed.

-Jose


As I stated just below "The dakefile and the orbspec file is written in
Ruby. Why?", D is too verbose for simple files like these. How would it
even work? Wrap everything in a main method, compile and then run?



That would be better than forcing Ruby on people.


You can just pretend it's not Ruby and think of it as a custom format 
instead :)


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-18 Thread Nick Sabalausky
Jacob Carlborg d...@me.com wrote in message 
news:iti310$2r4r$1...@digitalmars.com...
 On 2011-06-18 07:00, Nick Sabalausky wrote:
 Jacob Carlborgd...@me.com  wrote in message
 news:itgamg$2ggr$4...@digitalmars.com...
 On 2011-06-17 18:45, Jose Armando Garcia wrote:
 On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborgd...@me.com   wrote:
 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 Instead of complaining about others' ideas (I'll probably do that as 
 well
 :)
 ), here's my idea:
 https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D

  From website:
 Spec and Config Files
 The dakefile and the orbspec file is written in Ruby. Why?

 Why ruby and not D with mixin? I am willing to volunteer some time to
 this if help is needed.

 -Jose

 As I stated just below "The dakefile and the orbspec file is written in
 Ruby. Why?", D is too verbose for simple files like these. How would it
 even work? Wrap everything in a main method, compile and then run?


 That would be better than forcing Ruby on people.

 So you prefer this, in favor of the Ruby syntax:

 version_("1.0.0");
 author("Jacob Carlborg");
 type(Type.library);
 imports(["a.d", "b.di"]); // an array of import files

 Or maybe it has to be:

 Orb.create((Orb orb) {
 orb.version_("1.0.0");
 orb.author("Jacob Carlborg");
 orb.type(Type.library);
 orb.imports(["a.d", "b.di"]); // an array of import files
 });

 I think it's unnecessarily verbose.

I'd probably consider something more like:

orb.ver = "1.0.0";
orb.author = "Jacob Carlborg";
orb.type = Type.library;
orb.imports = ["a.d", "b.di"]; // an array of import files

And yes, I think these would be better simply because they're in D. The user 
doesn't have to switch languages.

 BTW, DMD, Phobos and druntime are forcing makefiles on people (I hate 
 makefiles).

I hate makefiles too, but that's not an accurate comparison:

1. On Windows, DMD comes with the make needed, and on Linux everyone already 
has GNU make. With Orb/Ruby, many people will have to go and download Ruby 
and install it.

2. People who *use* DMD to build their software *never* have to read or 
write a single line of makefile. *Only* people who modify the process of 
building DMD/Phobos/druntime need to do that. But anyone (or at least most 
people) who uses Orb to build their software will have to write Ruby. It 
would only be comparable if Orb only used Ruby to build Orb itself.


 DSSS forced an INI-similar syntax on people.


INI-syntax is trivial. Especially compared to Ruby (or D for that matter, to 
be perfectly fair).

 If the config/spec files should be in D then, as far as I know, the tool 
 needs to:

 1. read the file
 2. add a main method
 3. write a new file
 4. compile the new file
 5. run the resulting binary


More like:

1. compile the pre-existing main-wrapper:

// main_wrapper.d
void main()
{
mixin(import("orbconf.d"));
}

like this:

$ dmd main_wrapper.d -J{path containing user's orbconf.d}

If the user specifies a filename other than the standard name, then it's 
still not much more:

// main_wrapperB.d
void main()
{
mixin(import("std_orbconf.d"));
}

Then write std_orbconf.d:

// std_orbconf.d
mixin(import("renamed_orbconf.d"));

$ dmd main_wrapperB.d -J{path containing user's 
renamed_orbconf.d} -J{path containing std_orbconf.d}


Also, remember that this should only need to be rebuilt if renamed_orbconf.d 
(or something it depends on) has changed. So you can do like rdmd does: call 
dmd to *just* get the deps of the main_wrapper.d+orbconf.d combination 
(which is much faster than an actual build) and only rebuild if they've 
changed - which won't be often. And again even this is only needed when the 
user isn't using the standard config file name.

2. run the resulting binary
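The deps check described above could be sketched like this in D (the parsing is naive and all file names are placeholders; `dmd -deps=` and `-o-` are real switches, the rest is an assumption, not how rdmd actually does it):

```d
// Sketch: rebuild the wrapper only when a dependency is newer than
// the cached binary, in the spirit of rdmd.
import std.file : exists, timeLastModified;
import std.process : executeShell;
import std.stdio : File;
import std.string : indexOf;

bool needsRebuild(string wrapper, string binary, string depsFile)
{
    // -o- suppresses code generation; -deps= writes the import list,
    // which is much faster than a full build.
    executeShell("dmd -o- -deps=" ~ depsFile ~ " " ~ wrapper);
    if (!exists(binary))
        return true;
    auto binTime = timeLastModified(binary);
    foreach (line; File(depsFile).byLine)
    {
        // Each -deps line names the imported file in parentheses;
        // this naive parse grabs the first such path.
        auto open = line.indexOf('(');
        auto close = line.indexOf(')');
        if (open >= 0 && close > open)
        {
            auto dep = line[open + 1 .. close].idup;
            if (exists(dep) && timeLastModified(dep) > binTime)
                return true;
        }
    }
    return false;
}
```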


 This seems very unnecessary to me. Unnecessary IO, unnecessary 
 compilation, unnecessary processes (two new processes). The only thing 
 this will do is slow everything down.


1. The amount of extra stuff is fairly minimal. *Especially* in the majority 
of cases where the user uses the standard name (orbconf.d or whatever you 
want to call it).

2. Using Ruby will slow things down, too. It's not exactly known for being a 
language that's fast to compile/run on the level of D.





Re: DIP11: Automatic downloading of libraries

2011-06-18 Thread Nick Sabalausky
Jacob Carlborg d...@me.com wrote in message 
news:iti3bf$2r4r$4...@digitalmars.com...
 On 2011-06-18 07:00, Nick Sabalausky wrote:
 Jacob Carlborgd...@me.com  wrote in message
 news:itgamg$2ggr$4...@digitalmars.com...
 On 2011-06-17 18:45, Jose Armando Garcia wrote:
 On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborgd...@me.com   wrote:
 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 Instead of complaining about others' ideas (I'll probably do that as 
 well
 :)
 ), here's my idea:
 https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D

  From website:
 Spec and Config Files
 The dakefile and the orbspec file is written in Ruby. Why?

 Why ruby and not D with mixin? I am willing to volunteer some time to
 this if help is needed.

 -Jose

 As I stated just below "The dakefile and the orbspec file is written in
 Ruby. Why?", D is too verbose for simple files like these. How would it
 even work? Wrap everything in a main method, compile and then run?


 That would be better than forcing Ruby on people.

 You can just pretend it's not Ruby and think of it as a custom format 
 instead :)


That doesn't address the matter of needing to install Ruby.

It also throws away this stated benefit: "When the files are written in a 
complete language you have great flexibility and can take full advantage of 
[a full-fledged programming language]."





Re: DIP11: Automatic downloading of libraries

2011-06-18 Thread Andrei Alexandrescu

On 06/18/2011 02:35 PM, Nick Sabalausky wrote:

Jacob Carlborgd...@me.com  wrote in message
news:iti310$2r4r$1...@digitalmars.com...

On 2011-06-18 07:00, Nick Sabalausky wrote:

Jacob Carlborgd...@me.com   wrote in message
news:itgamg$2ggr$4...@digitalmars.com...

On 2011-06-17 18:45, Jose Armando Garcia wrote:

On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborgd...@me.comwrote:

On 2011-06-14 15:53, Andrei Alexandrescu wrote:
Instead of complaining about others' ideas (I'll probably do that as
well
:)
), here's my idea:
https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D



   From website:
Spec and Config Files
The dakefile and the orbspec file is written in Ruby. Why?


Why ruby and not D with mixin? I am willing to volunteer some time to
this if help is needed.

-Jose


As I stated just below "The dakefile and the orbspec file is written in
Ruby. Why?", D is too verbose for simple files like these. How would it
even work? Wrap everything in a main method, compile and then run?



That would be better than forcing Ruby on people.


So you prefer this, in favor of the Ruby syntax:

version_("1.0.0");
author("Jacob Carlborg");
type(Type.library);
imports(["a.d", "b.di"]); // an array of import files

Or maybe it has to be:

Orb.create((Orb orb) {
 orb.version_("1.0.0");
 orb.author("Jacob Carlborg");
 orb.type(Type.library);
 orb.imports(["a.d", "b.di"]); // an array of import files
});

I think it's unnecessarily verbose.


I'd probably consider something more like:

orb.ver = "1.0.0";
orb.author = "Jacob Carlborg";
orb.type = Type.library;
orb.imports = ["a.d", "b.di"]; // an array of import files

And yes, I think these would be better simply because they're in D. The user
doesn't have to switch languages.


Just to add an opinion - I think doing this work in D would foster 
creative uses of the language and be beneficial for improving the 
language itself and its standard library.


Andrei


Re: DIP11: Automatic downloading of libraries

2011-06-18 Thread Daniel Murphy
Jacob Carlborg d...@me.com wrote in message 
news:iti35g$2r4r$2...@digitalmars.com...

 That seems cool. But you would want to write the plugin in D, and that's 
 not possible yet on all platforms? Or should everything be done with 
 extern(C), does that work?


Yeah, it won't be possible to do it all in D until we have .so's working on 
linux etc, which I think is a while off yet.  Although this could be worked 
around by writing a small loader in c++ and using another process (written 
in D) to do the actual work.  Maybe it would be easier to build dmd as a 
shared lib (or a static lib) and just provide a different front...

My point is that the compiler can quite easily be modified to allow it to 
pass pretty much anything (missing imports, pragma(lib), etc) to a build 
tool, and it should be fairly straightforward for the build tool to pass 
things back in (adding objects to the linker etc).  This could allow single 
pass full compilation even when the libraries need to be fetched off the 
internet.  It could also allow separate compilation of several source files 
at once, without having to re-do parsing+semantic each time.  Can dmd 
currently do this?
Most importantly it keeps knowledge about urls and downloading files outside 
the compiler, where IMO it does not belong. 




Re: DIP11: Automatic downloading of libraries

2011-06-17 Thread Jacob Carlborg

On 2011-06-14 15:53, Andrei Alexandrescu wrote:

http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

Destroy.


Andrei


Instead of complaining about others' ideas (I'll probably do that as well 
:) ), here's my idea: 
https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D


I'm working on both of the tools mentioned in the above link. The ideas 
for the package manager are heavily based on Rubygems.


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-17 Thread Jacob Carlborg

On 2011-06-14 18:34, Vladimir Panteleev wrote:

On Tue, 14 Jun 2011 17:47:32 +0300, Andrei Alexandrescu
seewebsiteforem...@erdani.org wrote:


Finding weakness in a proposal is easy. The more difficult thing to do
is to find ways to improve it or propose alternatives.


I think the only solid alternative is to stop trying to reinvent the
wheel, and start up our photocopiers (copy CPAN/Gems/PECL).


That's what I'm doing (copying Rubygems), see 
https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D



--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-17 Thread Jacob Carlborg

On 2011-06-14 16:09, Vladimir Panteleev wrote:

On Tue, 14 Jun 2011 16:53:16 +0300, Andrei Alexandrescu
seewebsiteforem...@erdani.org wrote:


http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11


Why this is a bad idea:
1) It hard-codes URLs in source code. Projects often move to other
code-hosting services. PHP, Python, Perl (not sure about Ruby) all have a
central website which stores package metadata.
2) It requires that the raw source code be available via HTTP. Not all
code hosting services allow this. GitHub will redirect all HTTP requests
to HTTPS.
3) It only solves the problem for D modules, but not any other possible
dependencies.

I understand that this is a very urgent problem, but my opinion is that
this half-arsed solution will only delay implementing and cause
migration problems to a real solution, which should be able to handle
svn/hg/git checkout, proper packages with custom build scripts,
versioning, miscellaneous dependencies, publishing, etc.


I agree with this, see my ideas in an answer to the original post.

--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-17 Thread Jacob Carlborg

On 2011-06-14 17:31, Andrei Alexandrescu wrote:

On 6/14/11 10:27 AM, Graham Fawcett wrote:

On Tue, 14 Jun 2011 08:53:16 -0500, Andrei Alexandrescu wrote:


http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

Destroy.


Andrei


What's the expected payload format? A text/plain D source-file? A zip or
tar archive?


Text for now.


If the tool will download individual text files it will be quite 
ineffective, it's like comparing a svn checkout with a git clone. The 
git clone is a lot more effective. It would be better to download an 
archive of some sort.



If an archive, what's the required directory layout? Should
dsource.foo.baz be required to be in /dsource/foo/baz.d within the
archive?


I agree we need to address these in the future, and also binary
distributions (e.g. .di files + .a/.lib files).


And if not an archive, how to reasonably handle multi-file packages?


Consider a library acme consisting of three files: widgets.d,
gadgets.d, fidgets.d in "http://acme.com/d/". It also depends on the
external library monads on "http://nad.mo/d".

// User code:
pragma(lib, "acme", "http://acme.com/d/");
import acme.widgets;
... use ...

// widgets.d
// Assume it depends on other stuff in the same lib
// and on monads.d
pragma(lib, "monads", "http://nad.mo/d/");
import acme.gadgets, acme.fidgets, monads.io;

This is all that's needed for the compiler to download and compile
everything needed.


Andrei



--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-17 Thread David Nadlinger

On 6/17/11 6:18 PM, Jacob Carlborg wrote:

On 2011-06-14 18:34, Vladimir Panteleev wrote:

On Tue, 14 Jun 2011 17:47:32 +0300, Andrei Alexandrescu
seewebsiteforem...@erdani.org wrote:


Finding weakness in a proposal is easy. The more difficult thing to do
is to find ways to improve it or propose alternatives.


I think the only solid alternative is to stop trying to reinvent the
wheel, and start up our photocopiers (copy CPAN/Gems/PECL).


That's what I'm doing (copying Rubygems), see
https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D


Oh, sorry, I just wanted to fix the typo in the headline, but broke the 
link as well. Now at: 
https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D


David




Re: DIP11: Automatic downloading of libraries

2011-06-17 Thread David Nadlinger

On 6/17/11 6:15 PM, Jacob Carlborg wrote:

Instead of complaining about others' ideas (I'll probably do that as well
:) ), here's my idea:
https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D


Sorry, I just wanted to fix the headline, but that changed the URL as 
well. Now at: 
https://github.com/jacob-carlborg/orbit/wiki/Orbit-Package-Manager-for-D


David


Re: DIP11: Automatic downloading of libraries

2011-06-17 Thread Adam D. Ruppe
 It would be better to download an archive of some sort.

For cases when this is necessary, it'd be easy enough to grab
a .zip for the package rather than the .d for the module.

The .zips could take a cue from the Slackware package format too.
They simply put things in the appropriate folders
to be added to your installation, then zip them right up.

src/package.name/file.d
bin/package.name.dll
lib/package.name.lib
package.name.txt  (this contains metadata if you want it)


You can unzip it locally, in your dmd folder, or whatever and then
the files are available for use if your -L and -I paths are good.

Thus, the same basic idea covers binary library distributions too.

This could be in addition to grabbing the .d files alone.
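
A rough sketch of what unpacking such an archive could look like with Phobos' std.zip (the layout and function name are illustrative assumptions, not an actual tool's API):

```d
import std.file : mkdirRecurse, read, write;
import std.path : buildPath, dirName;
import std.zip;

// Hypothetical: unpack a package zip laid out as described above
// (src/, bin/, lib/, metadata file) into an install root, e.g. the
// dmd folder or a per-project directory.
void installPackage(string zipPath, string installRoot)
{
    auto archive = new ZipArchive(read(zipPath));
    foreach (name, member; archive.directory)
    {
        auto dest = buildPath(installRoot, name);
        mkdirRecurse(dirName(dest));        // create src/, bin/, lib/ as needed
        write(dest, archive.expand(member)); // decompress and write the file
    }
}
```

After unpacking, the existing -I and -L flags pick the files up with no further machinery.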


Re: DIP11: Automatic downloading of libraries

2011-06-17 Thread Jose Armando Garcia
On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborg d...@me.com wrote:
 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 Instead of complaining about others' ideas (I'll probably do that as well :)
 ), here's my idea:
 https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D

From website:
 Spec and Config Files
 The dakefile and the orbspec file is written in Ruby. Why?

Why ruby and not D with mixin? I am willing to volunteer some time to
this if help is needed.

-Jose


Re: DIP11: Automatic downloading of libraries

2011-06-17 Thread Jacob Carlborg

On 2011-06-15 17:37, David Gileadi wrote:

On 6/14/11 6:53 AM, Andrei Alexandrescu wrote:

http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

Destroy.


I keep thinking that if we build a separate dget, dmd could call it even
if there weren't a URL embedded in the source. If dget had a list of
central repositories then it could simply look in them for the
package/module and compilation would magically work with or without a
pragma.

In any case I suspect that a more formal versioning system is needed.
One way of supporting versions would be to make dget aware of source
control systems like svn, mercurial and git which support tags.

The pragma could support source control URLs, and could also include an
optional version. dget could be aware of common source control clients,
and could try calling them if installed, looking for the code tagged
with the provided version. If no version were specified then head/master
would be used.


If you just want to clone a repository from, e.g., GitHub or Bitbucket, 
you can just do a simple HTTP download; no need for an SCM client.


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-17 Thread Jacob Carlborg

On 2011-06-15 20:11, Robert Clipsham wrote:

On 15/06/2011 16:15, Andrei Alexandrescu wrote:

pragma(lib) doesn't (and can't) work as it is, why do you want to add
more useless pragmas?


Then we should yank it or change it. That pragma was defined in a
completely different context from today's, and right now we have a much
larger user base to draw experience and insight from.


Note that rebuild had pragma(link) which got around this problem - it
was the build tool, it could keep track of all of these without
modifying object files or other such hackery. So I guess pragma(lib)
could be fixed in the hypothetical tool.


It's too bad that pragma(lib) doesn't behave like pragma(link), it's 
quite an easy fix as well. It's also unnecessary to have two tools 
reading the same files, the build tool for reading pragma(link) and the 
compiler for reading the whole file to compile it.


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-17 Thread Jacob Carlborg

On 2011-06-17 18:45, Jose Armando Garcia wrote:

On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborgd...@me.com  wrote:

On 2011-06-14 15:53, Andrei Alexandrescu wrote:
Instead of complaining about others' ideas (I'll probably do that as well :)
), here's my idea:
https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D



From website:
Spec and Config Files
The dakefile and the orbspec file is written in Ruby. Why?


Why ruby and not D with mixin? I am willing to volunteer some time to
this if help is needed.

-Jose


As I stated just below "The dakefile and the orbspec file is written in 
Ruby. Why?". D is too verbose for simple files like these. How would it 
even work? Wrap everything in a main method, compile and then run?


--
/Jacob Carlborg


Re: DIP11: Automatic downloading of libraries

2011-06-17 Thread Nick Sabalausky
Jacob Carlborg d...@me.com wrote in message 
news:itgamg$2ggr$4...@digitalmars.com...
 On 2011-06-17 18:45, Jose Armando Garcia wrote:
 On Fri, Jun 17, 2011 at 1:15 PM, Jacob Carlborgd...@me.com  wrote:
 On 2011-06-14 15:53, Andrei Alexandrescu wrote:
 Instead of complaining about others' ideas (I'll probably do that as well 
 :)
 ), here's my idea:
 https://github.com/jacob-carlborg/orbit/wiki/Oribt-Package-Manager-for-D

 From website:
 Spec and Config Files
 The dakefile and the orbspec file is written in Ruby. Why?

 Why ruby and not D with mixin? I am willing to volunteer some time to
 this if help is needed.

 -Jose

 As I stated just below "The dakefile and the orbspec file is written in 
 Ruby. Why?". D is too verbose for simple files like these. How would it 
 even work? Wrap everything in a main method, compile and then run?


That would be better than forcing Ruby on people.




Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Steven Schveighoffer
On Wed, 15 Jun 2011 14:37:29 -0400, Andrei Alexandrescu  
seewebsiteforem...@erdani.org wrote:



On 6/15/11 12:35 PM, Steven Schveighoffer wrote:

I propose the following:


Excellent. I'm on board with everything. Could you please update the DIP  
reflecting these ideas?




Updated, also added it to the DIP index.

-Steve


Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Mike Wey

On 06/15/2011 11:10 PM, Andrei Alexandrescu wrote:

On 6/15/11 3:48 PM, Mike Wey wrote:

First i didn't read all of the posts in this thread, so some of these
might already be answered.

In the first paragraph the DIP talks about Automatic downloading of
*libraries* while all the posts here talk about downloading files.
This is also reflected in the Package case paragraph since the
compiler / separate tool will first try to download a .di file.
Which generally is a d import or header file, which doesn't need to
include the implementation, so the compiled library should also be
downloaded or linking would fail, right?


That is correct. We need to address the scenario in which a .di file
requires the existence of a .a/.lib file.


Also the proposal doesn't do anything with versioning, while larger
updates will probably get a different url, bug fixes might still
introduce regressions that silently break an application that uses the
library.


I think this is a policy matter that depends on the URLs published by
the library writer.


But a different url for every bugfix would be difficult to maintain.




And now you'll have to track down which library introduced the
bug; more importantly, your app broke overnight while you didn't
change anything (other than recompiling).

To find out how downloading the files would work i did some tests with
GtkD.

Building GtkD itself takes 1m56.
Building a HelloWorld app that uses the prebuilt library takes 0m01.

The HelloWorld app needs 133 files from GtkD.
Building the app and the files it needs takes 0m24.

The source of the HelloWord application can be found here:
http://www.dsource.org/projects/gtkd/browser/trunk/demos/gtk/HelloWorld.d



Thanks for the measurements. So my understanding is that the slow
helloworld essentially compiles those 133 files from GtkD in addition to
helloworld itself?


Yes, that's correct.




Thanks,

Andrei


--
Mike Wey


Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Ary Manzana

On 6/15/11 8:33 PM, Steven Schveighoffer wrote:

On Tue, 14 Jun 2011 09:53:16 -0400, Andrei Alexandrescu
seewebsiteforem...@erdani.org wrote:


http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

Destroy.


I put this as replies in several threads, but I'll throw it out there as
its own thread:

* You already agree that having the fetching done by a separate program
(possibly written in d) makes the solution cleaner (i.e. you are not
infiltrating the code that actually does compiling with code that does
network fetching).

* I think specifying the entire url in the pragma is akin to specifying
the full path of a given module on your local disk. I think it's not the
right place for it, the person who is building the code should be
responsible for where the modules come from, and import should continue
to specify the module relative to the include path.

* A perfect (IMO) way to configure the fetch tool is by using the same
mechanism that configures dmd on how to get modules -- the include path.
For instance -Ihttp://xxx.yyy.zzz/package can be passed to the compiler
or put into the dmd.conf.

* DMD already has a good mechanism to specify configuration and you
would barely have to change anything internally.

Here's how it would work. I'll specify how it goes from command line to
final (note the http path is not a valid path, it's just an example):

dmd -Ihttp://www.dsource.org/projects/dcollections/import testproj.d

1. dmd recognizes the url pattern and stores this as an 'external' path
2. dmd reads the file testproj.d and sees that it imports
dcollections.TreeMap
3. Using its non-external paths, it cannot find the module.
4. It calls:
dget -Ihttp://www.dsource.org/projects/dcollections/import
dcollections.TreeMap
5. dget checks its internal cache to see if the file
dcollections/TreeMap.[d|di] already exists -- not found
6. dget uses internal logic to generate a request to download either
a. an entire package which contains the requested import (preferred)
b. just the specific file dcollections/TreeMap.d
7. Using the url as a key, it stores the TreeMap.d file in a cache so it
doesn't have to download it again (can be stored globally or local to
the user/project)
8. Pipes the file to stdout, dmd reads the file, and returns 0 for success
9. dmd finishes compiling.


So if I have a library with three modules, a.d, b.d, c.d, which depend 
on another library, I should put that pragma(importpath) on each of them 
with the same url?


Or maybe I could create a fake d file with that pragma, and make the 
three modules import it so I just specify it once.


Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Steven Schveighoffer
On Wed, 15 Jun 2011 23:23:43 -0400, Ary Manzana a...@esperanto.org.ar  
wrote:



On 6/15/11 8:33 PM, Steven Schveighoffer wrote:

On Tue, 14 Jun 2011 09:53:16 -0400, Andrei Alexandrescu
seewebsiteforem...@erdani.org wrote:


http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

Destroy.


I put this as replies in several threads, but I'll throw it out there as
its own thread:

* You already agree that having the fetching done by a separate program
(possibly written in d) makes the solution cleaner (i.e. you are not
infiltrating the code that actually does compiling with code that does
network fetching).

* I think specifying the entire url in the pragma is akin to specifying
the full path of a given module on your local disk. I think it's not the
right place for it, the person who is building the code should be
responsible for where the modules come from, and import should continue
to specify the module relative to the include path.

* A perfect (IMO) way to configure the fetch tool is by using the same
mechanism that configures dmd on how to get modules -- the include path.
For instance -Ihttp://xxx.yyy.zzz/package can be passed to the compiler
or put into the dmd.conf.

* DMD already has a good mechanism to specify configuration and you
would barely have to change anything internally.

Here's how it would work. I'll specify how it goes from command line to
final (note the http path is not a valid path, it's just an example):

dmd -Ihttp://www.dsource.org/projects/dcollections/import testproj.d

1. dmd recognizes the url pattern and stores this as an 'external' path
2. dmd reads the file testproj.d and sees that it imports
dcollections.TreeMap
3. Using its non-external paths, it cannot find the module.
4. It calls:
dget -Ihttp://www.dsource.org/projects/dcollections/import
dcollections.TreeMap
5. dget checks its internal cache to see if the file
dcollections/TreeMap.[d|di] already exists -- not found
6. dget uses internal logic to generate a request to download either
a. an entire package which contains the requested import (preferred)
b. just the specific file dcollections/TreeMap.d
7. Using the url as a key, it stores the TreeMap.d file in a cache so it
doesn't have to download it again (can be stored globally or local to
the user/project)
8. Pipes the file to stdout, dmd reads the file, and returns 0 for  
success

9. dmd finishes compiling.


So if I have a library with three modules, a.d, b.d, c.d, which depend  
on another library, I should put that pragma(importpath) on each of them  
with the same url?


With the updated proposal (please see the DIP now), you can do -I to  
specify the import path on the command line.  Otherwise, yes, you have to  
duplicate it.


Or maybe I could create a fake d file with that pragma, and make the  
three modules import it so I just specify it once.


As long as the fake d file imports the files you need publicly, it should  
be pulled in, yes.  But the import pragma only affects imports from the  
current file.  I think that seems right, because you don't want to worry  
about importing files that might affect your import paths.  I look at it  
like version statements.


-Steve
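
Steps 5 through 8 of the quoted workflow could be sketched roughly like this (a hypothetical helper, assuming std.net.curl for the HTTP fetch and a per-user cache directory; none of this is an actual dget implementation):

```d
import std.array : replace;
import std.file : exists, mkdirRecurse, readText;
import std.net.curl : download;
import std.path : buildPath, dirName;

// Hypothetical core of dget: map a module name to a path relative to
// the URL include path, and fetch it only on a cache miss.
string fetchModule(string includeUrl, string moduleName, string cacheDir)
{
    // e.g. dcollections.TreeMap -> dcollections/TreeMap.d
    auto relPath = moduleName.replace(".", "/") ~ ".d";
    auto cached = buildPath(cacheDir, relPath);        // step 5: cache lookup

    if (!exists(cached))
    {
        mkdirRecurse(dirName(cached));
        download(includeUrl ~ "/" ~ relPath, cached);  // steps 6b and 7
    }
    return readText(cached);                           // step 8: hand source to dmd
}
```

The URL include path doubles as the cache key, so two projects pointing at the same -I URL naturally share one cached copy.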


Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Nick Sabalausky
Andrei Alexandrescu seewebsiteforem...@erdani.org wrote in message 
news:itb6os$161f$1...@digitalmars.com...
 On 6/15/11 3:47 PM, Nick Sabalausky wrote:
 Andrei Alexandrescuseewebsiteforem...@erdani.org  wrote in message
 news:itagdr$29mt$1...@digitalmars.com...
 On 6/15/11 8:33 AM, Steven Schveighoffer wrote:
 I can't really think of any other issues.

 Allow me to repeat: the scheme as you mention it is unable to figure and
 load dependent remote libraries for remote libraries. It's essentially a
 flat scheme in which you know only the top remote library but nothing
 about the rest.

 The dip takes care of that by using transitivity and by relying on the
 presence of dependency information exactly where it belongs - in the
 dependent source files.

 Dependency information is already in the source: The import statement.

 The actual path to the dependencies does not belong in the source file - 
 that
 *is* a configuration matter, and cramming it into the source only makes
 configuring harder.

 Why? I mean I can't believe it just because you are saying it. On the face 
 of it, it seems that on the contrary, there's no more need for crummy 
 little configuration files definition, discovery, adjustment, parsing, 
 etc. Clearly such are needed in certain situations but I see no reason on 
 why they must be the only way to go.


I do have reasons, but TBH I really don't have any more time or energy for 
these uphill debates right now.





Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Nick Sabalausky
Adam D. Ruppe destructiona...@gmail.com wrote in message 
news:it9ch1$140r$1...@digitalmars.com...
 Nick Sabalausky wrote:
 You import a module *from* a library.

 I don't think libraries, aside from individual modules, should even
 exist in D, since you can and should put all interdependent stuff
 in a single file.


Well, even if that's a valid point, the problem still remains that many 
people don't feel that way and many (most?) projects don't work that way. 
Should we just leave those people/projects out in the dark? Your approach, 
on the other hand, can be achieved by making each module a separate library.



 You could do the same thing with a library concept, but would you?
 Do you download a whole library just so you can implement a shared
 interface that is otherwise unrelated?


Libraries are small and disk space/bandwidth is cheap. And note that that's 
being said by the #1 old-hardware guy around here.



 Also, if a library needs any special setup step, then this
 won't even work anyway.

 This is true, but I see it as a strike *against* packaged libraries,
 not for it.


Even if it's inappropriate for most libraries, such as your example, I do 
think there are good uses for it. But regardless, operating on a per-lib 
basis instead of per-file doesn't *force* us to support such a feature if we 
decided we didn't want it.

 I think a substantial number of people (*especially* windows
 users - it's unrealistic to expect windows users to use anything
 like junctions) would expect to be able to use an already-
 installed library without special setup
 for every single project that uses it.

 The download program could automatically make locally available
 libs just work without hitting the network too.


I'm just opposed to "duplicate every lib in every project that uses it" 
being the default.


 I think things like apt-get and 0install are very good models for
 us to follow

 Blargh. I often think I'm the last person people should listen
 to when it comes to package management because the topic always
 brings three words to my mind: shitload of fuck.

 I've never seen one that I actually like. I've seen only two
 that I don't hate with the burning passion of 1,000 suns, and
 both of them are pretty minimal (Slackware's old tgz system and
 my build.d. Note: they both suck, just not as much as the
 alternatives)

 On the other hand, this is exactly why I jump in these threads.
 There's some small part of me that thinks maybe, just maybe,
 we can be the first to create a system that's not a steaming pile
 of putrid dogmeat.


 Some specific things I hate about the ones I've used:

 1) What if I want a version that isn't in the repos? Installing
 a piece of software myself almost *always* breaks something since
 the package manager is too stupid to even realize there's a
 potential conflict and just does its own thing.

 This was one of biggest problems with Ruby gems when I was forced
 to use it a few years back and it comes up virtually every time
 I have to use yum.

 This is why I really like it only downloading a module if it's
 missing. If I put the module in myself, it's knows to not bother
 with it - the compile succeeds, so there's no need to invoke the
 downloader at all.


 2) What if I want to keep an old version for one app, but have
 the new version for another? This is one reason why my program
 default to local subdirectories - so there'd be no risk of stepping
 on other apps at all.


 3) Can I run it as non-root? CPAN seemed almost decent
 to me until I had to use it on a client's shared host server. It
 failed miserably. (this was 2006, like with gems, maybe they
 fixed it since then.)

 If it insists on installing operating system files as a dependency
 to my module, it's evil.


 4) Is it going to suddenly stop working if I leave it for a few
 months? It's extremely annoying to me when every command just
 complains about 404 ("just run yum update!" if it's so easy, why
 doesn't the stupid thing do it itself?).

 This is one reason why I really want an immutable repository.
 Append to it if you want, but don't invalidate my slices plz.



 Another one of my big problems with Ruby gems was that it was
 extremely painful to install on other operating systems. At the
 time, installing it on FreeBSD and Solaris wasted way too much
 of my time.

 A good package manager should be OS agnostic in installation,
 use, and implementation. Its job is to fetch me some D stuff
 to use. Leave the operating system related stuff to me. I will
 not give it root under any circumstances - a compiler and build
 tool has no legitimate requirement for it.

 (btw if it needs root because some user wanted a system wide thing,
 that's ok. Just never *require* it.)

These are all very good points that I think we should definitely keep in 
mind when designing this system. Also, have you looked at 0install? I think 
it may match a lot of what you say you want here (granted I've never 
actually used it). It 

Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Don

Adam D. Ruppe wrote:

Nick Sabalausky wrote:

I think things like apt-get and 0install are very good models for
us to follow


Blargh. I often think I'm the last person people should listen
to when it comes to package management because the topic always
brings three words to my mind: shitload of fuck.

I've never seen one that I actually like. I've seen only two
that I don't hate with the burning passion of 1,000 suns, and
both of them are pretty minimal (Slackware's old tgz system and
my build.d. Note: they both suck, just not as much as the
alternatives)

On the other hand, this is exactly why I jump in these threads.
There's some small part of me that thinks maybe, just maybe,
we can be the first to create a system that's not a steaming pile
of putrid dogmeat.



 Some specific things I hate about the ones I've used:
[snip]

This seems to me to be very similar to the situation with search engines 
prior to google. Remember AltaVista, where two out of every three search 
results were a broken link?


Seems to me, that what's ultimately needed is a huge compatibility 
matrix, containing every version of every library, and its compatibility 
with every version of every other library. Or something like that.


A package manager shouldn't silently use packages which have never been 
used with each other before.
It's a very difficult problem, I think, but at least package owners 
could manually supply a list of other packages they've tested with.


Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Ary Manzana

On 6/14/11 8:53 PM, Andrei Alexandrescu wrote:

http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

Destroy.


Andrei


I think something like CPAN or RubyGems should be done in D, now :-)

I think it would give a big boost to D in many ways:

  * Have a repository of libraries searchable by command line and 
retrievable by command line. Many libraries provider can be registered, 
like dsource or others.
  * Then you can have a program that downloads all these libraries, one 
by one, and checks whether they compile, link, etc. correctly. If not, you've 
broken some of their code. You can choose to break it and notify them, 
or just not to break it.


A problem I see in D now is that it's constantly changing (ok, the spec 
is frozen, but somehow old libraries stop working) and this will give a 
lot of stability to D.


But please, don't reinvent the wheel. Solutions for this already exist 
and work pretty well.


Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Steven Schveighoffer
On Tue, 14 Jun 2011 16:47:01 -0400, Adam D. Ruppe  
destructiona...@gmail.com wrote:



BTW, I don't think it should be limited to just passing a
url to the helper program.

I'd do it something like this:

dget module.name url_from_pragma


I still don't like the url being stored in the source file -- where  
*specifically* on the network to get the file has nothing to do with  
compiling the code, and fixing a path problem shouldn't involve editing a  
source file -- there is too much risk.


For comparison, you don't have to specify a full path to the compiler of  
where to get modules, they are specified relative to the configured  
include paths.  I think this model works well, and we should be able to  
re-use it for this purpose also.  You could even just use urls as include  
paths:


-Ihttp://www.dsource.org/projects/dcollections/import

-Steve


Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Lars T. Kyllingstad
On Wed, 15 Jun 2011 08:57:04 -0400, Steven Schveighoffer wrote:

 On Tue, 14 Jun 2011 16:26:34 -0400, Andrei Alexandrescu
 seewebsiteforem...@erdani.org wrote:
 Would you agree with the setup in which the compiler interacts during
 compilation with an external executable, placed in the same dir as the
 compiler, and with this spec?

 dget url
 
 I'd rather have it be dget includepath module1 [module2 module3 ...]
 
 Then use -I to specify include paths that are url forms.  Then you
 specify the possible network include paths with:
 
 -Ihttp://path/to/source
 
 I think this goes well with the current dmd import model.
 
 dget would be responsible for caching and updating the cache if the
 remote file changes.

++vote;

-Lars


Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Steven Schveighoffer
On Tue, 14 Jun 2011 16:26:34 -0400, Andrei Alexandrescu  
seewebsiteforem...@erdani.org wrote:



On 6/14/11 2:34 PM, Robert Clipsham wrote:

On 14/06/2011 20:07, Andrei Alexandrescu wrote:

On 6/14/11 1:22 PM, Robert Clipsham wrote:

On 14/06/2011 14:53, Andrei Alexandrescu wrote:

http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

Destroy.


Andrei


This doesn't seem like the right solution to the problem - the correct
solution, in my opinion, is to have a build tool/package manager  
handle

this, not the compiler.

Problems I see:
* Remote server gets hacked, everyone using the library now
executes malicious code


This liability is not different from a traditional setup.


Perhaps, but with a proper package management tool this can be avoided
with sha sums etc, this can't happen with a direct get. Admittedly this
line of defense falls if the intermediate server is hacked.


You may want to update the proposal with the appropriate security  
artifacts.


[snip]

I don't have a problem with automatically downloading source during a
first build, I do see a problem with getting the compiler to do it
though. I don't believe the compiler should have anything to do with
getting source code, unless the compiler also becomes a package manager
and build tool.


Would you agree with the setup in which the compiler interacts during  
compilation with an external executable, placed in the same dir as the  
compiler, and with this spec?


dget url


I'd rather have it be dget includepath module1 [module2 module3 ...]

Then use -I to specify include paths that are url forms.  Then you specify  
the possible network include paths with:


-Ihttp://path/to/source

I think this goes well with the current dmd import model.

dget would be responsible for caching and updating the cache if the remote  
file changes.


-Steve


Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Steven Schveighoffer

On Tue, 14 Jun 2011 22:24:04 -0400, Nick Sabalausky a@a.a wrote:


Andrei Alexandrescu seewebsiteforem...@erdani.org wrote in message
news:4df7d92a.8050...@erdani.org...

On 6/14/11 4:38 PM, Nick Sabalausky wrote:

- Putting it in the compiler forces it all to be written in C++. As an
external tool, we could use D.


Having the compiler communicate with a download tool supplied with the
distribution seems to be a very promising approach that would address  
this

concern.



A two-way compiler/build-tool channel is messier than a build-tool-invoked
compiler, and I don't really see much benefit.


It's neither.  It's not a build tool, it's a fetch tool.  The build tool  
has nothing to do with getting the modules.


The drawback here is that the build tool has to interface with said fetch  
tool in order to do incremental builds.


However, we could make an assumption that files that are downloaded are  
rather static, and therefore, the target doesn't depend on them.  To  
override this, just do a rebuild-from-scratch on the rare occasion you  
have to update the files.



- By default, it ends up downloading an entire library one inferred
source
file at a time. Why? Libraries are a packaged whole. Standard behavior
should be for libraries should be treated as such.


Fair point, though in fact the effect is that one ends up downloading
exactly the used modules from that library and potentially others.



I really don't see a problem with that. And you'll typically end up  
needing

most, if not all, anyway. It's very difficult to see this as an actual
drawback.


When requesting a given module, it might be that it's part of a package (I  
would say most definitely).  The fetch tool could know to get the entire  
package and extract it into the cache.



- Does every project that uses libX have to download it separately? If
not
(or really even if so), how does the compiler handle different versions
of
the lib and prevent dll hell? Versioning seems to be an afterthought  
in
this DIP - and that's a guaranteed way to eventually find yourself in  
dll

hell.


Versioning is a policy matter that can, I think, be addressed within the
URL structure. This proposal tries to support versioning without
explicitly imposing it or standing in its way.



That's exactly my point. If you leave it open like that, everyone will come
up with their own way to do it, many will not even give it any attention at
all, and most of those approaches will end up being wrong WRT avoiding dll
hell. Hence, dll hell will get in and library users will end up having to
deal with it. The only way to avoid it is to design it out of the system up
front, *by explicitly imposing it*.


If the proposal becomes one where the include path specifies base urls,  
then the build tool can specify exact versions.


The cache should be responsible for making sure files named the same from  
different URLs do not conflict.


for example:

-Ihttp://url.to.project/v1.2.3

in one project and

-Ihttp://url.to.project/v1.2.4

in another.

I still feel that specifying the url in the source is the wrong approach  
-- it puts too much information into the source, and any small change  
requires modifying source code.  We don't specify full paths for local  
imports, why should we specify full paths for remote ones?


-Steve


Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Steven Schveighoffer
On Tue, 14 Jun 2011 09:53:16 -0400, Andrei Alexandrescu  
seewebsiteforem...@erdani.org wrote:



http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

Destroy.


I put this as replies in several threads, but I'll throw it out there as  
its own thread:


* You already agree that having the fetching done by a separate program  
(possibly written in d) makes the solution cleaner (i.e. you are not  
infiltrating the code that actually does compiling with code that does  
network fetching).


* I think specifying the entire url in the pragma is akin to specifying  
the full path of a given module on your local disk.  I think it's not the  
right place for it, the person who is building the code should be  
responsible for where the modules come from, and import should continue to  
specify the module relative to the include path.


* A perfect (IMO) way to configure the fetch tool is by using the same  
mechanism that configures dmd on how to get modules -- the include path.   
For instance -Ihttp://xxx.yyy.zzz/package can be passed to the compiler or  
put into the dmd.conf.


* DMD already has a good mechanism to specify configuration and you would  
barely have to change anything internally.


Here's how it would work.  I'll specify how it goes from command line to  
final (note the http path is not a valid path, it's just an example):


dmd -Ihttp://www.dsource.org/projects/dcollections/import testproj.d

1. dmd recognizes the url pattern and stores this as an 'external' path
2. dmd reads the file testproj.d and sees that it imports  
dcollections.TreeMap

3. Using its non-external paths, it cannot find the module.
4. It calls:
dget -Ihttp://www.dsource.org/projects/dcollections/import  
dcollections.TreeMap
5. dget checks its internal cache to see if the file  
dcollections/TreeMap.[d|di] already exists -- not found

6. dget uses internal logic to generate a request to download either
   a. an entire package which contains the requested import (preferred)
   b. just the specific file dcollections/TreeMap.d
7. Using the url as a key, it stores the TreeMap.d file in a cache so it  
doesn't have to download it again (can be stored globally or local to the  
user/project)

8. Pipes the file to stdout, dmd reads the file, and returns 0 for success
9. dmd finishes compiling.

On a second run to dmd, it would go through the same process, but dget  
succeeds on step 5 of finding it in the cache and pipes it to stdout.
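
The cache-or-fetch core of steps 5-8 could be sketched roughly as below. This is
a hypothetical sketch only: dget does not exist, and the flag parsing, cache
layout, and use of a curl binding are all assumptions, not a real design.

```d
// Hypothetical sketch of dget's cache-or-fetch behavior (steps 5-8 above).
// Tool name, flags, cache layout, and the HTTP helper are all assumed.
import std.file : exists, mkdirRecurse, read, write;
import std.path : buildPath, dirName;
import std.stdio : stdout;
import std.string : replace;
import std.net.curl : get;   // assumed available; not in Phobos as of 2011

int main(string[] args)
{
    // Invocation: dget -I<base-url> <module.name>
    auto baseUrl = args[1][2 .. $];                   // strip the leading "-I"
    auto relPath = args[2].replace(".", "/") ~ ".d";  // dcollections/TreeMap.d
    auto cached  = buildPath("dget-cache", relPath);

    if (!exists(cached))                              // step 5: cache miss
    {
        auto src = get(baseUrl ~ "/" ~ relPath);      // step 6b: fetch one file
        mkdirRecurse(dirName(cached));
        write(cached, src);                           // step 7: populate cache
    }
    stdout.rawWrite(cast(const(ubyte)[]) read(cached)); // step 8: pipe to dmd
    return 0;
}
```

A whole-package fetch (step 6a) would replace the single `get` call with a
download-and-extract of the package archive into the same cache directory.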


Some issues with this scheme:

1. dependency checking would be difficult for a build tool (like make) for  
doing incremental builds.  However, traditionally one does not specify  
standard library files as dependencies, so downloaded files would probably  
be under this same category.  I.e. if you need to rebuild, you'd have to  
clear the cache and do a make clean (or equivalent).  Another option is to  
have dget check to see if the file on the server has been modified.


2. It's possible that dget fetches files one at a time, which might be  
very slow (on the first build).  However, one can trigger whole package  
downloads easily enough (for example, by making the include path entry  
point at a zip file or tarball).  dget should be smart enough to handle  
extracting packages.


I can't really think of any other issues.

-Steve


Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Andrei Alexandrescu

On 6/15/11 7:53 AM, Steven Schveighoffer wrote:

On Tue, 14 Jun 2011 16:47:01 -0400, Adam D. Ruppe
destructiona...@gmail.com wrote:


BTW, I don't think it should be limited to just passing a
url to the helper program.

I'd do it something like this:

dget module.name url_from_pragma


I still don't like the url being stored in the source file -- where
*specifically* on the network to get the file has nothing to do with
compiling the code, and fixing a path problem shouldn't involve editing
a source file -- there is too much risk.


First, clearly we need command-line equivalents for the pragmas. They 
can be subsequently loaded from a config file. The embedded URLs are for 
people who want to distribute libraries without requiring their users to 
change their config files. I think that simplifies matters for many. 
Again - the ULTIMATE place where dependencies exist is in the source files.



For comparison, you don't have to specify a full path to the compiler of
where to get modules, they are specified relative to the configured
include paths. I think this model works well, and we should be able to
re-use it for this purpose also. You could even just use urls as include
paths:

-Ihttp://www.dsource.org/projects/dcollections/import


I also think that model works well, except HTTP does not offer search 
the same way a filesystem does. You could do that with FTP though.



Andrei


Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Andrei Alexandrescu

On 6/14/11 8:44 PM, Nick Sabalausky wrote:

Adam D. Ruppedestructiona...@gmail.com  wrote in message
news:it91b0$aa0$1...@digitalmars.com...

Nick Sabalausky wrote:

Just one extra deps-gathering invocation each time a
deps-gathering invocation finds unsatisfied dependencies, and *only*
the first time you build.


It could probably cache the last successful command...


Nothing would need to be cached. After the initial gather-everything-and-build
build, all it would ever have to do is exactly what RDMD already does right
now: run DMD once to find the deps, check them to see if anything needs
rebuilding, and if so, run DMD a second time to build. There'd never be any
need for more than those two invocations (and the first one tends to be much
faster anyway) until a new library dependency is introduced.


I think this works, but I personally find it clumsy. Particularly 
because when dmd fails, you don't know exactly why - it may have been an 
import, or it may have been something else. So the utility essentially 
needs to remember the last import attempted (which won't work once the 
compiler uses multiple threads) and scrape dmd's stderr output, parsing 
it for something that looks like a specific module not found error 
message (see http://arsdnet.net/dcode/build.d). It's quite a shaky 
design that relies on a bunch of stars aligning.


Andrei


Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Steven Schveighoffer
On Wed, 15 Jun 2011 09:53:31 -0400, Andrei Alexandrescu  
seewebsiteforem...@erdani.org wrote:



On 6/15/11 7:53 AM, Steven Schveighoffer wrote:

On Tue, 14 Jun 2011 16:47:01 -0400, Adam D. Ruppe
destructiona...@gmail.com wrote:


BTW, I don't think it should be limited to just passing a
url to the helper program.

I'd do it something like this:

dget module.name url_from_pragma


I still don't like the url being stored in the source file -- where
*specifically* on the network to get the file has nothing to do with
compiling the code, and fixing a path problem shouldn't involve editing
a source file -- there is too much risk.


First, clearly we need command-line equivalents for the pragmas. They  
can be subsequently loaded from a config file. The embedded URLs are for  
people who want to distribute libraries without requiring their users to  
change their config files. I think that simplifies matters for many.  
Again - the ULTIMATE place where dependencies exist is in the source  
files.


We have been getting along swimmingly without pragmas for adding local  
include paths.  Why do we need to add them using pragmas for network  
include paths?


Also, I don't see the major difference in someone who's making a piece of  
software from adding the include path to their source file vs. adding it  
to their build script.


But in any case, it doesn't matter if both options are available -- it  
doesn't hurt to have a pragma option as long as a config option is  
available.  I just don't want to *require* the pragma solution.





For comparison, you don't have to specify a full path to the compiler of
where to get modules, they are specified relative to the configured
include paths. I think this model works well, and we should be able to
re-use it for this purpose also. You could even just use urls as include
paths:

-Ihttp://www.dsource.org/projects/dcollections/import


I also think that model works well, except HTTP does not offer search  
the same way a filesystem does. You could do that with FTP though.


dget would just add the appropriate path:

import dcollections.TreeMap =
get  
http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.d

hm.. doesn't work
get  
http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.di

ok, there it is!
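
That probe order (.d first, falling back to .di) might look like the sketch
below inside the fetch tool. Everything here is an assumption about a tool
that doesn't exist yet; in particular, tryDownload is a hypothetical helper
assumed to return null when the URL yields a 404.

```d
// Sketch of the probe sequence shown above (hypothetical; tryDownload
// is an assumed helper that returns null on HTTP 404).
string fetchModuleSource(string baseUrl, string moduleName)
{
    import std.string : replace;
    auto rel = moduleName.replace(".", "/");   // dcollections/TreeMap
    foreach (ext; [".d", ".di"])
    {
        auto src = tryDownload(baseUrl ~ "/" ~ rel ~ ext);
        if (src !is null)
            return src;                        // found the .d, or fell back to .di
    }
    throw new Exception("cannot find module " ~ moduleName
                        ~ " under " ~ baseUrl);
}
```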

As I said in another post, you could also specify a zip file or tarball as  
a base path, and the whole package is downloaded instead.  We may need  
some sort of manifest in order to verify the import will be found,  
instead of downloading the entire package to find out.


-Steve


Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Andrei Alexandrescu

On 6/15/11 8:33 AM, Steven Schveighoffer wrote:

On Tue, 14 Jun 2011 09:53:16 -0400, Andrei Alexandrescu
seewebsiteforem...@erdani.org wrote:


http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

Destroy.


I put this as replies in several threads, but I'll throw it out there as
its own thread:

* You already agree that having the fetching done by a separate program
(possibly written in d) makes the solution cleaner (i.e. you are not
infiltrating the code that actually does compiling with code that does
network fetching).


I agree.


* I think specifying the entire url in the pragma is akin to specifying
the full path of a given module on your local disk. I think it's not the
right place for it, the person who is building the code should be
responsible for where the modules come from, and import should continue
to specify the module relative to the include path.


I understand. It hasn't been rare that I would have preferred to specify 
an -I equivalent through a pragma in my D programs. Otherwise all of a 
sudden I needed to have a more elaborate dmd/rdmd line, and then I 
thought, heck, I need a script or makefile or a dmd.conf to build this 
simple script... I don't think one is good and the other is bad. Both 
have their uses.


BTW, Perl and Python (and probably others) have a way to specify paths 
for imports.


http://www.perlhowto.com/extending_the_library_path
http://stackoverflow.com/questions/279237/python-import-a-module-from-a-folder


* A perfect (IMO) way to configure the fetch tool is by using the same
mechanism that configures dmd on how to get modules -- the include path.
For instance -Ihttp://xxx.yyy.zzz/package can be passed to the compiler
or put into the dmd.conf.


HTTP is not a filesystem so the mechanism must be different. I added a 
section Command-line equivalent: 
http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11#section10


My concern about using cmdline/conf exclusively remains. There must be a 
way to specify dependencies where they belong - with the source. That is 
_literally_ where they belong!


One additional problem is one remote library that depends on another. 
You end up needing to add K URLs where K is the number of dependent 
libraries. The process of doing so will be mightily annoying - repeated 
failure to compile and RTFMs.



* DMD already has a good mechanism to specify configuration and you
would barely have to change anything internally.

Here's how it would work. I'll specify how it goes from command line to
final (note the http path is not a valid path, it's just an example):

dmd -Ihttp://www.dsource.org/projects/dcollections/import testproj.d

1. dmd recognizes the url pattern and stores this as an 'external' path
2. dmd reads the file testproj.d and sees that it imports
dcollections.TreeMap
3. Using its non-external paths, it cannot find the module.
4. It calls:
dget -Ihttp://www.dsource.org/projects/dcollections/import
dcollections.TreeMap
5. dget checks its internal cache to see if the file
dcollections/TreeMap.[d|di] already exists -- not found
6. dget uses internal logic to generate a request to download either
a. an entire package which contains the requested import (preferred)
b. just the specific file dcollections/TreeMap.d
7. Using the url as a key, it stores the TreeMap.d file in a cache so it
doesn't have to download it again (can be stored globally or local to
the user/project)
8. Pipes the file to stdout, dmd reads the file, and returns 0 for success
9. dmd finishes compiling.


Not so fast. What if dcollections depends on stevesutils, to be found on 
http://www.stevesu.ti/ls, and larspath, to be found on http://la.rs/path? 
The thing will fail to compile, and there will be no informative message 
on what to do next.



Andrei


Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Andrei Alexandrescu

On 6/15/11 9:13 AM, Steven Schveighoffer wrote:

On Wed, 15 Jun 2011 09:53:31 -0400, Andrei Alexandrescu
seewebsiteforem...@erdani.org wrote:


On 6/15/11 7:53 AM, Steven Schveighoffer wrote:

On Tue, 14 Jun 2011 16:47:01 -0400, Adam D. Ruppe
destructiona...@gmail.com wrote:


BTW, I don't think it should be limited to just passing a
url to the helper program.

I'd do it something like this:

dget module.name url_from_pragma


I still don't like the url being stored in the source file -- where
*specifically* on the network to get the file has nothing to do with
compiling the code, and fixing a path problem shouldn't involve editing
a source file -- there is too much risk.


First, clearly we need command-line equivalents for the pragmas. They
can be subsequently loaded from a config file. The embedded URLs are
for people who want to distribute libraries without requiring their
users to change their config files. I think that simplifies matters
for many. Again - the ULTIMATE place where dependencies exist is in
the source files.


We have been getting along swimmingly without pragmas for adding local
include paths. Why do we need to add them using pragmas for network
include paths?


That doesn't mean the situation is beyond improvement. If I had my way 
I'd add pragma(liburl) AND pragma(libpath).



Also, I don't see the major difference in someone who's making a piece
of software from adding the include path to their source file vs. adding
it to their build script.


Because in the former case the whole need for a build script may be 
obviated. That's where I'm trying to be.



But in any case, it doesn't matter if both options are available -- it
doesn't hurt to have a pragma option as long as a config option is
available. I just don't want to *require* the pragma solution.


Sounds good. I actually had the same notion, just forgot to mention it 
in the dip (fixed).



For comparison, you don't have to specify a full path to the compiler of
where to get modules, they are specified relative to the configured
include paths. I think this model works well, and we should be able to
re-use it for this purpose also. You could even just use urls as include
paths:

-Ihttp://www.dsource.org/projects/dcollections/import


I also think that model works well, except HTTP does not offer search
the same way a filesystem does. You could do that with FTP though.


dget would just add the appropriate path:

import dcollections.TreeMap =
get
http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.d
hm.. doesn't work
get
http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.di
ok, there it is!


This assumes the URL contains the package prefix. That would work, but 
imposes too much on the URL structure. I find the notation -Upackage=url 
more general.



As I said in another post, you could also specify a zip file or tarball
as a base path, and the whole package is downloaded instead. We may need
some sort of manifest instead in order to verify the import will be
found instead of downloading the entire package to find out.


Sounds cool.


Andrei


Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Andrei Alexandrescu

On 6/15/11 8:33 AM, Steven Schveighoffer wrote:

I can't really think of any other issues.


Allow me to repeat: the scheme as you describe it is unable to discover and 
load the remote libraries that remote libraries themselves depend on. It's 
essentially a flat scheme in which you know only the top remote library but 
nothing about the rest.


The dip takes care of that by using transitivity and by relying on the 
presence of dependency information exactly where it belongs - in the 
dependent source files. Separating that information from source files 
has two liabilities. First, it breaks the whole transitivity thing. 
Second, it adds yet another itsy-bitsy pellet of metadata/config/whatevs 
files that need to be minded. I just don't see the advantage of imposing 
that.



Andrei



Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Dmitry Olshansky

On 15.06.2011 17:33, Steven Schveighoffer wrote:
On Tue, 14 Jun 2011 09:53:16 -0400, Andrei Alexandrescu 
seewebsiteforem...@erdani.org wrote:



http://www.wikiservice.at/d/wiki.cgi?LanguageDevel/DIPs/DIP11

Destroy.


I put this as replies in several threads, but I'll throw it out there 
as its own thread:


* You already agree that having the fetching done by a separate 
program (possibly written in d) makes the solution cleaner (i.e. you 
are not infiltrating the code that actually does compiling with code 
that does network fetching).


* I think specifying the entire url in the pragma is akin to 
specifying the full path of a given module on your local disk.  I 
think it's not the right place for it, the person who is building the 
code should be responsible for where the modules come from, and import 
should continue to specify the module relative to the include path.


* A perfect (IMO) way to configure the fetch tool is by using the same 
mechanism that configures dmd on how to get modules -- the include 
path.  For instance -Ihttp://xxx.yyy.zzz/package can be passed to the 
compiler or put into the dmd.conf.


* DMD already has a good mechanism to specify configuration and you 
would barely have to change anything internally.


Here's how it would work.  I'll specify how it goes from command line 
to final (note the http path is not a valid path, it's just an example):


dmd -Ihttp://www.dsource.org/projects/dcollections/import testproj.d


Now it's abundantly clear that dmd should have rdmd's 'make' 
functionality built-in. Otherwise you'd have to specify TreeMap.d (or the 
library) on the command line.




1. dmd recognizes the url pattern and stores this as an 'external' path
2. dmd reads the file testproj.d and sees that it imports 
dcollections.TreeMap

3. Using its non-external paths, it cannot find the module.
4. It calls:
dget -Ihttp://www.dsource.org/projects/dcollections/import 
dcollections.TreeMap
5. dget checks its internal cache to see if the file 
dcollections/TreeMap.[d|di] already exists -- not found

6. dget uses internal logic to generate a request to download either
   a. an entire package which contains the requested import (preferred)
   b. just the specific file dcollections/TreeMap.d
7. Using the url as a key, it stores the TreeMap.d file in a cache so 
it doesn't have to download it again (can be stored globally or local 
to the user/project)
8. Pipes the file to stdout, dmd reads the file, and returns 0 for 
success

9. dmd finishes compiling.

On a second run to dmd, it would go through the same process, but dget 
succeeds on step 5 of finding it in the cache and pipes it to stdout.


Some issues with this scheme:

1. dependency checking would be difficult for a build tool (like make) 
for doing incremental builds.  However, traditionally one does not 
specify standard library files as dependencies, so downloaded files 
would probably be under this same category.  I.e. if you need to 
rebuild, you'd have to clear the cache and do a make clean (or 
equivalent).  Another option is to have dget check to see if the file 
on the server has been modified.


2. It's possible that dget fetches files one at a time, which might be 
very slow (on the first build).  However, one can trigger whole 
package downloads easily enough (for example, by making the include 
path entry point at a zip file or tarball).  dget should be smart 
enough to handle extracting packages.


I can't really think of any other issues.

-Steve


dmd should be able to run multiple instances of dget without any 
conflicts (also parallel builds etc.).

Other then that it looks quite good to me.

P.S. It seems like dget is, in fact, dcache :)

--
Dmitry Olshansky



Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Steven Schveighoffer
On Wed, 15 Jun 2011 10:38:28 -0400, Andrei Alexandrescu  
seewebsiteforem...@erdani.org wrote:



On 6/15/11 8:33 AM, Steven Schveighoffer wrote:

I can't really think of any other issues.


Allow me to repeat: the scheme as you describe it is unable to discover and  
load the remote libraries that remote libraries themselves depend on. It's  
essentially a flat scheme in which you know only the top remote library but  
nothing about the rest.


The dip takes care of that by using transitivity and by relying on the  
presence of dependency information exactly where it belongs - in the  
dependent source files. Separating that information from source files  
has two liabilities. First, it breaks the whole transitivity thing.  
Second, it adds yet another itsy-bitsy pellet of metadata/config/whatevs  
files that need to be minded. I just don't see the advantage of imposing  
that.


Yes, these are good points.  But I think Dmitry brought up good points too  
(how do you specify that TreeMap.d needs to be compiled too?).


One possible solution is a central repository of code.  So basically, you  
can depend on other projects as long as they are sanely namespaced and  
live under one include path.


I think dsource should provide something like this.  For example:

http://www.dsource.org/import

then if you wanted dcollections.TreeMap, the import would be:

http://www.dsource.org/import/dcollections/TreeMap.d

Of course, that still doesn't solve Dmitry's problem.  We need to think of  
a way to do that too.


Still thinking

-Steve


Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Robert Clipsham

On 15/06/2011 15:33, Andrei Alexandrescu wrote:

On 6/15/11 9:13 AM, Steven Schveighoffer wrote:

We have been getting along swimmingly without pragmas for adding local
include paths. Why do we need to add them using pragmas for network
include paths?


That doesn't mean the situation is beyond improvement. If I had my way
I'd add pragma(liburl) AND pragma(libpath).


pragma(lib) doesn't (and can't) work as it is, so why do you want to add 
more useless pragmas? Command line arguments are the correct way to go 
here. Not to mention that paths most likely won't be standardized across 
machines, so the latter would be useless.



Also, I don't see the major difference in someone who's making a piece
of software from adding the include path to their source file vs. adding
it to their build script.


Because in the former case the whole need for a build script may be
obviated. That's where I'm trying to be.


This can't happen in a lot of cases, e.g. if you're interfacing with a 
scripting language you need certain files automatically generated during 
the build, etc. Admittedly, for the most part, you'll just want to be 
able to build libraries given a directory, or an executable given a file 
with _Dmain() in it. There'll still be a lot of cases where you want to 
specify some things to be dynamic libs, others static libs, and which, 
if any, of it you want in the resulting binary.



But in any case, it doesn't matter if both options are available -- it
doesn't hurt to have a pragma option as long as a config option is
available. I just don't want to *require* the pragma solution.


Sounds good. I actually had the same notion, just forgot to mention it
in the dip (fixed).


I'd agree with Steven that we need command line arguments for it, I 
completely disagree about pragmas though given that they don't work (as 
mentioned above). Just because I know you're going to ask:


# a.d has a pragma(lib) in it
$ dmd a.d
$ dmd b.d
$ dmd a.o b.o
Linker errors

This is unavoidable unless you put metadata in the object files, and 
even then you leave clutter in the resulting binary, unless you specify 
that the linker should remove it (I don't know if it can).



dget would just add the appropriate path:

import dcollections.TreeMap =
get
http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.d

hm.. doesn't work
get
http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.di

ok, there it is!


This assumes the URL contains the package prefix. That would work, but
imposes too much on the URL structure. I find the notation -Upackage=url
more general.


I personally think there should be a central repository listing packages 
and their URLs etc, which massively simplifies what needs passing on a 
command line. Eg -RmyPackage would cause myPackage to be looked up on 
the central server, which will have the relevant URL etc.
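
A registry lookup along those lines could be as small as the sketch below.
The registry host, query format, flag name, and plain-text response are all
made up for illustration; no such central server exists.

```d
// Hypothetical central-registry lookup backing a -RmyPackage style flag.
// Registry URL, protocol, and response format are assumptions.
string resolvePackageUrl(string packageName)
{
    import std.net.curl : get;
    import std.string : strip;
    // e.g. GET http://registry.example.org/lookup?pkg=myPackage
    //      -> "http://www.dsource.org/projects/myPackage/import"
    auto response = get("http://registry.example.org/lookup?pkg=" ~ packageName);
    return response.idup.strip();   // the returned base URL feeds dget as -I
}
```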


Of course, there should be some sort of override method for private 
remote servers.



As I said in another post, you could also specify a zip file or tarball
as a base path, and the whole package is downloaded instead. We may need
some sort of manifest instead in order to verify the import will be
found instead of downloading the entire package to find out.


Sounds cool.


I don't believe this tool should exist without compression being default.

--
Robert
http://octarineparrot.com/


Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Steven Schveighoffer
On Wed, 15 Jun 2011 10:33:21 -0400, Andrei Alexandrescu  
seewebsiteforem...@erdani.org wrote:



dget would just add the appropriate path:

import dcollections.TreeMap =
get
http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.d
hm.. doesn't work
get
http://www.dsource.org/projects/dcollections/import/dcollections/TreeMap.di
ok, there it is!


This assumes the URL contains the package prefix. That would work, but  
imposes too much on the URL structure. I find the notation -Upackage=url  
more general.


Look at the url again, I'll split out the include path and the import:

[http://www.dsource.org/projects/dcollections/import] /  
[dcollections/TreeMap.di]


There is nothing being assumed by dget.  It could try to import  
dcollections.TreeMap from some other remote path as well, and fail.  It  
follows the same rules as the current import scheme, just with URLs  
instead of paths.


-Steve


Re: DIP11: Automatic downloading of libraries

2011-06-15 Thread Andrei Alexandrescu

On 6/15/11 9:56 AM, Robert Clipsham wrote:

On 15/06/2011 15:33, Andrei Alexandrescu wrote:

On 6/15/11 9:13 AM, Steven Schveighoffer wrote:

We have been getting along swimmingly without pragmas for adding local
include paths. Why do we need to add them using pragmas for network
include paths?


That doesn't mean the situation is beyond improvement. If I had my way
I'd add pragma(liburl) AND pragma(libpath).


pragma(lib) doesn't (and can't) work as it is, why do you want to add
more useless pragmas?


Then we should yank it or change it. That pragma was defined in a 
completely different context from today's, and right now we have a much 
larger user base to draw experience and insight from.



Command line arguments are the correct way to go
here.


Why? At this point enough time has been collectively spent on this that 
I'm genuinely curious to find a reason that would have me go, huh, I 
hadn't thought about it that way. Fine, no need for the dip.



Not to mention that paths won't be standardized across machines
most likely so the latter would be useless.


version() for the win.


Also, I don't see the major difference in someone who's making a piece
of software from adding the include path to their source file vs. adding
it to their build script.


Because in the former case the whole need for a build script may be
obviated. That's where I'm trying to be.


This can't happen in a lot of cases, eg if you're interfacing with a
scripting language, you need certain files automatically generated
during the build etc.


Sure. For those cases, use tools. For everything else, there's liburl.


Admittedly, for the most part, you'll just want to be
able to build libraries given a directory or an executable given a file
with _Dmain() in.


That's the spirit. This is what the proposal aims at: you have the root 
file and the process takes care of everything - no configs, no metadata, 
no XML info, no command-line switches, no fuss, no muss.


With such a feature, hello world equivalents demoing dcollections, qt, 
mysql (some day), etc. etc. will be as simple as few-liners that anyone 
can download and compile flag-free. I find it difficult to understand 
how only a few find that appealing.



There'll still be a lot of cases where you want to
specify some things to be dynamic libs, other static libs, and what if
any of it you want in a resulting binary.


Sure. But don't you think it's okay to have the DIP leave such cases to 
other tools without impeding them in any way?



Sounds good. I actually had the same notion, just forgot to mention it
in the dip (fixed).


I'd agree with Steven that we need command line arguments for it, I
completely disagree about pragmas though given that they don't work (as
mentioned above). Just because I know you're going to ask:

# a.d has a pragma(lib) in it
$ dmd -c a.d
$ dmd -c b.d
$ dmd a.o b.o
Linker errors

This is unavoidable unless you put metadata in the object files, and
even then you leave clutter in the resulting binary, unless you specify
that the linker should remove it (I don't know if it can).


I now understand, thanks. So I take it a compile-and-link command would 
succeed, whereas a compile-separately succession of commands wouldn't? 
That wouldn't mean the pragma doesn't work, just that it only works 
under certain build scenarios.



This assumes the URL contains the package prefix. That would work, but
imposes too much on the URL structure. I find the notation -Upackage=url
more general.
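A tool consuming such flags would just collect the package-to-URL mappings before resolving imports. Here is a minimal sketch of parsing the -Upackage=url notation from the discussion (the flag syntax comes from this thread; it is not an implemented dmd option):

```python
def parse_u_flags(args):
    """Collect -Upackage=url mappings from a command line.
    Unrelated arguments are left alone for the compiler to handle."""
    mapping = {}
    for arg in args:
        if arg.startswith("-U") and "=" in arg:
            pkg, url = arg[2:].split("=", 1)
            mapping[pkg] = url
    return mapping

flags = ["-Udcollections=http://www.dsource.org/projects/dcollections/import", "-O"]
print(parse_u_flags(flags))
```

Because the package name is given explicitly, the URL itself is free to have any structure, which is the generality being argued for here.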


I personally think there should be a central repository listing packages
and their URLs etc, which massively simplifies what needs passing on a
command line. Eg -RmyPackage would cause myPackage to be looked up on
the central server, which will have the relevant URL etc.

Of course, there should be some sort of override method for private
remote servers.
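A registry lookup with a private-server override might work roughly like this. All names and the registry contents below are invented for illustration; no such central server exists yet:

```python
# Hypothetical central registry mapping package names to base URLs.
CENTRAL_REGISTRY = {
    "dcollections": "http://www.dsource.org/projects/dcollections/import",
}

def resolve_package(name, overrides=None):
    """Resolve a package name to a base URL, preferring any locally
    configured override (e.g. a private server) over the central registry."""
    if overrides and name in overrides:
        return overrides[name]
    if name in CENTRAL_REGISTRY:
        return CENTRAL_REGISTRY[name]
    raise KeyError("unknown package: " + name)

print(resolve_package("dcollections"))
```

The override table is what a local dmd.conf entry would amount to: consult the private mirror first, fall back to the central listing.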


That is tantamount to planting a flag in the distributed dmd.conf. 
Sounds fine.



As I said in another post, you could also specify a zip file or tarball
as a base path, and the whole package is downloaded instead. We may need
some sort of manifest in order to verify the import will be found,
instead of downloading the entire package to find out.
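The manifest idea could work roughly like this: fetch a small file listing the modules an archive contains, and only download the archive if the wanted module is listed. This is a sketch only; the one-module-per-line manifest format is an assumption:

```python
def archive_contains(manifest_text, module):
    """Check a plain-text manifest (one module name per line) for a module,
    so the client can skip downloading an archive that won't satisfy the import."""
    modules = {line.strip() for line in manifest_text.splitlines() if line.strip()}
    return module in modules

manifest = """\
dcollections.TreeMap
dcollections.HashMap
"""
print(archive_contains(manifest, "dcollections.TreeMap"))   # a hit
print(archive_contains(manifest, "dcollections.LinkList"))  # a miss
```

The manifest is a few bytes, so a failed lookup costs almost nothing compared with pulling the whole tarball.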


Sounds cool.


I don't believe this tool should exist without compression being default.


Hm. Well fine.


Andrei

