A suggestion for modules names / sharing code between projects

2016-02-29 Thread Sebastien Alaiwan via Digitalmars-d

Hi all,

I've come across the following problem a number of times.
Let's say I have projects A and B, sharing code but having
different tree structures.


$ find projectA
./projectA/internal/math/algo.d
./projectA/internal/math/lcp.d
./projectA/internal/math/optimize.d
./projectA/gui/main.d

$ find projectB
./projectB/app/render/gfx.d
./projectB/app/render/algo.d
./projectB/app/physics/math/algo.d
./projectB/app/physics/math/lcp.d
./projectB/app/physics/math/optimize.d
./projectB/main.d

The directory "math" is shared between projects. It actually is 
an external/submodule. So it has a standalone existence as a 
library, and might one day be used by projectC.

(In the context of this issue, I'm using separate compilation).

I'd like to be able to write, in projectA's main:

import internal.math.optimize;

This requires me to add, at the beginning of "optimize.d" file, 
this module definition:

module internal.math.optimize;

However, this "optimize.d" file is shared between projects; with 
this declaration, it becomes specific to projectA.
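A minimal sketch of the conflict (paths taken from the layout above):

```d
// projectA/internal/math/optimize.d
// This declaration matches projectA's tree...
module internal.math.optimize;

// ...but the very same file, checked out as a submodule in projectB
// under app/physics/math/, would need a different declaration there:
// module app.physics.math.optimize;
```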


How am I supposed to share code between projects then?

Clearly, putting everything back into the same "file namespace" by 
adding every subdirectory of my project as an import path on the 
command line is not going to scale.


After many projects spent thinking about this issue, I came to the 
conclusion that being able to specify, on the command line, the 
module name of the module currently being compiled, thus 
overriding the default of inferring it from the file's basename, 
would solve this problem.
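Something along these lines (the flag name is hypothetical; I am not 
claiming any current D compiler has exactly this option):

```sh
# compile the shared file under a project-specific module name,
# with no "module" directive in the file itself
gdc -c internal/math/optimize.d \
    -fmodule-name=internal.math.optimize           # in projectA
gdc -c app/physics/math/optimize.d \
    -fmodule-name=app.physics.math.optimize        # in projectB
```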


As a side-problem related to this one, having to repeat the 
"absolute" module path in all module declarations and imports is 
tedious. What if I change the name of the package? Now I might 
have to change hundreds of module declarations from "module 
oldname.algo;" to "module newname.algo;".


Clearly, something must be wrong here (and I hope it's me!).
I'd be very happy if someone showed me how to solve this problem; 
it has bothered me for ages.






Re: A suggestion for modules names / sharing code between projects

2016-02-29 Thread Sebastien Alaiwan via Digitalmars-d
On Monday, 29 February 2016 at 19:23:08 UTC, Jonathan M Davis 
wrote:
My solution would be to never have the same file be part of 
multiple projects. Share code using libraries rather than just 
sharing the files. Then modules won't be changing where they 
are in a project or anything like that. The shared code goes 
into a library, and whichever projects need it link against 
that library. You get better modularization and encapsulation 
that way and completely avoid the problem that you're having.
Thanks for your answer; from what I understand, you're basically 
telling me to only import binary/precompiled libraries, right?


Although I have to admit I had never considered this solution, 
"never have the same file be part of multiple projects" seems 
extremely restrictive to me.
How is that going to work if I want to share heavily templated 
code, like a container library?


Re: A suggestion for modules names / sharing code between projects

2016-02-29 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 29 February 2016 at 19:56:20 UTC, Jesse Phillips wrote:

I've used this pattern.

 ./projectA/lib/math/algo.d
 ./projectA/lib/math/lcp.d
 ./projectA/lib/math/optimize.d
 ./projectA/gui/main.d

 ./projectB/app/render/gfx.d
 ./projectB/app/render/algo.d
 ./projectB/lib/math/algo.d
 ./projectB/lib/math/lcp.d
 ./projectB/lib/math/optimize.d
 ./projectB/main.d

Dub doesn't like me too much, but in general it works.

Note that it doesn't matter where your modules live, if they 
have the declared module name in the file:


module math.optimize;

If DMD reads this file all your imports should look like:

import math.optimize;

Even if the file is in app/physics/math.

Yeah, I'm using this pattern too at the moment; although I'm 
trying to avoid having these redundant module declaration 
directives at the beginning of each of my library files.
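For reference, with the layouts above this pattern works out to 
something like the following under separate compilation (gdc 
invocations assumed):

```sh
# every file under lib/math declares "module math.xxx;",
# so both projects import it the same way ("import math.optimize;")
gdc -c -I projectA/lib projectA/gui/main.d
gdc -c -I projectB/lib projectB/main.d
```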




Re: A suggestion for modules names / sharing code between projects

2016-02-29 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 29 February 2016 at 20:59:45 UTC, Adam D. Ruppe wrote:
On Monday, 29 February 2016 at 20:05:11 UTC, Sebastien Alaiwan 
wrote:
Although, I'm trying to avoid having these redundant module 
declaration directives at the beginning of each of my library 
files.


Those module declarations aren't redundant - they are virtually 
required (I think it is a mistake that they aren't explicitly 
required in all cases, actually)


The file layout does not matter to the language itself. Only 
that module declaration does - it is NOT optional if you want a 
package name.


$ find
main.d
foo/hello.d

$ cat main.d
import foo.hello;

$ cat foo/hello.d
module foo.hello;

Ok, so now let's say I rename the directory "foo" to "lib". If I 
don't change "import foo.hello" to "import lib.hello", how is the 
compiler going to find "hello.d"?

(As a reminder, as I said, I'm using separate compilation)




Re: A suggestion for modules names / sharing code between projects

2016-02-29 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 29 February 2016 at 21:33:37 UTC, Adam D. Ruppe wrote:
On Monday, 29 February 2016 at 21:04:52 UTC, Sebastien Alaiwan 
wrote:
Ok, so now let's say I rename the directory "foo" to "lib". If 
I don't change "import foo.hello" to "import lib.hello", 
how is the compiler going to find "hello.d"?


You have to tell the compiler where it is.

Is it only possible with separate compilation?


Either way, the module name is *not* optional.


(As a reminder, as I said, I'm using separate compilation)


meh that's part of your problem, why are you doing it that way?
Because it reduces turnaround time (this is something I actually 
measured on the project I'm currently working on). Having to 
recompile everything each time I make a modification takes up to 
25s on this project. By the way, I'm using gdc. Is that also part 
of my problem? :-)





Re: A suggestion for modules names / sharing code between projects

2016-02-29 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 29 February 2016 at 21:35:48 UTC, Adam D. Ruppe wrote:
On Monday, 29 February 2016 at 19:03:53 UTC, Sebastien Alaiwan 
wrote:
What if I change the name of the package? Now I might have to 
change hundreds of module declarations from "module 
oldname.algo;" to "module newname.algo;".


That's easy to automate, just run a find/replace across all the 
files, or if you want to go really fancy, modify hackerpilot's 
dfix to do it for you.


Yes, I know; but how can you say there's no redundancy when I 
have to rely on find/replace to keep all these declarations in 
sync?


Re: A suggestion for modules names / sharing code between projects

2016-02-29 Thread Sebastien Alaiwan via Digitalmars-d

Hi Mike, thanks for taking the time to answer.
If I understand you correctly, you're advising me to keep my file 
hierarchy mostly in sync with my module hierarchy (except for one 
potential leading "src/" or "libs/" that I would add to the 
import search directories). That's fine with me.


However, what's the point of having module declaration 
directives in the language, then?


If I understand correctly, the only interesting use of these 
directives makes separate compilation impossible.


I know dmd is lightning-fast and everything, and not having to 
manage build dependencies anymore is very attractive.


However, we still need separate compilation. Otherwise your 
turnaround time is going to resemble a tractor-pulling 
competition as your project grows. Recompiling everything 
every time you change one module is not an option; some of the 
source files I work on at my company take 30s to compile 
*alone*, and such times are easy to reach, especially when the 
language has features like templates and CTFE (Pegged, anyone?).
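To make the turnaround argument concrete, here is a minimal sketch of 
a dependency-aware separate-compilation setup in GNU make (file names 
and flags are illustrative; recipe lines must be tab-indented):

```make
# recompile only the .o files whose .d sources changed, then relink
SRCS := src/main.d lib_algo/array2d.d lib_algo/stack.d
OBJS := $(SRCS:%.d=obj/%.o)

bin/app: $(OBJS)
	@mkdir -p $(dir $@)
	gdc -o $@ $^

obj/%.o: %.d
	@mkdir -p $(dir $@)
	gdc -c -I . -o $@ $<
```

Touching one module then costs one compile plus a link, instead of 
recompiling the whole program.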





Re: A suggestion for modules names / sharing code between projects

2016-03-02 Thread Sebastien Alaiwan via Digitalmars-d
On Wednesday, 2 March 2016 at 00:16:57 UTC, Guillaume Piolat 
wrote:

Would this work?

1. pick a single module name like

module math.optimize;

2. import that module with:

import math.optimize;

3. put this module in a hierarchy like that:

math/optimize.d

4. pass -I to the compiler

However it may clutter your module namespace a bit more.


Yeah, this is what I'm going to do: a flat hierarchy, with one 
'src' directory and one 'extra' directory in the project root, 
globally unique package names, passing -Isrc and -Iextra to the 
compiler, and ... module declaration directives.

I was trying to avoid them, but maybe it's not possible.

I still think that software entities shouldn't depend upon their 
own name, or their absolute location in the top-level namespace.


This is true for classes, functions/methods, variables, 
structures ... maybe this rule becomes invalid when third-party 
packages enter a project?




Re: A suggestion for modules names / sharing code between projects

2016-03-02 Thread Sebastien Alaiwan via Digitalmars-d

On Wednesday, 2 March 2016 at 08:50:50 UTC, Mike Parker wrote:
I'm curious what sort of system you have in mind. How does the 
current system not work for you and what would you prefer to 
see?


First, you must know that I've been a C++ programmer for way too 
long, and that my way of thinking about modules has probably been 
- badly - shaped accordingly :-).


However, it feels wrong that modules (foo, bar) inside the same 
package ("pkg") need to repeat the "absolute" package name when 
referencing each other, I mean "import pkg.foo" instead of 
"import foo". Not because of the extra typing, but because of the 
dependency it introduces. But as I said, maybe my vision is wrong.


I'm going to answer your first question, I hope you don't get 
bored before the end ;-)


First some generic stuff:
- The libraries I write are source-only, and I never install them 
system-wide, as many projects might depend on different revisions 
of one library.
- I only work with separate compilation, using gdc (Windows-dmd 
produces OMF, which is a dealbreaker; but I use rdmd a lot for 
small programs).
- I often mix C code with D, and sometimes C++ code. I compile 
them using gcc/g++.
- My build system is composed of simple dependency-aware 
non-recursive makefiles (BTW, this is a great way to keep your 
code decoupled, otherwise you might not get feedback that you're 
making a dependency mess soon enough!)


This is the best way I found to be able to check out one project, 
type 'make', and have everything build properly, whether I'm 
doing it on GNU/Linux or on Windows (MSYS).


Everything is under version control using GNU Bazaar (which 
basically is to git what D is to C++).


I have a small library, which I call "lib_algo", containing 
semi-related utilities (containers, topological sort, etc.) which 
I use in many projects.


$ cd myLibs
$ find lib_algo -name "*.d" -or -name "*.mk" -or -name "Makefile"
lib_algo/options.d
lib_algo/misc.d
lib_algo/pointer.d
lib_algo/set.d
lib_algo/queue.d
lib_algo/algebraic.d
lib_algo/vector.d
lib_algo/bresenham.d
lib_algo/topological.d
lib_algo/fix_array.d
lib_algo/test.d
lib_algo/pool.d
lib_algo/stack.d
lib_algo/list.d
lib_algo/array2d.d
lib_algo/project.mk
lib_algo/Makefile
(I know, there's some overlap with Phobos today)

The project.mk simply contains the list of source files of the 
library.
The test.d file actually contains a main function, which I use to 
run manual tests of the library's functionality. It's not 
included in the project.mk list, and it obviously must import the 
files that need to be tested.
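Such a project.mk is nothing more than a variable assignment; a 
sketch consistent with the listing above (shortened):

```make
# lib_algo/project.mk: the library's source list, included by the
# top-level Makefile of each consuming project (test.d left out)
LIB_ALGO_SRCS := \
  lib_algo/options.d \
  lib_algo/misc.d \
  lib_algo/stack.d \
  lib_algo/array2d.d
```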


With this flat directory structure, the only way test.d can 
import files is by saying "import stack;", not "import 
lib_algo.stack;".
(Adding ".." to the list of import directories might do the 
trick, but that would definitely feel wrong.)


Now, let's see a project using this "lib_algo", I call it 
"myGame".


$ cd myProjects
$ find 
myGame/lib_algo/options.d
myGame/lib_algo/
myGame/lib_algo/array2d.d
myGame/lib_algo/project.mk
myGame/lib_algo/Makefile
myGame/src/main.d
myGame/Makefile


"lib_algo" simply is an external (aka "git submodule"), pointing 
to a specific revision of the "lib_algo" repository.
The top-level "Makefile" of the project includes 
"lib_algo/project.mk", and all is well: I can compose the list of 
source files to compile without having to rely on 
"convenience/intermediate" libraries.


Now, if "src/main.d" wants to import "array2d.d", currently I 
must write "import array2d;" and add "-I myGame/lib_algo" to my 
compiler command line.

I said in a previous post that this would not scale.
That's because, with this scheme, I obviously can't have two 
modules with the same name, even if they belong to different 
libraries.
Let's say I have "lib_algo/utils.d" and "lib_geom/utils.d"; if 
"main.d" says "import utils;", it's ambiguous.


Clearly, "main.d" needs to say "import lib_algo.utils". However, 
if I do this, I must add a module declaration in 
"lib_algo/utils.d", and I must change "lib_algo/test.d" so it 
says "import lib_algo.utils".

This is where it begins to feel clumsy. Why should lib_algo need 
to be modified in order to resolve an ambiguity happening in 
"myGame"?

Moreover, we saw earlier that this modification of the import 
done by "lib_algo/test.d" has consequences on the directory 
structure of lib_algo. In order to be able to build the test 
executable for lib_algo, I now need to have the following 
structure:
$ find lib_algo
lib_algo/lib_algo/options.d
lib_algo/lib_algo/
lib_algo/lib_algo/array2d.d
lib_algo/test.d
(test.d can be put inside the inner "lib_algo")

Am I the only one who finds this counterintuitive - and probably 
wrong?

What am I missing here?

(And please, don't tell me that I need to switch to 
dub+git+dmd+D_source_files_only; I like my tools to be 
composable.)


Thanks for reading!



Re: A suggestion for modules names / sharing code between projects

2016-03-02 Thread Sebastien Alaiwan via Digitalmars-d

Hi guys,

thanks a lot for your answers!

On Wednesday, 2 March 2016 at 22:42:18 UTC, ag0aep6g wrote:

On 02.03.2016 21:40, Sebastien Alaiwan wrote:
- I only work with separate compilation, using gdc 
(Windows-dmd produces OMF, which is a dealbreaker ; but I use 
rdmd a lot for small programs).
dmd gives you COFF with -m64 or -m32mscoff (-m32 is the default 
and means OMF).

Thanks for the tip, I didn't know that.
I actually have deeper reasons for not using dmd (seamless 
cross-compilation, e.g. "make CROSS_COMPILE=i686-w64-mingw32" or 
"make CROSS_COMPILE=arm-linux-gnueabi" without any special case 
in the Makefile; and the way its unusual command line 
(-oftarget) conflicts with MSYS's path conversion, etc.).


I don't know the exact reasons why the module system works like 
it does. Maybe you're on to something, maybe not. Anyway, here 
are some problems/surprises I see with your alternative idea 
(which basically means using relative file paths, if I get you 
right):

Yes, I think we can say that.

I 100% agree with all the limitations you listed, which makes a 
forced absolute-import system appear a lot more comfortable!


There's also one big limitation I see that you didn't list:
* If you're importing lib_a and lib_b, both depending on lib_c, 
you need both of them to expect lib_c at the same path/namespace 
location. The current module system guarantees this, because all 
package names are global.


(And linking with precompiled binaries might become a nightmare)

However, using global package names means you're relying on the 
library providers to avoid conflicts in their package names, 
although they might not even know each other. Which basically 
implies always putting your name / company name in your package 
name, like "import alaiwan.lib_algo.array2d;" or "import 
apple.lib_algo.array2d;".
Or renaming my "lib_algo" to something meaningless / less common, 
like "import diamond.array2d" (e.g. Derelict, Pegged, imaged, 
etc.).


If, unfortunately, I happen to run into a conflict, i.e. my 
project uses two unrelated libraries sharing a name for their 
top-level namespace, there's no way out for me, right?

(Except relying on shared objects to isolate symbols)


I don't think your alternative is obviously superior.
Me neither :-) I'm just sharing my doubts about the current 
system!


The current way of having exactly one fixed name per module 
makes it easy to see what's going on at any point. You have to 
pay for that by typing a bit more, I guess.

Yes, I guess you're right.

I'm going to follow your advice and Mike's for my projects, this 
will allow me to see things more clearly. Anyway, thanks again to 
all of you for your answers!





Re: A suggestion for modules names / sharing code between projects

2016-03-03 Thread Sebastien Alaiwan via Digitalmars-d

On Thursday, 3 March 2016 at 08:39:51 UTC, Laeeth Isharc wrote:

On Thursday, 3 March 2016 at 06:53:33 UTC, Sebastien Alaiwan wrote:
If, unfortunately, I happen to run into a conflict, i.e my 
project uses two unrelated libraries sharing a name for their 
top-level namespace, there's no way out for me, right?

(Except relying on shared objects to isolate symbols)


See the modules doc page off dlang.org.

import io = std.stdio;
void main()
{
    io.writeln("hello!");        // ok, calls std.stdio.writeln
    std.stdio.writeln("hello!"); // error: std is undefined
    writeln("hello!");           // error: writeln is undefined
}
Thanks, but I was talking about module name collisions, e.g. if 
my project uses two unrelated libraries, both - sloppily - 
defining a module called "geom.utils".


Can this be worked-around using renamed imports?

How is "import myUtils = geom.utils" going to know which of the 
two "geom.utils" I'm talking about?
As far as I understand, _module_ renaming at import is a tool 
for defining shorthands rather than for disambiguation.


I agree that it might not be a problem in practice - just wanting 
to make sure I'm not missing something.




Re: Anyone tried to emscripten a D/SDL game?

2017-06-04 Thread Sebastien Alaiwan via Digitalmars-d
On Wednesday, 24 May 2017 at 17:08:06 UTC, Nick Sabalausky 
"Abscissa" wrote:
On Wednesday, 24 May 2017 at 17:06:55 UTC, Guillaume Piolat 
wrote:


http://code.alaiwan.org/wp/?p=103


Awesome, thanks!


Author here; the blog post is indeed interesting, but it's 
outdated.
It's a lot simpler now that the "LDC + emscripten-fastcomp" 
combination works (no need for the intermediate C lowering 
anymore).


The whole simplified toolchain and example project live here: 
https://github.com/Ace17/dscripten
If you have questions about how this works, I'd be glad to answer 
them!


Re: Anyone tried to emscripten a D/SDL game?

2017-06-04 Thread Sebastien Alaiwan via Digitalmars-d

On Wednesday, 24 May 2017 at 17:47:42 UTC, Suliman wrote:

Is it possible to [compile to WASM] with D?

It should be.

LLVM has a working WebAssembly backend; LDC might need some 
slight modifications to become aware of this new target. 
Everything that doesn't rely on the D runtime should work (except 
for bugs, e.g. 
https://github.com/kripken/emscripten-fastcomp/issues/187 ).


Then, I think the following blog post could easily be adapted to 
the D language:

https://medium.com/@mbebenita/lets-write-pong-in-webassembly-ac3a8e7c4591

However, please keep in mind that the target instruction set is 
only the tip of the iceberg; you have to provide a target 
environment (like SDL bindings or a simplified D runtime) so the 
generated code can do anything useful (like I/O).





Re: Anyone tried to emscripten a D/SDL game?

2017-06-05 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 5 June 2017 at 10:48:54 UTC, Johan Engelen wrote:

On Monday, 5 June 2017 at 05:22:44 UTC, Sebastien Alaiwan wrote:


The whole simplified toolchain and example project live here: 
https://github.com/Ace17/dscripten


Are you planning on upstreaming some of your work to LDC? 
Please do! :-)


Don't let the small size of the LDC patch from dscripten deceive 
you; it mangles LDC in the easiest possible way to adapt it to 
the quirks of emscripten-fastcomp (the LLVM fork).
(The official LLVM branch doesn't even declare the 
asmjs/Emscripten triples.)


At this point, I think emscripten-fastcomp might be on the way 
out, especially if the WebAssembly backend from the official LLVM 
branch becomes widely used (at that point, I'll probably ditch 
emscripten-fastcomp from dscripten and use WebAssembly instead).


Some stuff that could easily be upstreamed in a clean, 
non-emscripten-specific way:
- allowing the build of phobos and druntime to be disabled from 
the cmake build;
- allowing the build of the debug-information generator to be 
disabled;
- allowing the build of the tests to be disabled.




Re: Makefile experts, unite!

2017-06-11 Thread Sebastien Alaiwan via Digitalmars-d

On Sunday, 11 June 2017 at 23:47:30 UTC, Ali Çehreli wrote:


I had the pleasure of working with Eyal Lotem, main author of 
buildsome. The buildsome team are aware of all pitfalls of all 
build systems and offer build*some* as an awe*some* ;) and 
correct build system:


  http://buildsome.github.io/buildsome/


Very interesting!

The selling points, to me, are:
1) the automatic dependency detection through filesystem hooks
2) recipes also are dependencies
3) the genericity / low-level approach. I believe build systems 
should let me define my own abstractions, instead of defining for 
me what an "executable" or a "static library" should be.


- Make has 3)
- Ninja has 2), 3)
- tup and buildsome have 1), 2), 3)

However, buildsome also seems to have a (simplified) make-like 
syntax.

Why did they have to write it in Haskell, for god's sake!



Re: Makefile experts, unite!

2017-06-11 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 12 June 2017 at 06:30:16 UTC, ketmar wrote:

Jonathan M Davis wrote:


It's certainly a pain to edit the makefiles though


and don't forget those Great Copying Lists to copy modules. 
forgetting to include a module in one of the lists has happened 
before, not once or twice...


I don't get it, could you please show an example?



Re: Makefile experts, unite!

2017-06-12 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 12 June 2017 at 07:00:46 UTC, Jonathan M Davis wrote:
On Monday, June 12, 2017 06:34:31 Sebastien Alaiwan via 
Digitalmars-d wrote:

On Monday, 12 June 2017 at 06:30:16 UTC, ketmar wrote:
> Jonathan M Davis wrote:
>> It's certainly a pain to edit the makefiles though
>
> and don't forget those Great Copying Lists to copy modules. 
> forgetting to include module in one of the lists was 
> happened before, not once or twice...


I don't get it, could you please show an example?


posix.mak is a lot better than it used to be, but with 
win{32,64}.mak, you have to list the modules all over the 
place. So, adding or removing a module becomes a royal pain, 
and it's very easy to screw up. Ideally, we'd just list the 
modules once in one file that was then used across all of the 
platforms rather than having to edit several files every time 
we add or remove anything. And the fact that we're using make 
for all of this makes that difficult if not impossible 
(especially with the very limited make that we're using on 
Windows).


Are you implying that we are currently keeping compatibility with 
NMAKE (the 'make' from MS)?


GNU make's inclusion mechanism makes it possible, and easy, to 
share a list of modules between makefiles.
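In GNU make that is essentially a one-liner (file and variable names 
below are hypothetical, for illustration only):

```make
# sources.mk: the single authoritative module list
STD_SRCS := std/algorithm.d std/range.d std/conv.d

# posix.mak, win32.mak and win64.mak would then each just do:
include sources.mk
```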


Before switching to a fancy build system, we might benefit from 
learning to take full advantage of the one we currently have!






Re: Makefile experts, unite!

2017-06-12 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 12 June 2017 at 06:38:34 UTC, ketmar wrote:

Sebastien Alaiwan wrote:


The selling points, to me, are:
1) the automatic dependency detection through filesystem hooks
2) recipes also are dependencies
3) the genericity/low-level. I believe build systems should 
let me define my own abstractions, instead of trying to define 
for me what an "executable" or a "static library" should be.


i'm not all that excited by "1", though. tbh, i prefer simpler 
things, like regexp scanning. while regexp scanners may find 
more dependencies than there really are, they don't require 
any dirty tricks to work.


I understand your point ; I was explaining to my colleagues 
yesterday that "1" was a "good step in the wrong direction".


I think dependencies should come from above: they should be 
forced at the build-system level. No more 'include' or 'import' 
scanning (Bazel took one step in this direction).
The consequence is that you can no longer treat the build system 
of your project as a second-class citizen. It's the only place 
where you're forced to express something vaguely resembling a 
high-level architecture.


This beats letting module implementations happily create 
dependencies on any other module implementation (which is an 
architectural sin!) and then resorting to system-level hacks to 
try to re-create the DAG of this mess (and good luck with 
generated files).


However, "1" is still a "good" step. Compared to where we are 
now, it's in theory equivalent to perfectly doing regexp / 
gcc -MM scanning, in a language-agnostic way. It's a net win!





Re: Isn't it about time for D3?

2017-06-12 Thread Sebastien Alaiwan via Digitalmars-d

On Sunday, 11 June 2017 at 17:59:54 UTC, ketmar wrote:

Guillaume Piolat wrote:

On Saturday, 10 June 2017 at 23:30:18 UTC, Liam McGillivray 
wrote:
I realize that there are people who want to continue using D 
as it is, but those people may continue to use D2.


Well, no thanks.
The very same strategy halved the community for D1/D2 split 
and almost killed D.


as you can see, D is alive and kicking, and nothing disastrous 
or fatal happened.
Yes, but could it have been a lot more alive and kicking had we 
not shot ourselves in the foot with this D1/D2 split?




Re: Makefile experts, unite!

2017-06-12 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 12 June 2017 at 15:57:13 UTC, Meta wrote:
On Monday, 12 June 2017 at 06:34:31 UTC, Sebastien Alaiwan 
wrote:

On Monday, 12 June 2017 at 06:30:16 UTC, ketmar wrote:

Jonathan M Davis wrote:


It's certainly a pain to edit the makefiles though


and don't forget those Great Copying Lists to copy modules. 
forgetting to include module in one of the lists was happened 
before, not once or twice...


I don't get it, could you please show an example?


https://github.com/dlang/phobos/pull/3843


Thanks!

Please, tell me that the Digital Mars implementation of Make 
*does* support generic rules...


Re: Isn't it about time for D3?

2017-06-13 Thread Sebastien Alaiwan via Digitalmars-d

On Tuesday, 13 June 2017 at 06:56:14 UTC, ketmar wrote:

Sebastien Alaiwan wrote:


On Sunday, 11 June 2017 at 17:59:54 UTC, ketmar wrote:

Guillaume Piolat wrote:

On Saturday, 10 June 2017 at 23:30:18 UTC, Liam McGillivray 
wrote:
I realize that there are people who want to continue using 
D as it is, but those people may continue to use D2.


Well, no thanks.
The very same strategy halved the community for D1/D2 split 
and almost killed D.


as you can see, D is alive and kicking, and nothing 
disastrous or fatal happened.
Yes, but could it have been a lot more alive and kicking 
had we not shot ourselves in the foot with this D1/D2 split?


this question has no answer. can we do better if we do 
everything right on the first try? of course!


My point precisely was that "not splitting D1/D2" might 
correspond to "doing things right".




Re: Isn't it about time for D3?

2017-06-13 Thread Sebastien Alaiwan via Digitalmars-d

On Tuesday, 13 June 2017 at 17:57:14 UTC, Patrick Schluter wrote:
Before even contemplating a big, disruptive language split like 
the one proposed by the OP, wouldn't it first be more 
appropriate to write a nice article, DIP, blog post, whatever, 
listing the defects of the current language that cannot be 
solved by progressive evolution?
I don't have the impression that the *language* itself suffers 
from flaws so big that it would warrant forking it in a way 
that will lead to a lot of frustration and bewilderment.
D is not perfect, no question, but it is not in a state that 
would justify such a harsh approach.


+1
Does anyone currently maintain somewhere such a list of 
not-gonna-be-fixed-in-D2 defects?

This might provide more solid grounds to this discussion.



Re: Isn't it about time for D3?

2017-06-15 Thread Sebastien Alaiwan via Digitalmars-d

On Thursday, 15 June 2017 at 15:04:26 UTC, Suliman wrote:
Should D really move to GC-free? I think there is already 
enough GC-free language on the market. D even now is very 
complected language, and adding ways to manually managing 
memory will make it's more complicated.


We need automatic deterministic destruction (and we partially 
have it, using scope(exit) and struct RAII).


Memory management is only the tip of the iceberg of resource 
management; it's the easy problem, the one where an automated 
process can tell which resources aren't needed any more.


However, an instance of a class can hold a lot more than flat 
memory blocks: threads, file handles, on-disk files, system-wide 
events, sockets, mutexes, etc.


Freeing the memory of my "TcpServer" instance is mostly useless 
if I can't instantiate a new one because the TCP port is kept 
open by the freed instance (whose destructor won't be run by the 
GC).
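A sketch of the kind of deterministic cleanup meant here (TcpServer 
is a made-up type; the commented-out close() stands in for whatever 
OS call actually releases the port):

```d
// struct RAII: the destructor runs deterministically when the
// instance goes out of scope, so the port is actually released
struct TcpServer
{
    int sockfd = -1;

    this(ushort port)
    {
        // socket(), bind(), listen() ...
    }

    ~this()
    {
        // close(sockfd): runs at end of scope; a class destructor,
        // by contrast, may be run late by the GC, or never
    }
}

void serve()
{
    auto server = TcpServer(8080);
    scope(exit)
    {
        // further cleanup, also deterministic
    }
    // ... the port is guaranteed to be free again when serve() returns
}
```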




Re: Isn't it about time for D3?

2017-06-15 Thread Sebastien Alaiwan via Digitalmars-d

On Thursday, 15 June 2017 at 18:02:54 UTC, Moritz Maxeiner wrote:
We need automatic deterministic destruction (and we partially 
have it, using scope(exit) and structs RAII).

Not sure what exactly you are expecting, tbh.


I'm not advocating for a language change here.
As I said, we already have some deterministic destruction (the 
rest being the GC).


Why are you even talking about the GC when your problem seems 
to require deterministic lifetime management. That's not what a 
GC does, and thus it's the wrong tool for the job.


I don't have a programming problem; I'm fine with unique and 
scoped!


I just wanted to make a point about how even GC-languages still 
benefit from some extra automatic and deterministic means of 
resource management.







Re: D needs to get its shit together!

2017-06-17 Thread Sebastien Alaiwan via Digitalmars-d

On Friday, 16 June 2017 at 03:53:18 UTC, Mike B Johnson wrote:
When a new user goes to start using D for the first time, D is 
a PITA to get working! Don't believe me?!?!


I'm running Debian GNU/Linux (testing). Here's the installation 
process for the 3 major D compilers.


$ apt install gdc
$ gdc my_first_program.d

GDC is too old for you? Fine, let's use ldc:

$ apt install ldc
$ ldc2 my_first_program.d

Or if you want the bleeding edge version of D:

(download dmd .deb package from dlang.org)
$ dpkg -i dmd_***.deb
$ rdmd my_first_program.d

Debian maintainers, one word: Thank you!


Re: dmd -betterC

2017-06-20 Thread Sebastien Alaiwan via Digitalmars-d

On Wednesday, 21 June 2017 at 03:57:46 UTC, ketmar wrote:
"bloatsome"? i don't think so. those generated messages are 
small (usually 20-30 bytes), and nothing compared to 
druntime/phobos size.


Yeah, but what if you're already working without the runtime and 
Phobos?
(Some embedded systems only have 32 KB of program memory, and 
yes, these are up-to-date ones.)


Would it help to formalize the interface between compiler 
generated code and druntime? (IIRC this is implementation 
specific at the moment).


The idea is to make it easier to reimplement and provide only 
the runtime parts that are actually needed, and to rely on link 
errors to prevent forbidden D constructs.




Re: dmd -betterC

2017-06-20 Thread Sebastien Alaiwan via Digitalmars-d

On Wednesday, 21 June 2017 at 05:35:42 UTC, ketmar wrote:
asserts on embedded systems? O_O code built without -release 
for embedded systems? O_O


embedded system != production system.

You still need to debug the code, and at some point you have to 
load it on a real microcontroller; that's where some of your 
assumptions about how the chip works might turn out to be false.


This is where it gets tricky. Because now you need to fit the 
memory requirements, and at the same time, you want some level of 
diagnostic.


And by the way, you might want to keep bounds-checking code, even 
in production. In some situations, it's preferable to crash or 
hang than to output garbage.


Re: Checked vs unchecked exceptions

2017-06-26 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 26 June 2017 at 16:35:51 UTC, jmh530 wrote:
Just curious: how are checked exceptions different from setting 
nothrow as the default? Like you would have to write:


void foo() @maythrow
{
   functionWithException();
}

So for instance, you could still use your "//shut up compiler" 
code with the nothrow default.


Checked exceptions allow a lot more precision about what types of 
exceptions a function can throw.


Re: Checked vs unchecked exceptions

2017-06-26 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 26 June 2017 at 17:44:15 UTC, Guillaume Boucher wrote:
On Monday, 26 June 2017 at 16:52:22 UTC, Sebastien Alaiwan 
wrote:
Checked exceptions allow a lot more precision about what types 
of exceptions a function can throw.


I totally agree that this is a problem with D right now.
This wasn't my point! I don't think there's a problem with D not 
having CE.
I was just pointing out the difference between "nothrow" 
specifications and CE.


Re: Checked vs unchecked exceptions

2017-06-27 Thread Sebastien Alaiwan via Digitalmars-d

On Tuesday, 27 June 2017 at 10:18:04 UTC, mckoder wrote:
"I think that the belief that everything needs strong static 
(compile-time) checking is an illusion; it seems like it will 
buy you more than it actually does. But this is a hard thing to 
see if you are coming from a statically-typed language."


Source: 
https://groups.google.com/forum/#!original/comp.lang.java.advocacy/r8VPk4deYDI/qqhL8g1uvf8J


If you like dynamic languages such as Python you probably agree 
with Bruce Eckel. If you are a fan of static checking then I 
don't see how you can like that.


A quote from Uncle Bob about too much static typing and checked 
exceptions:


http://blog.cleancoder.com/uncle-bob/2017/01/11/TheDarkPath.html

"My problem is that [Kotlin and Swift] have doubled down on 
strong static typing. Both seem to be intent on closing every 
single type hole in their parent languages."


"I would not call Java a strongly opinionated language when it 
comes to static typing. You can create structures in Java that 
follow the type rules nicely; but you can also violate many of 
the type rules whenever you want or need to. The language 
complains a bit when you do; and throws up a few roadblocks; but 
not so many as to be obstructionist.


Swift and Kotlin, on the other hand, are completely inflexible 
when it comes to their type rules. For example, in Swift, if you 
declare a function to throw an exception, then by God every call 
to that function, all the way up the stack, must be adorned with 
a do-try block, or a try!, or a try?. There is no way, in this 
language, to silently throw an exception all the way to the top 
level; without paving a super-hiway for it up through the entire 
calling tree."


Re: proposal: private module-level import for faster compilation

2016-07-24 Thread Sebastien Alaiwan via Digitalmars-d

On Wednesday, 20 July 2016 at 19:59:42 UTC, Jack Stouffer wrote:
I concur. If the root problem is slow compilation, then there 
are much simpler, non-breaking changes that can be made to fix 
that.


I don't think compilation time is the problem, actually. It has 
more to do with dependency management and encapsulation.


Speeding up compilation should never be considered an acceptable 
solution here, as it doesn't scale: it just pushes the problem 
away until your project grows large enough again.


Here's my understanding of the problem:

// client.d
import server;
void f()
{
  Data x;
  // Data.sizeof depends on something in server_private.
  x.something = 3; // offset to 'something' depends on privateStuff.sizeof

}

// server.d
private import server_private;
struct Data
{
  Opaque someOtherThing;
  int something;
}

// server_private.d
struct Opaque
{
  byte[43] privateStuff;
}

If you're doing separate compilation, your dependency graph has 
to express that "client.o" depends not only on "client.d" and 
"server.d", but also on "server_private.d".


GDC's "-fdeps" option properly lists all transitively imported 
files (disclaimer: this was my pull request). Whether the imports 
are private or public is irrelevant here: the dependency is still 
there.


In other words, changes to "server_private.d" must always trigger 
recompilation of "client.d".


I believe the solution proposed by the OP doesn't work, because 
of Voldemort types: it's always possible to return a struct whose 
size depends on something deeply private.


// client.d
import server;
void f()
{
  auto x = getData();
  // Data.sizeof depends on something in server_private.
  x.something = 3; // offset to 'something' depends on privateStuff.sizeof

}

// server.d
auto getData()
{
  private import server_private;

  struct Data
  {
Opaque someOtherThing;
int something;
  }

  Data x;
  return x;
}

// server_private.d
struct Opaque
{
  byte[43] privateStuff;
}

My conclusion is that maybe there's no problem in the language, 
nor in the dependency generation, nor in the compiler 
implementation.

Maybe it's just a design issue.



Re: proposal: private module-level import for faster compilation

2016-07-24 Thread Sebastien Alaiwan via Digitalmars-d

On Sunday, 24 July 2016 at 15:33:04 UTC, Chris Wright wrote:
Look at std.algorithm. Tons of methods, and I imported it just 
for `canFind` and `sort`.


Look at std.datetime. It imports eight or ten different 
modules, and it needs every one of those for something or 
other. Should we split it into a different module for each 
type, one for formatting, one for parsing, one for fetching the 
current time, etc? Because that's what we would have to do to 
work around the problem in user code.


That would be terribly inconvenient and would just waste 
everyone's time.


I agree with you, but I think you got me wrong.

Modules like std.algorithm (and nearly every other one, in any 
standard library) have very low cohesion. As you said, most of 
the time the whole module gets imported although only 1% of it is 
going to be used.


(selective imports such as "import std.algorithm : canFind;" help 
you reduce namespace pollution, but not dependencies, because a 
change in the imported module could, for example, change symbol 
names.)


I guess low cohesion is OK for standard libraries, because 
splitting them into lots of files would result in long import 
lists on the user side, e.g.:


import std.algorithm.canFind;
import std.algorithm.sort;
import std.algorithm.splitter;

(though this looks a lot like what most of us already do with 
selective imports).


But my point wasn't about the extra compilation time resulting 
from the unwanted import of 99% of std.algorithm.


My point is about the recompilation frequency of *your* modules, 
due to changes in one module.


Although std.algorithm has low cohesion, it never changes 
(upgrading one's compiler doesn't count, as everything needs to 
be recompiled anyway).


However, if your project has a "utils.d" composed of mostly 
unrelated functions, that is imported by almost every module in 
your project, and that is frequently changed, then I believe you 
have a design issue.


Any compiler is going to have a very hard time avoiding 
recompilation of modules that only imported something in the 99% 
of utils.d that wasn't modified (and, by the way, such tracking 
isn't compatible with the separate compilation model).


Do you think I'm missing something here?






Re: New __FILE_DIR__ trait?

2016-07-28 Thread Sebastien Alaiwan via Digitalmars-d

On Thursday, 28 July 2016 at 06:21:06 UTC, Jonathan Marler wrote:

auto __DIR__(string fileFullPath = __FILE_FULL_PATH__) pure
{
return fileFullPath.dirName;
}


That doesn't work; I don't think you can wrap such things 
(__FILE__ and friends):


import std.stdio;

int main()
{
  printNormal();
  printWrapped();
  return 0;
}

void printNormal(int line = __LINE__)
{
  writefln("%s", line);
}

void printWrapped(int line = __MY_LINE__)
{
  writefln("%s", line);
}

// wrapped version
int __MY_LINE__(int line = __LINE__)
{
  return line;
}

$ rdmd demo.d
5
15  (should be 6!)

Thus, the suggested implementation of __DIR__ would behave very 
differently from a builtin one: __LINE__ in __MY_LINE__'s default 
argument is expanded at __MY_LINE__'s call site, i.e. on 
printWrapped's declaration line, not at printWrapped's call site. 
I'm not saying we need a builtin one; however, it might not be a 
good idea to name it this way.





Re: New __FILE_DIR__ trait?

2016-07-28 Thread Sebastien Alaiwan via Digitalmars-d
By the way, I really think __FILE_FULL_PATH__ should be an rdmd 
feature, not a dmd one.


rdmd could set an environment variable "RDMD_FULL_PATH" or 
something like it (replacing argv[0]), instead of potentially 
making the compilation depend on the location of the working copy 
on disk...


Re: Templates are slow.

2016-09-07 Thread Sebastien Alaiwan via Digitalmars-d

On Thursday, 8 September 2016 at 05:02:38 UTC, Stefan Koch wrote:
(Don't do this preemptively, ONLY when you know that this 
template is a problem!)


How would you measure such things?
Is there such a thing as a "compilation-time profiler"?

(Running oprofile on a dmd built with debug info comes to mind 
first; however, that would only give me statistics on dmd's 
source code, not mine.)


Re: The delang is using merge instead of rebase/squash

2017-03-21 Thread Sebastien Alaiwan via Digitalmars-d

It's common practice for "merge" commits to have a message of the 
form: "merge work from some/branch, fix PR #somenumber".

This tells me basically nothing about what the commit does.

We already know it's a merge commit, we don't care much which 
branch it came from, and we don't want to dig into the bug 
tracker to translate the issue number into English.


We care more about how this merge modifies the code behaviour.

What if "merge" commits had better messages, ones not containing 
the word "merge" at all?
That way, the depth-0 (first-parent) history, which is always 
linear, would be human-readable and bisectable.
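To make the idea concrete, here is a throwaway repository (branch names and commit messages invented for illustration) whose merge commit describes the behaviour change instead of the branch; `git log --first-parent` then shows the linear depth-0 history:

```shell
# Create a scratch repo just for the demo.
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email tester@example.com
git config user.name tester
git checkout -q -b main
git commit -q --allow-empty -m "base: initial commit"

# Do some work on a topic branch.
git checkout -q -b topic
git commit -q --allow-empty -m "topic: rework frobnicator"

# Merge it back with a message that says what changed,
# not where it came from:
git checkout -q main
git merge -q --no-ff -m "fix: frobnicator no longer drops events" topic

# The depth-0 (first-parent) history: linear and readable.
git log --first-parent --oneline
```

Recent git versions also accept `git bisect --first-parent`, which bisects over exactly this linear history (availability depends on your git version).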