Re: Alternative to Interfaces

2019-01-25 Thread Sebastien Alaiwan via Digitalmars-d-learn

On Saturday, 19 January 2019 at 09:24:21 UTC, Kagamin wrote:
On Friday, 18 January 2019 at 18:48:46 UTC, Jonathan M Davis 
wrote:

Yes, but some D features will use the GC


They would like to allocate, but they neither know nor care where 
the memory comes from. If the developer uses custom memory 
management, he will know how it's allocated and will be able to 
manage it.


Is it possible to get the raw context pointer from a given 
delegate?

If not, how does one deallocate the context without garbage collection?
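For what it's worth, D delegates do expose their two halves as built-in properties: .ptr (the raw context pointer) and .funcptr (the raw function pointer). Whether the context can be freed manually depends entirely on how it was allocated; a minimal sketch:

```d
void main()
{
    int x = 42;
    int delegate() dg = () => x; // closure: the context frame is GC-allocated here

    void* ctx = dg.ptr;             // raw context pointer
    int function() fn = dg.funcptr; // raw function pointer
    assert(ctx !is null && fn !is null);
}
```

Freeing dg.ptr is only safe if the context lives in manually managed memory (e.g. a malloc'd frame or an emplaced struct); for a compiler-generated closure like the one above, the frame belongs to the GC.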



Re: D vs perl6

2018-11-22 Thread Sebastien Alaiwan via Digitalmars-d-learn
On Thursday, 22 November 2018 at 09:03:19 UTC, Gary Willoughby 
wrote:

On Monday, 19 November 2018 at 06:46:55 UTC, dangbinghoo wrote:
So, can you experts give a more comprehensive compare with 
perl6 and D?


Sure!

1). You can actually read and understand D code.


Also, D can be parsed.

See: Perl Cannot Be Parsed: A Formal Proof ( 
https://www.perlmonks.org/?node_id=663393 )




Re: Wed Oct 17 - Avoiding Code Smells by Walter Bright

2018-11-09 Thread Sebastien Alaiwan via Digitalmars-d-announce
On Thursday, 8 November 2018 at 23:50:18 UTC, TheFireFighter 
wrote:
i.e. better encapsulation really is a good thing (although for 
many, it's a lesson that needs to be learned).


Public/private/protected are hacks anyway - and many 
object-oriented languages don't have them. They only provide 
extremely limited encapsulation; the client still sees the 
non-public part, and can depend on it in unexpected ways:


// my_module.d
struct MyStruct
{
private:
  char[1024] data;
}

class MyClass
{
protected:
  abstract void f();
}

// main.d
import my_module;
import std.traits;
import std.stdio;

int main()
{
  // depends on the list of private members
  writefln("MyStruct.sizeof: %s", MyStruct.sizeof);

  // depends on whether 'f' is declared abstract or not.
  writefln("isAbstractClass!MyClass: %s", isAbstractClass!MyClass);

  return 0;
}

If you want perfect encapsulation, use interfaces (as already 
said in this thread), or PIMPL.
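As an illustration of the PIMPL option (a sketch with hypothetical names, not a drop-in recipe): the public type stores only an opaque pointer, so neither its size nor its traits reveal anything about the private members:

```d
// my_module.d -- public side: only an opaque handle is visible
struct MyStruct
{
    private void* impl; // points to the hidden implementation

    // MyStruct.sizeof is now always (void*).sizeof,
    // no matter what the implementation stores.
}

// my_module_impl.d -- private side, free to change at will,
// never imported by clients
struct MyStructImpl
{
    char[1024] data;
}
```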




Re: Wed Oct 17 - Avoiding Code Smells by Walter Bright

2018-10-31 Thread Sebastien Alaiwan via Digitalmars-d-announce
On Wednesday, 31 October 2018 at 05:00:12 UTC, myCodeDontSmell 
wrote:
in D, once you write your abstraction, say a class, with its 
public interface, all the code below it can do whatever it 
likes to that class, making it a leaky abstraction.


I think there might be some confusion between "leaky abstraction" 
and "insufficient encapsulation".


Here's the Wikipedia definition of leaky abstractions:

"In software development, a leaky abstraction is an abstraction 
that *requires* knowledge of the underlying complexity to be able 
to know how to use it. This is an issue, as the whole point of an 
abstraction is to be able to abstract away these details from the 
user."


Which would imply that, as long as the user of your class isn't 
*required* to know about the underlying implementation specifics, 
it isn't a "leaky abstraction".


My understanding is that:

"Leaky abstractions" are about interface design, and how one 
component is meant to be used. They are unrelated to the 
programming language (e.g. translating the code to another 
language doesn't make a leaky abstraction disappear).


For example, a shared directory can be a leaky abstraction if 
the network is unstable (because then the client code, which 
only sees file descriptors, suddenly has to deal with 
disappearing files).


"Encapsulation" is about implementation hiding and access control 
("public/private"), and requires programming language support 
(e.g. most dynamic languages don't have it).





Re: testing for deprecation

2017-08-28 Thread Sebastien Alaiwan via Digitalmars-d-learn

On Thursday, 1 September 2016 at 11:11:15 UTC, Cauterite wrote:
How does one test whether a symbol is deprecated? I would have 
expected something like: __traits(isDeprecated, foo).


Such a trait makes it possible to write code that will break 
just because something has been marked as deprecated.


Doesn't that defeat the purpose of deprecation?



Re: Tools to help me find memory leaks?

2017-08-25 Thread Sebastien Alaiwan via Digitalmars-d-learn

I always use "valgrind --tool=massif" + "massif-visualizer".
It gives me a nice timeline that makes it quick to find who the 
big memory consumers (allocation sites) are.
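A typical session (the program name is a placeholder) might look like this; massif writes a snapshot file that either massif-visualizer or the bundled ms_print can read:

```shell
# record an allocation timeline of the program
valgrind --tool=massif ./my_program

# massif writes a file named massif.out.<pid>;
# open it graphically with massif-visualizer, or dump it as text:
ms_print massif.out.<pid>
```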


Re: D doesn't read the first character of a file (reads everything but the first chararacter) with either read() or readText()

2017-07-18 Thread Sebastien Alaiwan via Digitalmars-d-learn

On Tuesday, 18 July 2017 at 02:21:59 UTC, Enjoys Math wrote:


DMD32 D Compiler v2.074.1

import std.file;

void main() {
   string bigInput = readText("input.txt");
}

The file is 7 MB of ascii text, don't know if that matters...

Should I upgrade versions?


Could you please share the first 32 bytes (in hex) of your file? 
Like:

$ hexdump -C input.txt



Re: Is it possible to generate a pool of random D or D inline assembler programs, run them safely?

2017-07-18 Thread Sebastien Alaiwan via Digitalmars-d-learn

On Tuesday, 18 July 2017 at 17:35:17 UTC, Enjoys Math wrote:
Without them crashing the app running them?  Say by wrapping 
with try / catch?


and, most probably, a timeout, as you're certainly going to run 
into infinite loops.



Reason is so I don't have to make my own VM.


Why not reuse an existing one? Some of them are very simple:
https://github.com/munificent/wren

It will be a lot easier than trying to generate random 
*compilable* D programs, and it will avoid requiring a 
compilation step in your mutation loop (I know the D compiler is 
fast, but still :-) ).





Re: Alias template parameter to a private function

2017-06-29 Thread Sebastien Alaiwan via Digitalmars-d-learn

On Thursday, 29 June 2017 at 20:21:13 UTC, Ali Çehreli wrote:

A workaround is to use a lambda:

  filter!(a => isValid(a))(array)


Thanks! Nice trick, this is definitely going into my company's 
codebase :-)


Such limitations are pretty annoying. There were a number of 
similar issues in recent dmd releases. Please file a bug if 
it's not already there:


Thanks, will do!


Re: Alias template parameter to a private function

2017-06-29 Thread Sebastien Alaiwan via Digitalmars-d-learn

Bump, please!


Re: Checked vs unchecked exceptions

2017-06-27 Thread Sebastien Alaiwan via Digitalmars-d

On Tuesday, 27 June 2017 at 10:18:04 UTC, mckoder wrote:
"I think that the belief that everything needs strong static 
(compile-time) checking is an illusion; it seems like it will 
buy you more than it actually does. But this is a hard thing to 
see if you are coming from a statically-typed language."


Source: 
https://groups.google.com/forum/#!original/comp.lang.java.advocacy/r8VPk4deYDI/qqhL8g1uvf8J


If you like dynamic languages such as Python you probably agree 
with Bruce Eckel. If you are a fan of static checking then I 
don't see how you can like that.


A quote from Uncle Bob about too much static typing and checked 
exceptions:


http://blog.cleancoder.com/uncle-bob/2017/01/11/TheDarkPath.html

"My problem is that [Kotlin and Swift] have doubled down on 
strong static typing. Both seem to be intent on closing every 
single type hole in their parent languages."


"I would not call Java a strongly opinionated language when it 
comes to static typing. You can create structures in Java that 
follow the type rules nicely; but you can also violate many of 
the type rules whenever you want or need to. The language 
complains a bit when you do; and throws up a few roadblocks; but 
not so many as to be obstructionist.


Swift and Kotlin, on the other hand, are completely inflexible 
when it comes to their type rules. For example, in Swift, if you 
declare a function to throw an exception, then by God every call 
to that function, all the way up the stack, must be adorned with 
a do-try block, or a try!, or a try?. There is no way, in this 
language, to silently throw an exception all the way to the top 
level; without paving a super-hiway for it up through the entire 
calling tree."


Re: Checked vs unchecked exceptions

2017-06-26 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 26 June 2017 at 17:44:15 UTC, Guillaume Boucher wrote:
On Monday, 26 June 2017 at 16:52:22 UTC, Sebastien Alaiwan 
wrote:
Checked exceptions allow a lot more precision about what types 
of exceptions a function can throw.


I totally agree that this is a problem with D right now.
This wasn't my point! I don't think there's a problem with D not 
having CE.
I was just pointing out the difference between "nothrow" 
specifications and CE.


Re: Checked vs unchecked exceptions

2017-06-26 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 26 June 2017 at 16:35:51 UTC, jmh530 wrote:
Just curious: how are checked exceptions different from setting 
nothrow as the default? Like you would have to write:


void foo() @maythrow
{
   functionWithException();
}

So for instance, you could still use your "//shut up compiler" 
code with the nothrow default.


Checked exceptions allow a lot more precision about what types of 
exceptions a function can throw.


Accessing function frame from struct

2017-06-25 Thread Sebastien Alaiwan via Digitalmars-d-learn

Hi guys,
here's my full code below.
My problem is the last "auto Y = X" assignment below, which the 
compiler won't accept:


yo.globalFunction.DirectStruct.IndirectStruct.indirectMemberFunc 
cannot access frame of function yo.globalFunction


I was expecting X to be accessible from there.
Surprisingly, if I replace all "struct" with "class", the last 
assignment is accepted by the compiler.
Could someone please tell me why there is this scoping asymmetry 
between structs and classes?


void globalFunction()
{
  auto X = 0;

  struct DirectStruct
  {
    void directMemberFunc()
    {
      auto Y = X; // OK, X is accessible

      struct HybridStruct
      {
        void hybridFunc()
        {
          auto Y = X; // OK, X is accessible
        }
      }
    }

    struct IndirectStruct
    {
      void indirectMemberFunc()
      {
        auto Y = X; // Error: can't access frame of globalFunc
      }
    }
  }
}




Alias template parameter to a private function

2017-06-24 Thread Sebastien Alaiwan via Digitalmars-d-learn

Hi,

I'm trying to call std.algorithm.iteration.filter with a private 
function as a predicate.

Here's a reduced example code:

// yo.d
import std.algorithm;

void moduleEntryPoint()
{
  privateFunction1();
  privateFunction2();
}

private:

void privateFunction1()
{
  auto array = [0, 1, 2, 3, 4, 5];
  auto result = filter!isValid(array); // error: 'isValid' is private

}

void privateFunction2()
{
  auto array = [0, 1, 2, 3, 4, 5];
  auto result = filter!isValid(array); // error: 'isValid' is private

}

bool isValid(int i)
{
  return i % 2 == 0;
}

Here's the compiler output:

/usr/include/dmd/phobos/std/algorithm/iteration.d(1132): Error: 
function yo.isValid is not accessible from module iteration
yo.d(14): Error: template instance 
std.algorithm.iteration.filter!(isValid).filter!(int[]) error 
instantiating


It seems as if the compiler, when instantiating the calls to 
'filter', resolves 'isValid' from the std.algorithm.iteration 
scope (however, this isn't actually the case, see below).
I was expecting this identifier to be resolved from yo.d, where 
we have access to the private functions.


Surprisingly, the following works:

void privateFunction2()
{
  static bool isValid(int i)
  {
return i % 2 == 0;
  }

  auto array = [0, 1, 2, 3, 4, 5];
  auto result = filter!isValid(array); // OK: compiles now

}

This makes the instantiation of 'filter' "see" 'isValid'; 
however, now the other privateFunctions can't use it.


Am I missing something here?
Thanks!



Re: D Language accepted for inclusion in GCC

2017-06-22 Thread Sebastien Alaiwan via Digitalmars-d-announce

On Thursday, 22 June 2017 at 16:13:51 UTC, Gary Willoughby wrote:

D Language accepted for inclusion in GCC:

https://gcc.gnu.org/ml/gcc/2017-06/msg00111.html

Well done Iain Buclaw!

Reddit thread: 
https://www.reddit.com/r/programming/comments/6im1yo/david_edelsohn_d_language_accepted_for_inclusion/


http://forum.dlang.org/thread/rakdufnuyofyhsweg...@forum.dlang.org


Re: GDC generate wrong .exe ("not a valid win32 application")

2017-06-22 Thread Sebastien Alaiwan via Digitalmars-d-learn

On Thursday, 22 June 2017 at 05:57:59 UTC, bauss wrote:
On Wednesday, 21 June 2017 at 15:55:27 UTC, David Nadlinger 
wrote:
On Monday, 19 June 2017 at 14:08:56 UTC, Patric Dexheimer 
wrote:

Fresh install of GDC. (tried with 32x ad 32_64x)


Where did you get the GDC executable from? The GDC project 
doesn't currently offer any official builds that target 
Windows; the 6.3.0 builds from 
https://gdcproject.org/downloads in fact generate Linux 
binaries.


 — David


I see Windows distributions below the Linux ones.


They run on Windows, and produce binaries that run on GNU/Linux.


Re: dmd -betterC

2017-06-21 Thread Sebastien Alaiwan via Digitalmars-d

On Wednesday, 21 June 2017 at 05:35:42 UTC, ketmar wrote:
asserts on embedded systems? O_O code built without -release 
for embedded systems? O_O


embedded system != production system.

You still need to debug the code, and at some point you have to 
load it onto a real microcontroller; this is where some of your 
assumptions about how the chip works might turn out to be false.


This is where it gets tricky. Because now you need to fit the 
memory requirements, and at the same time, you want some level of 
diagnostic.


And by the way, you might want to keep bounds-checking code, even 
in production. In some situations, it's preferable to crash/hang 
than to output garbage.


Re: dmd -betterC

2017-06-20 Thread Sebastien Alaiwan via Digitalmars-d

On Wednesday, 21 June 2017 at 03:57:46 UTC, ketmar wrote:
"bloatsome"? i don't think so. those generated messages is 
small (usually 20-30 bytes), and nothing comparing to 
druntime/phobos size.


Yeah, but what if you're already working without runtime and 
phobos?
(some embedded systems only have 32Kb program memory, and yes 
these are up-to-date ones)


Would it help to formalize the interface between compiler 
generated code and druntime? (IIRC this is implementation 
specific at the moment).


The idea is to make it easier to reimplement and provide only the 
runtime parts that are actually needed, and to rely on link 
errors to prevent forbidden D constructs.




Re: D needs to get its shit together!

2017-06-17 Thread Sebastien Alaiwan via Digitalmars-d

On Friday, 16 June 2017 at 03:53:18 UTC, Mike B Johnson wrote:
When a new user goes to start using D for the first time, D is 
a PITA to get working! Don't believe me?!?!


I'm running Debian GNU/Linux (testing). Here's the installation 
process for the 3 major D compilers.


$ apt install gdc
$ gdc my_first_program.d

GDC is too old for you? Fine, let's use ldc:

$ apt install ldc
$ ldc2 my_first_program.d

Or if you want the bleeding edge version of D:

(download dmd .deb package from dlang.org)
$ dpkg -i dmd_***.deb
$ rdmd my_first_program.d

Debian maintainers, one word: Thank you!


Re: Isn't it about time for D3?

2017-06-15 Thread Sebastien Alaiwan via Digitalmars-d

On Thursday, 15 June 2017 at 18:02:54 UTC, Moritz Maxeiner wrote:
We need automatic deterministic destruction (and we partially 
have it, using scope(exit) and structs RAII).

Not sure what exactly you are expecting, tbh.


I'm not advocating for a language change here.
As I said, we already have some deterministic destruction (the 
rest being the GC).


Why are you even talking about the GC when your problem seems 
to require deterministic lifetime management. That's not what a 
GC does, and thus it's the wrong tool for the job.


I don't have a programming problem; I'm fine with unique and 
scoped!


I just wanted to make a point about how even GC-languages still 
benefit from some extra automatic and deterministic means of 
resource management.







Re: Isn't it about time for D3?

2017-06-15 Thread Sebastien Alaiwan via Digitalmars-d

On Thursday, 15 June 2017 at 15:04:26 UTC, Suliman wrote:
Should D really move to GC-free? I think there is already 
enough GC-free language on the market. D even now is very 
complected language, and adding ways to manually managing 
memory will make it's more complicated.


We need automatic deterministic destruction (and we partially 
have it, using scope(exit) and structs RAII).


Memory management is only the tip of the iceberg of resource 
management; it's the easy problem, the one where an automated 
process can tell which resources aren't needed any more.


However, an instance of a class can hold a lot more than flat 
memory blocks: threads, file handles, on-disk files, system-wide 
events, sockets, mutexes, etc.


Freeing the memory of my "TcpServer" instance is mostly useless 
if I can't instantiate a new one, because the TCP port is kept 
open by the freed instance (whose destructor won't be run by the 
GC).
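To make the point concrete, here's a sketch of deterministic cleanup via struct RAII ('TcpServer' as used above is a hypothetical type; the std.socket usage is illustrative):

```d
import std.socket;

struct TcpServer
{
    Socket listener;

    @disable this(this); // no copies, so the destructor runs exactly once

    this(ushort port)
    {
        listener = new TcpSocket;
        listener.bind(new InternetAddress(port)); // binds to all interfaces
        listener.listen(10);
    }

    ~this()
    {
        // runs deterministically when the struct leaves scope,
        // releasing the port immediately (unlike a GC-run class destructor)
        if (listener !is null)
            listener.close();
    }
}

void serveOnce()
{
    auto server = TcpServer(8080);
    // ... accept and handle connections ...
} // destructor runs here; port 8080 is free again right away
```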




Re: Isn't it about time for D3?

2017-06-13 Thread Sebastien Alaiwan via Digitalmars-d

On Tuesday, 13 June 2017 at 17:57:14 UTC, Patrick Schluter wrote:
Before even contemplating a big disruptive language split like 
the one proposed by the OP, wouldn't it first be more appropriate 
to write a nice article, DIP, blog post, whatever, listing the 
defects of the current language that cannot be solved by 
progressive evolution?
I don't have the impression that the *language* itself suffers 
from flaws so big that they would warrant forking it in a way 
that will lead to a lot of frustration and bewilderment.
D is not perfect, no question, but it is not in a state that 
would justify such a harsh approach.


+1
Does anyone currently maintain such a list of 
not-gonna-be-fixed-in-D2 defects somewhere?

This might provide more solid grounds for this discussion.



Re: Isn't it about time for D3?

2017-06-13 Thread Sebastien Alaiwan via Digitalmars-d

On Tuesday, 13 June 2017 at 06:56:14 UTC, ketmar wrote:

Sebastien Alaiwan wrote:


On Sunday, 11 June 2017 at 17:59:54 UTC, ketmar wrote:

Guillaume Piolat wrote:

On Saturday, 10 June 2017 at 23:30:18 UTC, Liam McGillivray 
wrote:
I realize that there are people who want to continue using 
D as it is, but those people may continue to use D2.


Well, no thanks.
The very same strategy halved the community for D1/D2 split 
and almost killed D.


as you can see, D is alive and kicking, and nothing 
disasterous or fatal happens.
Yes, but could it have been a lot more alive and kicking, 
hadn't we shot ourselves in the foot with this D1/D2 split?


this question has no answer. can we do better if we will do 
everything right on the first try? of course!


My point was precisely that "not splitting D1/D2" might have 
corresponded to "doing things right".




Re: Makefile experts, unite!

2017-06-13 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 12 June 2017 at 15:57:13 UTC, Meta wrote:
On Monday, 12 June 2017 at 06:34:31 UTC, Sebastien Alaiwan 
wrote:

On Monday, 12 June 2017 at 06:30:16 UTC, ketmar wrote:

Jonathan M Davis wrote:


It's certainly a pain to edit the makefiles though


and don't forget those Great Copying Lists to copy modules. 
forgetting to include module in one of the lists was happened 
before, not once or twice...


I don't get it, could you please show an example?


https://github.com/dlang/phobos/pull/3843


Thanks!

Please, tell me that the Digital Mars implementation of Make 
*does* support generic rules...


Re: Isn't it about time for D3?

2017-06-13 Thread Sebastien Alaiwan via Digitalmars-d

On Sunday, 11 June 2017 at 17:59:54 UTC, ketmar wrote:

Guillaume Piolat wrote:

On Saturday, 10 June 2017 at 23:30:18 UTC, Liam McGillivray 
wrote:
I realize that there are people who want to continue using D 
as it is, but those people may continue to use D2.


Well, no thanks.
The very same strategy halved the community for D1/D2 split 
and almost killed D.


as you can see, D is alive and kicking, and nothing disasterous 
or fatal happens.
Yes, but could it have been a lot more alive and kicking, hadn't 
we shot ourselves in the foot with this D1/D2 split?




Re: Makefile experts, unite!

2017-06-12 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 12 June 2017 at 06:38:34 UTC, ketmar wrote:

Sebastien Alaiwan wrote:


The selling points, to me, are:
1) the automatic dependency detection through filesystem hooks
2) recipes also are dependencies
3) the genericity/low-level. I believe build systems should 
let me define my own abstractions, instead of trying to define 
for me what an "executable" or a "static library" should be.


i'm not that all excited by "1", though. tbh, i prefer simplier 
things, like regexp scanning. while regexp scanners may find 
more dependencies than there really are, they doesn't require 
any dirty tricks to work.


I understand your point; I was explaining to my colleagues 
yesterday that "1" was a "good step in the wrong direction".


I think dependencies should come from above: they should be 
forced at the build-system level. No more 'include' or 'import' 
(Bazel took one step in this direction).
The consequence is that you can no longer treat the build system 
of your project as a second-class citizen. It's the only place 
where you're forced to express something vaguely resembling a 
high-level architecture.


Instead of letting module implementations happily create 
dependencies on any other module implementation (which is an 
architectural sin!) and then resorting to system-level hacks to 
try to re-create the DAG of this mess (and good luck with 
generated files).


However: "1" is still a "good" step. Compared to where we are 
now, it's in theory equivalent to doing perfect regexp/gcc -MM 
scanning, in a language-agnostic way. It's a net win!





Re: Makefile experts, unite!

2017-06-12 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 12 June 2017 at 07:00:46 UTC, Jonathan M Davis wrote:
On Monday, June 12, 2017 06:34:31 Sebastien Alaiwan via 
Digitalmars-d wrote:

On Monday, 12 June 2017 at 06:30:16 UTC, ketmar wrote:
> Jonathan M Davis wrote:
>> It's certainly a pain to edit the makefiles though
>
> and don't forget those Great Copying Lists to copy modules. 
> forgetting to include module in one of the lists was 
> happened before, not once or twice...


I don't get it, could you please show an example?


posix.mak is a lot better than it used to be, but with 
win{32,64}.mak, you have to list the modules all over the 
place. So, adding or removing a module becomes a royal pain, 
and it's very easy to screw up. Ideally, we'd just list the 
modules once in one file that was then used across all of the 
platforms rather than having to edit several files every time 
we add or remove anything. And the fact that we're using make 
for all of this makes that difficult if not impossible 
(especially with the very limited make that we're using on 
Windows).


Are you implying that we are currently keeping compatibility with 
NMAKE (the 'make' from MS)?


GNU make's inclusion mechanism makes it possible and easy to 
share a list of modules between makefiles.


Before switching to a fancy BS, we might benefit from learning to 
fully take advantage of the one we currently have!
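For illustration (file names hypothetical, not the actual Phobos layout), GNU make's include directive lets a single fragment own the module list:

```make
# modules.mak -- the one and only list of modules
MODULES = std/algorithm.d std/range.d std/traits.d

# posix.mak, win32.mak and win64.mak would each contain:
include modules.mak

lib: $(MODULES)
```

Adding or removing a module then means editing one line in one file, instead of hunting down every copy of the list.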






Re: Makefile experts, unite!

2017-06-12 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 12 June 2017 at 06:30:16 UTC, ketmar wrote:

Jonathan M Davis wrote:


It's certainly a pain to edit the makefiles though


and don't forget those Great Copying Lists to copy modules. 
forgetting to include module in one of the lists was happened 
before, not once or twice...


I don't get it, could you please show an example?



Re: Makefile experts, unite!

2017-06-12 Thread Sebastien Alaiwan via Digitalmars-d

On Sunday, 11 June 2017 at 23:47:30 UTC, Ali Çehreli wrote:


I had the pleasure of working with Eyal Lotem, main author of 
buildsome. The buildsome team are aware of all pitfalls of all 
build systems and offer build*some* as an awe*some* ;) and 
correct build system:


  http://buildsome.github.io/buildsome/


Very interesting!

The selling points, to me, are:
1) the automatic dependency detection through filesystem hooks
2) recipes also are dependencies
3) the genericity/low-level. I believe build systems should let 
me define my own abstractions, instead of trying to define for me 
what an "executable" or a "static library" should be.


- Make has 3)
- Ninja has 2), 3)
- tup and buildsome have 1), 2), 3)

However, buildsome also seems to have a (simplified) make-like 
syntax.

Why did they have to write it in Haskell, for god's sake!



Re: Anyone tried to emscripten a D/SDL game?

2017-06-05 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 5 June 2017 at 10:48:54 UTC, Johan Engelen wrote:

On Monday, 5 June 2017 at 05:22:44 UTC, Sebastien Alaiwan wrote:


The whole simplified toolchain and example project live here: 
https://github.com/Ace17/dscripten


Are you planning on upstreaming some of your work to LDC? 
Please do! :-)


Don't let the small size of the LDC patch from dscripten deceive 
you; it mangles LDC in the easiest possible way to adapt it to 
the quirks of emscripten-fastcomp (the LLVM fork).
(The official LLVM branch doesn't even declare the 
asmjs/Emscripten triples.)


At this point, I think emscripten-fastcomp might be on its way 
out, especially if the WebAssembly backend from the official LLVM 
branch becomes widely used (at that point, I'll probably ditch 
emscripten-fastcomp from dscripten and use WebAssembly instead).


Some stuff could be easily upstreamed in a clean, 
non-emscripten-specific way:
- allowing the build of Phobos and druntime to be disabled from 
the cmake build.
- allowing the build of the debug-information generator to be 
disabled.
- allowing the build of the tests to be disabled.




Re: GDC options

2017-06-05 Thread Sebastien Alaiwan via Digitalmars-d-learn
On Wednesday, 22 March 2017 at 13:42:21 UTC, Matthias Klumpp 
wrote:
This is why most of my work in Meson to get D supported is 
adding weird hacks to translate compiler flags between GNU <-> 
non-GNU <-> DMD. It sucks quite badly, and every now and then I 
hit a weird corner case where things break.
For example: 
https://github.com/mesonbuild/meson/commit/d9cabe9f0ca6fb06808c1d5cf5206a7c5158517e


One day, we'll have to decide whether we should align build 
systems on compilers, or the other way around.
In the meantime, could everyone please align on clang and gcc? 
:-)


Re: Anyone tried to emscripten a D/SDL game?

2017-06-04 Thread Sebastien Alaiwan via Digitalmars-d

On Wednesday, 24 May 2017 at 17:47:42 UTC, Suliman wrote:

It's it's possible to [compile to WASM] with D?

It should be.

LLVM has a working WebAssembly backend; LDC might need some 
slight modifications to become aware of this new target. 
Everything that doesn't rely on the D runtime should work (except 
for bugs, e.g. 
https://github.com/kripken/emscripten-fastcomp/issues/187 ).


Then, I think the following blog post could be easily adapted for 
the D language:

https://medium.com/@mbebenita/lets-write-pong-in-webassembly-ac3a8e7c4591

However, please keep in mind that the target instruction set is 
only the tip of the iceberg; you have to provide a target 
environment (like SDL bindings or a simplified D runtime) so the 
generated code can do anything useful (like I/O).





Re: Anyone tried to emscripten a D/SDL game?

2017-06-04 Thread Sebastien Alaiwan via Digitalmars-d
On Wednesday, 24 May 2017 at 17:08:06 UTC, Nick Sabalausky 
"Abscissa" wrote:
On Wednesday, 24 May 2017 at 17:06:55 UTC, Guillaume Piolat 
wrote:


http://code.alaiwan.org/wp/?p=103


Awesome, thanks!


Author here; the blog post is indeed interesting, but it's 
outdated.
It's a lot simpler now that the "LDC + emscripten-fastcomp" 
combination works (no need for the intermediate C lowering 
anymore).


The whole simplified toolchain and example project live here: 
https://github.com/Ace17/dscripten
If you have questions about how this works, I'd be glad to answer 
them!


Re: htod for linux

2017-04-21 Thread Sebastien Alaiwan via Digitalmars-d-learn

On Friday, 21 April 2017 at 11:40:45 UTC, Mike Parker wrote:
On Friday, 21 April 2017 at 10:54:26 UTC, سليمان السهمي 
(Soulaïman Sahmi) wrote:
Is there an htod for linux or an equivalent that works with 
Cpp, there is dstep but it does not support Cpp.


From the very bottom of the htod doc page [1]:

"No Linux version."


https://dlang.org/htod.html


However, I wouldn't be surprised to get good results running it under Wine.


Re: The delang is using merge instead of rebase/squash

2017-03-22 Thread Sebastien Alaiwan via Digitalmars-d

It's common practice for "merge" commits to have the form:
"merge work from some/branch, fix PR #somenumber".

This basically tells me nothing about what the commit does.

We already know it's a merge commit, we don't care that much 
which branch it came from, and we don't want to dig into the bug 
tracker to translate the issue number into English.


We care more about how this merge modifies the code behaviour.

What if "merge" commits had better messages, not containing the 
word "merge" at all?
This way, the depth-0 history, which is always linear, would be 
human-readable and bisectable.


Re: GDC options

2017-03-21 Thread Sebastien Alaiwan via Digitalmars-d-learn

On Monday, 13 March 2017 at 11:06:53 UTC, Russel Winder wrote:
It is a shame that dmd and ldc do not just use the standard GCC 
option set.

Totally agreed.

Moreover, funny stuff like "dmd -of" (instead of the standard 
"-o ") breaks the automatic Msys path-conversion hack (the 
code translates Unix paths on the command line to Windows paths 
before invoking a non-Msys program), which makes it impossible to 
use dmd under Msys without wrapping it first.


pkg-config is also a real pain to use with dmd (pkg-config's 
output needs to be post-processed so that it has the form 
"-L-lstuff" instead of "-lstuff").
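A hedged sketch of the post-processing in question (the sed rewrite is one possible approach, not an official recommendation):

```shell
# Suppose pkg-config printed these flags:
raw_flags="-lSDL2 -lm"   # in practice: raw_flags=$(pkg-config --libs sdl2)

# dmd only forwards linker flags prefixed with -L, so rewrite each -l:
dmd_flags=$(printf '%s' "$raw_flags" | sed 's/-l/-L-l/g')

echo "$dmd_flags"        # -L-lSDL2 -L-lm
```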


This is an issue, because it makes it very hard to write portable 
makefiles for programs containing D code. Too bad, because the D 
code itself is actually platform-independent, and there's been a 
lot of work in Phobos to make it easy to write such code.


D was designed to be binary compatible with the C ABI; however, 
having a compiler whose command line behaves so differently from 
gcc makes it harder to actually work with existing C libs.


This is actually the main reason why I almost exclusively use 
gdc: to have one Makefile, for all platforms, allowing native and 
cross-compilation with no platform-specific special cases.




Re: Testing in the D Standard Library

2017-01-26 Thread Sebastien Alaiwan via Digitalmars-d-announce

On Monday, 23 January 2017 at 01:52:29 UTC, Chris Wright wrote:

On Sun, 22 Jan 2017 20:18:11 +, Mark wrote:

Have you considered adding randomized tests to Phobos?


Randomized testing is an interesting strategy to use alongside 
deterministic testing. It produces more coverage over time. 
However, any given test run only has a fraction of the coverage 
that you see over a large number of runs.


In other words, if the randomized tests catch something, you 
don't know who dun it. This is generally considered a bad thing.


Phobos does have some tests that use a PRNG. They all use a 
fixed seed. This is a shortcut to coming up with arbitrary test 
data; it's not a way to increase overall coverage.


I think the right way to do it is to have a nightly randomized 
test build, but since I'm not willing to do the work, I don't 
have much say.


This. So much this.
Unit tests that loop over many randomly generated input test 
vectors are just a waste of everybody's CPU time.


Don't get me wrong: fuzzing is also necessary. But it relies on 
an arbitrary time limit, which is hardly compatible with keeping 
a test suite fast. Which means it should be done in a separate 
validation process from the unit tests.




Re: [Semi-OT] I don't want to leave this language!

2016-12-07 Thread Sebastien Alaiwan via Digitalmars-d-learn
On Wednesday, 7 December 2016 at 21:52:22 UTC, Jonathan M Davis 
wrote:
On Wednesday, December 07, 2016 15:17:21 Picaud Vincent via 
Digitalmars-d- learn wrote:
That being said, if someone wants to make their life harder by 
insisting on using D without even druntime, then that's their 
choice. I think that it's an unnecessarily extreme approach 
even for really performance-centric code, but they're free to 
do what they want.


It's not only a performance issue. Sometimes, your target 
platform simply doesn't have the runtime nor Phobos: Emscripten 
(asmjs), kernel mode code or bare metal embedded stuff.


I'm using D without druntime for my D-to-asmjs project. Avoid 
druntime certainly makes my life harder, but it makes the whole 
project possible.




Re: strange -fPIC compilation error

2016-10-31 Thread Sebastien Alaiwan via Digitalmars-d-learn

Hello,
Starting with GCC 6.2, -fpie is becoming the default setting at 
compile time and at link time.
As dmd uses GCC to link, the code now needs to be compiled with a 
special option. Which means you need, at the moment, to add the 
following options to your dmd.conf:

 -defaultlib=libphobos2.so -fPIC

(The change in GCC is related to security and address space 
randomization.)


Re: Comparing compilation time of random code in C++, D, Go, Pascal and Rust

2016-10-28 Thread Sebastien Alaiwan via Digitalmars-d-announce

On Thursday, 27 October 2016 at 12:11:09 UTC, Johan Engelen wrote:

On Thursday, 27 October 2016 at 06:43:15 UTC, Sebastien Alaiwan
If code generation/optimization is the bottleneck, a 
"ccache-for-D" ("dcache"?) tool might be very beneficial.


See 
https://johanengelen.github.io/ldc/2016/09/17/LDC-object-file-caching.html


I also have a working dcache implementation in LDC but it still 
needs some polishing.
Hashing the LLVM bitcode ... how come I didn't think about this 
before!
Unless someone manages to do the same thing with gdc + GIMPLE, 
this could very well be the "killer" feature of LDC ...
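For illustration, a toy "dcache" wrapper could look like the sketch below. It is hypothetical and keys the cache on raw source bytes plus flags, which (unlike hashing the post-semantic LLVM bitcode, as above) would miss changes in transitively imported modules:

```python
import hashlib
import os
import shutil
import subprocess

CACHE_DIR = "/tmp/dcache"  # hypothetical cache location

def cache_key(source_bytes, flags):
    # The cache key covers the source contents and the compiler flags.
    h = hashlib.sha256()
    h.update(source_bytes)
    h.update("\0".join(flags).encode())
    return h.hexdigest()

def cached_compile(source, flags):
    # Reuse a previously built object file when (contents, flags) match.
    key = cache_key(open(source, "rb").read(), flags)
    obj = os.path.join(CACHE_DIR, key + ".o")
    if not os.path.exists(obj):
        os.makedirs(CACHE_DIR, exist_ok=True)
        subprocess.run(["gdc", "-c", source, "-o", obj] + flags, check=True)
    out = os.path.splitext(source)[0] + ".o"
    shutil.copy(obj, out)  # cache hit or miss, deliver the object file
    return out
```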


Having the fastest compiler on earth still doesn't provide 
scalability; interestingly, when I build a full LLVM+LDC 
toolchain, the longest step is the compilation of the dmd 
frontend. It's the only part that is:

1) not cached: all the other source files from LLVM are ccache'd.
2) sequential: my CPU load drops to 12.5%, although it's near 
100% for LLVM.





Re: Comparing compilation time of random code in C++, D, Go, Pascal and Rust

2016-10-27 Thread Sebastien Alaiwan via Digitalmars-d-announce
On Wednesday, 19 October 2016 at 17:05:18 UTC, Gary Willoughby 
wrote:

This was posted on twitter a while ago:

Comparing compilation time of random code in C++, D, Go, Pascal 
and Rust


http://imgur.com/a/jQUav

Very interesting, thanks for sharing!

From the article:
"Surprise: C++ without optimizations is the fastest! A few other 
surprises: Rust also seems quite competitive here. D starts out 
comparatively slow."


These benchmarks seem to support the idea that it's not the 
parsing which is slow, but the code generation phase. If code 
generation/optimization is the bottleneck, a "ccache-for-D" 
("dcache"?) tool might be very beneficial.


(However, then why do C++ standard committee members believe that 
the replacement of text-based #includes with C++ modules 
("import") will speed up the compilation by one order of 
magnitude?)


Working simultaneously on equally sized C++ projects and D 
projects, I believe that a "dcache" (using hashes of the AST?) 
might be usefull. The average project build time in my company is 
lower for C++ projects than for D projects (we're using "ccache 
g++ -O3" and "gdc -O3").





Re: Templates are slow.

2016-09-08 Thread Sebastien Alaiwan via Digitalmars-d

On Thursday, 8 September 2016 at 05:02:38 UTC, Stefan Koch wrote:
(Don't do this preemptively, ONLY when you know that this 
template is a problem!)


How would you measure such things?
Is there such a thing like a "compilation time profiler" ?

(Running oprofile on a dmd built with debug info first comes to 
mind; however, this would only give me statistics on dmd's source 
code, not mine.)


Re: How to debug D programs

2016-08-08 Thread Sebastien Alaiwan via Digitalmars-d-debugger

On Monday, 8 August 2016 at 10:36:39 UTC, eugene wrote:

Hello, everyone,
question to you: how do you debug D programs on gnu/linux? and 
what ides are you using for debugging?

I only use cgdb (ncurses frontend) or simply gdb.


Re: Running a D game in the browser

2016-08-07 Thread Sebastien Alaiwan via Digitalmars-d-announce

On Friday, 5 August 2016 at 13:18:38 UTC, Johan Engelen wrote:

That patch doesn't look too bad.
Could you introduce a CMake option for building with 
Emscripten-fastcomp?
And a #define "LDC_LLVM_EMSCRIPTEN" or something like that, so 
that you can change `#if LDC_LLVM_VER >= 309 && 0` to `#if 
LDC_LLVM_VER >= 309 && !LDC_LLVM_EMSCRIPTEN`.


Should be mergable into LDC master then!


I could definitely formalize things a bit, but any patch of this 
kind would quickly be obsolete, as Emscripten catches up with 
LLVM versions.
Moreover, I don't feel comfortable polluting LDC with the 
specificities of some obscure use case (as cool as this use case 
can be!).


It might be preferable - though harder - to patch Emscripten so 
it aligns on LLVM official API versions.


Re: Running a D game in the browser

2016-08-04 Thread Sebastien Alaiwan via Digitalmars-d-announce
On Thursday, 4 August 2016 at 19:17:34 UTC, Sebastien Alaiwan 
wrote:
at the moment, I have a patch to make the build work (only 
for the binary "ldc2", not other tools of the package).

I created a dedicated github branch "fastcomp-ldc".
The patch: 
https://github.com/Ace17/dscripten/blob/fastcomp-ldc/ldc2.patch





Re: Running a D game in the browser

2016-08-04 Thread Sebastien Alaiwan via Digitalmars-d-announce

On Wednesday, 3 August 2016 at 20:40:47 UTC, Kai Nacke wrote:

That's awesome!

Do you still know the modifications you made to compile LDC 
with emscripten-fastcomp? I would be interested to have a look 
into the "PNaCl legalization passes" problem.

That would be great, and might simplify the toolchain a lot.
First we must get the build to pass again; I'm working on it at 
the moment, and I have a patch to make the build work (only for 
the binary "ldc2", not the other tools of the package). I can 
send it to you if you want.


Re: Running a D game in the browser

2016-08-04 Thread Sebastien Alaiwan via Digitalmars-d-announce

On Thursday, 4 August 2016 at 09:57:57 UTC, Mike Parker wrote:
On Wednesday, 3 August 2016 at 20:26:23 UTC, Sebastien Alaiwan 
wrote:




And a blogpost explaining the technique is available here:
http://code.alaiwan.org/wp/?p=103
(Spoiler: at some point, it involves lowering the source code 
back to C)


Reddit: 
https://www.reddit.com/r/programming/comments/4w3svq/running_a_d_game_in_the_browser/


HN:
https://news.ycombinator.com/item?id=12225036


Re: Running a D game in the browser

2016-08-04 Thread Sebastien Alaiwan via Digitalmars-d-announce

On Thursday, 4 August 2016 at 05:03:17 UTC, Joel wrote:
[snip]
Though, it looks like the score isn't reset when you start a 
new game. Or, is it intended that way?


Oh, I read it wrong, the score is reset. Dummy, me!


It's just that you're becoming better at this silly game :-)

Thanks for your replies!

As Mark guessed, I didn't spend a lot of time on the game-design 
side (but I'm still interested in game-design related comments).


It's more of a proof of concept, using D.

Using C++, there already are impressive demos:
https://kripken.github.io/BananaBread/wasm-demo/index.html



Running a D game in the browser

2016-08-03 Thread Sebastien Alaiwan via Digitalmars-d-announce

Hi,

I finally managed to compile some D code to asm.js, using 
Emscripten.


It had been done by one dude several years ago, but some changes 
in the inner workings of Emscripten (the introduction of 
fastcomp, also probably combined with changes in the way LDC 
generates LLVM bitcode) made it impossible to use the same 
technique.


You can play a minimalistic demo:
http://code.alaiwan.org/dscripten/full.html

The source code + toolchain deployment scripts are available on 
github:

https://github.com/Ace17/dscripten

And a blogpost explaining the technique is available here:
http://code.alaiwan.org/wp/?p=103
(Spoiler: at some point, it involves lowering the source code 
back to C)


Please let me know what you think!



Re: New __FILE_DIR__ trait?

2016-07-28 Thread Sebastien Alaiwan via Digitalmars-d
By the way, I really think __FILE_FULL_PATH__ should be a rdmd 
feature, not dmd.


rdmd could set an environment variable "RDMD_FULL_PATH" or 
something like this (replacing argv[0]), instead of potentially 
making the compilation depend on the working copy location on 
disk...


Re: Transform/Compile to C/CPP as a target

2016-07-28 Thread Sebastien Alaiwan via Digitalmars-d-learn

On Monday, 25 July 2016 at 07:53:17 UTC, Stefan Koch wrote:

On Saturday, 23 July 2016 at 12:27:24 UTC, ParticlePeter wrote:
Is there any kind of project or workflow that converts D 
(subset) to C/CPP ?


The short answer is no, not for any recent version of D.


The long answer is it's kind of possible, but the resulting C 
code is not human-readable.
I just managed today to achieve some transformation to C with the 
below script:


# compile the D modules to llvm bitcode
$ ldc2 hello.d -c -output-ll -ofhello.ll
$ ldc2 lib.d -c -output-ll -oflib.ll

# merge them into one LLVM bitcode module
$ llvm-link-3.8 hello.ll lib.ll -o full.bc
$ llvm-dis-3.8 full.bc -o=full.ll

# convert bitcode to C
$ llvm-cbe full.ll

# patch the generated C, so it's compilable
$ sed -i "s/.*APInt.*//" full.cbe.c
$ sed -i "s/^uint32_t main(uint32_t llvm_cbe_argc_arg, 
uint8_t\*\* llvm_cbe_argv_arg)/int main(int llvm_cbe_argc_arg, 
char** llvm_cbe_argv_arg)/" full.cbe.c
$ sed -i "s/^uint32_t main(uint32_t, uint8_t\*\*)/int main(int, 
char**)/" full.cbe.c


# compile the C program and run it.
$ gcc -w full.cbe.c -o full.exe -lphobos2
$ ./full.exe
Hello, world: 46

I only tried this with a very minimalistic subset of D at the 
moment.
Most of the magic occurs in the "llvm-cbe" program, which is a 
"resurrected LLVM C backend" ( 
https://github.com/JuliaComputing/llvm-cbe ).







Re: New __FILE_DIR__ trait?

2016-07-28 Thread Sebastien Alaiwan via Digitalmars-d

On Thursday, 28 July 2016 at 06:21:06 UTC, Jonathan Marler wrote:

auto __DIR__(string fileFullPath = __FILE_FULL_PATH__) pure
{
return fileFullPath.dirName;
}


Doesn't work; I don't think you can wrap such things (__FILE__ 
and friends):


import std.stdio;

int main()
{
  printNormal();
  printWrapped();
  return 0;
}

void printNormal(int line = __LINE__)
{
  writefln("%s", line);
}

void printWrapped(int line = __MY_LINE__)
{
  writefln("%s", line);
}

// wrapped version
int __MY_LINE__(int line = __LINE__)
{
  return line;
}

$ rdmd demo.d
5
15  (should be 6!)

Thus, the suggested implementation of __DIR__ would behave very 
differently from a builtin one. I'm not saying we need a builtin 
one, however, it might not be a good idea to name it this way.





Re: proposal: private module-level import for faster compilation

2016-07-25 Thread Sebastien Alaiwan via Digitalmars-d

On Sunday, 24 July 2016 at 15:33:04 UTC, Chris Wright wrote:
Look at std.algorithm. Tons of methods, and I imported it just 
for `canFind` and `sort`.


Look at std.datetime. It imports eight or ten different 
modules, and it needs every one of those for something or 
other. Should we split it into a different module for each 
type, one for formatting, one for parsing, one for fetching the 
current time, etc? Because that's what we would have to do to 
work around the problem in user code.


That would be terribly inconvenient and would just waste 
everyone's time.


I agree with you, but I think you got me wrong.

Modules like std.algorithm (and nearly every other, in any 
standard library) have very low cohesion. As you said, most of 
the time, the whole module gets imported, although only 1% of it 
is going to be used.


(selective imports such as "import std.algorithm : canFind;" help 
you reduce namespace pollution, but not dependencies, because a 
change in the imported module could, for example, change symbol 
names.)


I guess low cohesion is OK for standard libraries, because 
splitting this into lots of files would result in long import 
lists on the user side, e.g:


import std.algorithm.canFind;
import std.algorithm.sort;
import std.algorithm.splitter;

(though, this seems a lot like most of us already do with 
selective imports).


But my point wasn't about the extra compilation time resulting 
from the unwanted import of 99% of std.algorithm.


My point is about the recompilation frequency of *your* modules, 
due to changes in one module.


Although std.algorithm has low cohesion, it never changes 
(upgrading one's compiler doesn't count, as everything needs to 
be recompiled anyway).


However, if your project has a "utils.d" composed of mostly 
unrelated functions, that is imported by almost every module in 
your project, and that is frequently changed, then I believe you 
have a design issue.


Any compiler is going to have a very hard time trying to avoid 
recompiling modules which only imported something in the 99% of 
utils.d which wasn't modified (and, by the way, it's not 
compatible with the separate compilation model).


Do you think I'm missing something here?






Re: proposal: private module-level import for faster compilation

2016-07-24 Thread Sebastien Alaiwan via Digitalmars-d

On Wednesday, 20 July 2016 at 19:59:42 UTC, Jack Stouffer wrote:
I concur. If the root problem is slow compilation, then there 
are much simpler, non-breaking changes that can be made to fix 
that.


I don't think compilation time is the problem, actually. It has 
more to do with dependency management and encapsulation.


Speeding up compilation should never be considered as an 
acceptable solution here, as it's not scalable: it just pushes 
the problem away, until your project size increases enough.


Here's my understanding of the problem:

// client.d
import server;
void f()
{
  Data x;
  // Data.sizeof depends on something in server_private.
  x.something = 3; // offset to 'something' depends on 
privateStuff.sizeof.

}

// server.d
private import server_private;
struct Data
{
  Opaque someOtherThing;
  int something;
}

// server_private.d
struct Opaque
{
  byte[43] privateStuff;
}

If you're doing separate compilation, your dependency graph has 
to express that "client.o" depends on "client.d" and "server.d", 
but also on "server_private.d".


GDC's "-fdeps" option properly lists all transitively imported 
files (disclaimer: this was my pull request). It's irrelevant 
here whether imports are private or public; the dependency is 
still there.


In other words, changes to "server_private.d" must always trigger 
recompilation of "client.d".
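The point can be checked mechanically. A rough sketch (a naive regex scan over in-memory sources, not a real D parser) of the transitive import closure for the example modules:

```python
import re

# The example modules from above, as in-memory sources.
SOURCES = {
    "client": "import server;\nvoid f() { /* ... */ }",
    "server": "private import server_private;\nstruct Data { /* ... */ }",
    "server_private": "struct Opaque { /* byte[43] privateStuff; */ }",
}

IMPORT_RE = re.compile(r"\bimport\s+([\w.]+)\s*;")

def transitive_deps(module):
    # Everything whose change must retrigger compilation of `module`:
    # the transitive import closure, private imports included.
    seen, todo = set(), [module]
    while todo:
        m = todo.pop()
        if m not in seen:
            seen.add(m)
            todo.extend(IMPORT_RE.findall(SOURCES.get(m, "")))
    return seen - {module}

print(sorted(transitive_deps("client")))  # ['server', 'server_private']
```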


I believe the solution proposed by the OP doesn't work, because 
of voldemort types. It's always possible to return a struct whose 
size depends on something deeply private.


// client.d
import server;
void f()
{
  auto x = getData();
  // Data.sizeof depends on something in server_private.
  x.something = 3; // offset to 'something' depends on 
privateStuff.sizeof.

}

// server.d
auto getData()
{
  private import server_private;

  struct Data
  {
Opaque someOtherThing;
int something;
  }

  Data x;
  return x;
}

// server_private.d
struct Opaque
{
  byte[43] privateStuff;
}

My conclusion is that maybe there's no problem in the language, 
nor in the dependency generation, nor in the compiler 
implementation.

Maybe it's just a design issue.



Re: A suggestion for modules names / sharing code between projects

2016-03-03 Thread Sebastien Alaiwan via Digitalmars-d

On Thursday, 3 March 2016 at 08:39:51 UTC, Laeeth Isharc wrote:

On Thursday, 3 March 2016 at 06:53:33 UTC, Sebastien Alaiwan
If, unfortunately, I happen to run into a conflict, i.e my 
project uses two unrelated libraries sharing a name for their 
top-level namespace, there's no way out for me, right?

(Except relying on shared objects to isolate symbols)


See modules doc page off dlang.org.
import io = std.stdio;
void main()
{ io.writeln("hello!"); // ok, calls std.stdio.writeln
std.stdio.writeln("hello!"); // error, std is undefined
writeln("hello!"); // error, writeln is undefined
}
Thanks, but I was talking about module name collisions, e.g. if 
my project uses two unrelated libraries, both - sloppily - 
defining a module called "geom.utils".


Can this be worked-around using renamed imports?

How is "import myUtils = geom.utils" going to know which of both 
"geom.utils" I'm talking about?
As far as I understand, _module_ renaming at import is rather a 
tool for defining shorthands, than for disambiguation.


I agree that it might not be a problem in practice - just wanting 
to make sure I'm not missing something.




Re: A suggestion for modules names / sharing code between projects

2016-03-02 Thread Sebastien Alaiwan via Digitalmars-d

Hi guys,

thanks a lot for your answers!

On Wednesday, 2 March 2016 at 22:42:18 UTC, ag0aep6g wrote:

On 02.03.2016 21:40, Sebastien Alaiwan wrote:
- I only work with separate compilation, using gdc 
(Windows-dmd produces OMF, which is a dealbreaker ; but I use 
rdmd a lot for small programs).
dmd gives you COFF with -m64 or -m32mscoff (-m32, which is the 
default, produces OMF).

Thanks for the tip, I didn't know that.
I have actually deeper reasons for not using dmd (seemless 
cross-compilation, e.g "make CROSS_COMPILE=i686-w64-mingw32" or 
"make CROSS_COMPILE=arm-linux-gnueabi" without any special case 
in the Makefile ; and the way its unusual command line 
(-oftarget) conflicts with MSYS's path conversion, etc. ).


I don't know the exact reasons why the module system works like 
it does. Maybe you're on to something, maybe not. Anyway, here 
are some problems/surprises I see with your alternative idea 
(which basically means using relative file paths, if I get you 
right):

Yes, I think we can say that.

I 100% agree with all the limitations you listed ; which makes a 
forced absolute-import system appear to be a lot more comfortable!


There's also one big limitation I see that you didn't list:
* If you're importing lib_a and lib_b both depending on lib_c, 
you need them to expect lib_c to be at the same path/namespace 
location. The current module system guarantees this, because all 
package names are global.


(And linking with precompiled binaries might become a nightmare)

However, using global package names means you're relying on the 
library providers to avoid conflicts, in their package names, 
although they might not even know each other. Which basically 
implies to always put your name / company name in your package 
name, like "import alaiwan.lib_algo.array2d;", or "import 
apple.lib_algo.array2d".
Or rename my "lib_algo" to something meaningless / less common, 
like "import diamond.array2d" (e.g Derelict, Pegged, imaged, 
etc.).


If, unfortunately, I happen to run into a conflict, i.e my 
project uses two unrelated libraries sharing a name for their 
top-level namespace, there's no way out for me, right?

(Except relying on shared objects to isolate symbols)


I don't think your alternative is obviously superior.
Me neither :-) I'm just sharing my doubts about the current 
system!


The current way of having exactly one fixed name per module 
makes it easy to see what's going on at any point. You have to 
pay for that by typing a bit more, I guess.

Yes, I guess you're right.

I'm going to follow your advice and Mike's for my projects, this 
will allow me to see things more clearly. Anyway, thanks again to 
all of you for your answers!





Re: A suggestion for modules names / sharing code between projects

2016-03-02 Thread Sebastien Alaiwan via Digitalmars-d

On Wednesday, 2 March 2016 at 08:50:50 UTC, Mike Parker wrote:
I'm curious what sort of system you have in mind. How does the 
current system not work for you and what would you prefer to 
see?


First, you must know that I've been a C++ programmer for way too 
much time ;
and that my way of thinking about modules has probably been - 
badly - shaped accordingly :-).


However, it feels wrong that modules (foo, bar) inside the same 
package ("pkg") need to repeat the "absolute" package name when 
referencing each other, I mean "import pkg.foo" instead of 
"import foo". Not because of the extra typing, but because of the 
dependency it introduces. But as I said, maybe my vision is wrong.


I'm going to answer your first question, I hope you don't get 
bored before the end ;-)


First some generic stuff:
- The libraries I write are source-only, and I never install them 
system-wide, as many projects might depend on different revisions 
of one library.
- I only work with separate compilation, using gdc (Windows-dmd 
produces OMF, which is a dealbreaker ; but I use rdmd a lot for 
small programs).
- I often mix C code with D, sometimes, C++ code. I compile them 
using gcc/g++.
- My build system is composed of simple dependency-aware 
non-recursive makefiles (BTW, this is a great way to keep your 
code decoupled, otherwise you might not get feedback that you're 
making a dependency mess soon enough!)


This is the best way I found to be able to check out one project, 
type 'make', and have everything build properly, whether I'm 
doing it on GNU/Linux or on Windows (MSYS).


Everything is under version control using GNU Bazaar (which 
basically is to git what D is to C++).


I have a small library, which I call "lib_algo", containing 
semi-related utilities (containers, topological sort, etc.) which 
I use in many projects.


$ cd myLibs
$ find lib_algo -name "*.d" -or -name "*.mk" -or -name "Makefile"
lib_algo/options.d
lib_algo/misc.d
lib_algo/pointer.d
lib_algo/set.d
lib_algo/queue.d
lib_algo/algebraic.d
lib_algo/vector.d
lib_algo/bresenham.d
lib_algo/topological.d
lib_algo/fix_array.d
lib_algo/test.d
lib_algo/pool.d
lib_algo/stack.d
lib_algo/list.d
lib_algo/array2d.d
lib_algo/project.mk
lib_algo/Makefile
(I know, there's some overlap with Phobos today)

The project.mk simply contains the list of source files of the 
library.
The test.d file actually contains a main function, which I use to 
run manual tests of the library functionalities. It's not 
included in the project.mk list. And it obviously must import the 
files needing to be tested.


With this flat directory structure, the only way test.d can 
import files is by saying "import stack;", not "import 
lib_algo.stack;".
(Adding ".." to the list of import directories might do the 
trick, but this would definitely feel wrong.)


Now, let's see a project using this "lib_algo", I call it 
"myGame".


$ cd myProjects
$ find 
myGame/lib_algo/options.d
myGame/lib_algo/
myGame/lib_algo/array2d.d
myGame/lib_algo/project.mk
myGame/lib_algo/Makefile
myGame/src/main.d
myGame/Makefile


"lib_algo" simply is an external (aka "git submodule"), pointing 
to a specific revision of the repository "lib_algo".
The top-level "Makefile" of the project includes 
"lib_algo/project.mk", and all is well: I can compose the list of 
source files to compile without having to rely on 
"convenience/intermediate" libraries.


Now, if "src/main.d" wants to import "array2d.d", currently, I 
must write "import array2d;", and add "-I myGame/lib_algo" to my 
compiler command line.

I said in a previous post that this would not scale.
That's because with this scheme, obviously, I can't have two 
modules having the same name, even if they belong to different 
libraries.
Let's say I have "lib_algo/utils.d" and "lib_geom/utils.d"; if 
"main.d" says "import utils;", it's ambiguous.
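A sketch of the lookup rule behind this ambiguity (a simplification, not the compiler's actual code): a dotted module name is turned into a relative path and probed against each -I root in order, so with both libraries on the import path, one silently shadows the other.

```python
import os
import tempfile

def resolve_import(module_name, import_paths):
    # "import a.b;" -> probe <root>/a/b.d under each -I root, in order.
    rel = module_name.replace(".", os.sep) + ".d"
    for root in import_paths:
        path = os.path.join(root, rel)
        if os.path.exists(path):
            return path
    return None

# Recreate the conflicting layout: two libraries, each with a utils.d.
top = tempfile.mkdtemp()
for lib in ("lib_algo", "lib_geom"):
    os.makedirs(os.path.join(top, lib))
    with open(os.path.join(top, lib, "utils.d"), "w") as f:
        f.write("module utils;\n")

paths = [os.path.join(top, "lib_algo"), os.path.join(top, "lib_geom")]
hit = resolve_import("utils", paths)
print(hit.endswith(os.path.join("lib_algo", "utils.d")))  # True: first -I wins
```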


Clearly, "main.d" needs to say "import lib_algo.utils". However, 
if I do this, I must add a module declaration in 
"lib_algo/utils.d", and I must change "lib_algo/test.d" so it 
says "import lib_algo.utils".

This is where it begins to feel clumsy. Why should lib_algo need 
to be modified in order to resolve an ambiguity happening in 
"myGame"?

Moreover, we saw earlier that this modification of the import 
done by "lib_algo/test.d" had consequences on the directory 
structure of lib_algo. In order to be able to build the test 
executable for lib_algo, I now need to have the following 
structure:
$ find lib_algo
lib_algo/lib_algo/options.d
lib_algo/lib_algo/
lib_algo/lib_algo/array2d.d
lib_algo/test.d
(test.d can be put inside the inner "lib_algo")

Am I the only one to find this counterintuitive - and probably 
wrong?

What am I missing there?

(And please, don't tell me that I need to switch to 
dub+git+dmd+D_source_files_only ; I like my tools to be 
composable)


Thanks for reading!



Re: A suggestion for modules names / sharing code between projects

2016-03-02 Thread Sebastien Alaiwan via Digitalmars-d
On Wednesday, 2 March 2016 at 00:16:57 UTC, Guillaume Piolat 
wrote:

Would this work?

1. pick a single module name like

module math.optimize;

2. import that module with:

import math.optimize;

3. put this module in a hierarchy like that:

math/optimize.d

4. pass -I to the compiler

However it may clutter your module namespace a bit more.


Yeah, this is what I'm going to do: a flat hierarchy, with one 
'src' directory and one 'extra' directory in the project root, 
globally unique package names, passing -Isrc and -Iextra to the 
compiler, and ... module declaration directives.

I was trying to avoid them, but maybe it's not possible.

I still think that software entities shouldn't depend upon their 
own name, or their absolute location in the top-level namespace.


This is true for classes, functions/methods, variables, 
structures ... maybe this rule becomes invalid when third-party 
packages enter a project?




Re: A suggestion for modules names / sharing code between projects

2016-02-29 Thread Sebastien Alaiwan via Digitalmars-d

Hi Mike, thanks for taking the time to answer.
If I understand you correctly, you're advising me to keep my file 
hierarchy mostly in sync with my module hierarchy (except for one 
potential leading "src/" or "libs/" that I would add to the 
import search directories). That's fine with me.


However, what's the point of having module declarations 
directives in the language then?


If I understand correctly, the only way to make an interesting 
use of these directives makes separate compilation impossible.


I know dmd is lightning-fast and everything, and not having to 
manage build dependencies anymore is very attractive.


However, we still need separate compilation. Otherwise your 
turnaround time is going to resemble a tractor pulling 
competition as your project grows. Recompiling everything 
everytime you change one module is not an option ; Some of the 
source files I work on in my company require 30s to compile 
*alone* ; it's easy to reach, especially when the language has 
features like templates and CTFE (Pegged anyone?).





Re: A suggestion for modules names / sharing code between projects

2016-02-29 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 29 February 2016 at 21:35:48 UTC, Adam D. Ruppe wrote:
On Monday, 29 February 2016 at 19:03:53 UTC, Sebastien Alaiwan 
wrote:
What if I change the name of the package? Now I might have to 
change hundreds of module declarations from "module 
oldname.algo;" to "module newname.algo;".


That's easy to automate, just run a find/replace across all the 
files, or if you want to go really fancy, modify hackerpilot's 
dfix to do it for you.


Yes, I know ; but how can you say there's no redundancy when I 
have to rely on find/replace to keep all these declarations in 
sync?


Re: A suggestion for modules names / sharing code between projects

2016-02-29 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 29 February 2016 at 21:33:37 UTC, Adam D. Ruppe wrote:
On Monday, 29 February 2016 at 21:04:52 UTC, Sebastien Alaiwan 
wrote:
Ok so now let's say I rename the directory "lib" to "foo". If 
I don't change the "import lib.hello" to "import foo.hello", 
how is the compiler going to find "hello.d"?


You have to tell the compiler where it is.

Is it only possible with separate compilation?


Either way, the module name is *not* optional.


(As a reminder, as I said, I'm using separate compilation)


meh that's part of your problem, why are you doing it that way?
Because it reduces turnaround time (this is something I actually 
measured on the project I'm currently working on). Having to 
recompile everything each time I make a modification takes up to 
25s on this project. By the way, I'm using gdc. Is it also part 
of my problem? :-)





Re: A suggestion for modules names / sharing code between projects

2016-02-29 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 29 February 2016 at 20:59:45 UTC, Adam D. Ruppe wrote:
On Monday, 29 February 2016 at 20:05:11 UTC, Sebastien Alaiwan 
wrote:
Although, I'm trying to avoid having these redundant module 
declaration directives at the beginning of each of my library 
files.


Those module declarations aren't redundant - they are virtually 
required (I think it is a mistake that they aren't explicitly 
required in all cases, actually)


The file layout does not matter to the language itself. Only 
that module declaration does - it is NOT optional if you want a 
package name.


$ find
main.d
lib/hello.d

$ cat main.d
import lib.hello;

$ cat lib/hello.d
module lib.hello;

Ok so now let's say I rename the directory "lib" to "foo". If I 
don't change the "import lib.hello" to "import foo.hello", how is 
the compiler going to find "hello.d"?

(As a reminder, as I said, I'm using separate compilation)




Re: A suggestion for modules names / sharing code between projects

2016-02-29 Thread Sebastien Alaiwan via Digitalmars-d

On Monday, 29 February 2016 at 19:56:20 UTC, Jesse Phillips wrote:

I've used this pattern.

 ./projectA/lib/math/algo.d
 ./projectA/lib/math/lcp.d
 ./projectA/lib/math/optimize.d
 ./projectA/gui/main.d

 ./projectB/app/render/gfx.d
 ./projectB/app/render/algo.d
 ./projectB/lib/math/algo.d
 ./projectB/lib/math/lcp.d
 ./projectB/lib/math/optimize.d
 ./projectB/main.d

Dub doesn't like me too much, but in general it works.

Note that it doesn't matter where your modules live, if they 
have the declared module name in the file:


module math.optimize;

If DMD reads this file all your imports should look like:

import math.optimize;

Even if the file is in app/physics/math.

Yeah, I'm using this pattern too at the moment ;
Although, I'm trying to avoid having these redundant module 
declaration directives at the beginning of each of my library 
files.




Re: A suggestion for modules names / sharing code between projects

2016-02-29 Thread Sebastien Alaiwan via Digitalmars-d
On Monday, 29 February 2016 at 19:23:08 UTC, Jonathan M Davis 
wrote:
My solution would be to never have the same file be part of 
multiple projects. Share code using libraries rather than just 
sharing the files. Then modules won't be changing where they 
are in a project or anything like that. The shared code goes 
into a library, and whichever projects need it link against 
that library. You get better modularization and encapsulation 
that way and completely avoid the problem that you're having.
Thanks for your answer ; from what I understand, you're basically 
telling me to only import binary/precompiled libraries, right?


Although I have to admit I had never considered this solution, 
"never have the same file be part of multiple projects" seems 
extremely restrictive to me.
How is it going to work if I want to share heavily templated 
code, like a container library?


A suggestion for modules names / sharing code between projects

2016-02-29 Thread Sebastien Alaiwan via Digitalmars-d

Hi all,

I've come across the following problem a number of times.
Let's say I have projects A and B, sharing code but having 
different tree structures.


$ find projectA
./projectA/internal/math/algo.d
./projectA/internal/math/lcp.d
./projectA/internal/math/optimize.d
./projectA/gui/main.d

$ find projectB
./projectB/app/render/gfx.d
./projectB/app/render/algo.d
./projectB/app/physics/math/algo.d
./projectB/app/physics/math/lcp.d
./projectB/app/physics/math/optimize.d
./projectB/main.d

The directory "math" is shared between projects. It actually is 
an external submodule, so it has a standalone existence as a 
library and might one day be used by projectC.

(In the context of this issue, I'm using separate compilation).

I'd like to be able to write, in projectA's main:

import internal.math.optimize;

This requires me to add, at the beginning of the "optimize.d" 
file, this module declaration:

module internal.math.optimize;

However, this "optimize.d" file is shared between projects; now 
it has become specific to projectA.


How am I supposed to share code between projects then?

Clearly, putting everything back into the same "file namespace" 
by adding every subdirectory of my project as an import path on 
the command line is not going to scale.


After many projects spent thinking about this issue, I came to 
the conclusion that this problem would be solved by being able to 
specify, on the command line, the name of the module currently 
being compiled, thus overriding the default behaviour of 
inferring it from the basename of the file.


As a side-problem related to this one, having to repeat the 
"absolute" module path in all module declarations/imports is 
tedious. What if I change the name of the package? Then I might 
have to change hundreds of module declarations from "module 
oldname.algo;" to "module newname.algo;".


Clearly, something must be wrong here (and I hope it's me!).
I'd be very happy if someone showed me how to solve this problem; 
it has bothered me for ages.






Re: inout, delegates, and visitor functions.

2015-10-24 Thread Sebastien Alaiwan via Digitalmars-d-learn

Hi ponce,
Thanks for your suggestion.
I think I may have found the beginning of a solution:

class E
{
  import std.traits;

  void apply(this F, U)(void delegate(U e) f)
    if(is(Unqual!U == E))
  {
    f(this);
  }

  int val;
}

int main()
{
  void setToZero(E e)
  {
    e.val = 0;
  }

  void printValue(const E e)
  {
    import std.stdio;
    writefln("Value: %s", e.val);
  }

  auto obj = new E;

  obj.apply(&setToZero);
  obj.apply(&printValue);

  const objConst = new E;
  //objConst.apply(&setToZero); // rightly rejected: would mutate a const object
  objConst.apply(&printValue);

  return 0;
}


Basically, I avoid the 'const'/'inout' attribute on the 'apply' 
function by using a 'this F' template argument.
Then, I need a second template argument 'U'; otherwise, I can't 
call 'printValue' on a non-const E instance.





inout, delegates, and visitor functions.

2015-10-24 Thread Sebastien Alaiwan via Digitalmars-d-learn

Hi all,

I'm trying to get the following code to work.
(This code is a simplified version of some algebraic type).
Is it possible to only declare one version of the 'apply' 
function?

Or should I declare the const version and the non-const version?

I tried using "inout", but I got the following error:

test.d(28): Error: inout method test.E.apply is not callable 
using a mutable object



class E
{
  void apply(void delegate(inout(E) e) f) inout
  {
    f(this);
  }

  int val;
}

void m()
{
  void setToZero(E e)
  {
    e.val = 0;
  }

  void printValue(const E e)
  {
    import std.stdio;
    writefln("Value: %s", e.val);
  }

  auto obj = new E;

  obj.apply(&setToZero);   // triggers the error quoted above
  obj.apply(&printValue);
}

Thanks!



Re: D : dmd vs gdc : which one to choose?

2015-02-19 Thread Sebastien Alaiwan via Digitalmars-d-learn
On Thursday, 19 February 2015 at 08:46:11 UTC, Mayuresh Kathe 
wrote:

Should I choose DMD or go with GDC?


I work with projects whose code is half written in C, half 
written in D. I use GNU make to build them. I found out that 
using GDC was a much better choice for several reasons:


- project portability 1: under Windows, dmd generates OMF object 
files that can't be linked by the gcc linker, while gdc generates 
COFF object files. Which means:

   - I can use the same Makefile regardless of the target OS.
   - I can link mingw-compiled C code with D code.
   - I avoid the struggle of finding OMF versions of SDL.lib, 
advapi32.lib, etc.


- project portability 2: stupid detail, but the weird dmd way of 
specifying the output file on the command line (dmd -ofmyfile.o) 
defeats the heuristics of MSYS2 path conversion. That's a 
dealbreaker for me.
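For reference, the two invocation styles being compared look like this (file names are made up). MSYS2 rewrites arguments that look like Unix paths into Windows paths, and dmd's output file name fused into a single -of argument is not recognized as a path by that heuristic:

```
# gdc: conventional, space-separated output flag
gdc -c mymodule.d -o mymodule.o

# dmd: output file fused into a single -of argument
dmd -c mymodule.d -ofmymodule.o
```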


- when I'm running Debian/Ubuntu, the simple ability to natively 
run apt-get install gdc to install/upgrade is very practical.


As dmd's compilation speed is blazingly fast, it remains a cool 
way of writing automation scripts (#!/usr/bin/env rdmd), much 
better, in my opinion, than Bash or even Python.
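To illustrate that last point, here is a small stand-alone script (the file name and task are made up for the example); once marked executable, it runs directly via rdmd, which compiles and caches it on the fly:

```d
#!/usr/bin/env rdmd
// count-lines.d (hypothetical example): print the line count of each file argument.
import std.stdio;
import std.file : readText;
import std.string : splitLines;

void main(string[] args)
{
    if (args.length < 2)
    {
        stderr.writeln("usage: ", args[0], " FILE...");
        return;
    }

    foreach (path; args[1 .. $])
        writefln("%s: %s lines", path, readText(path).splitLines.length);
}
```

Run it as ./count-lines.d somefile.d; the first invocation pays the (small) compile cost, subsequent ones reuse the cached binary.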