Re: From the D Blog: A Pattern for Head-mutable Structures

2020-07-01 Thread Johannes Pfau via Digitalmars-d-announce
On Fri, 26 Jun 2020 08:36:06 +, Mike Parker wrote:

> I suspect they track HTTP referrers and red flag multiple hits to the
> same link from the same referrer. However they do it, I would expect
> linking directly to search results is something they account for.

Can't we just set Referrer-Policy: no-referrer in the web interface? 
Mailing list and newsgroup shouldn't be affected anyway.
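
For reference, a sketch of what that could look like (whether the forum
software exposes a hook for extra response headers is an assumption):

```html
<!-- Sent as an HTTP response header by the web interface:
       Referrer-Policy: no-referrer
     or, if only the page template can be changed, the
     equivalent meta tag in the page head: -->
<meta name="referrer" content="no-referrer">
```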



-- 
Johannes


Re: Rationale for accepting DIP 1028 as is

2020-05-28 Thread Johannes Pfau via Digitalmars-d-announce
On Thu, 28 May 2020 12:28:16 +, Sebastiaan Koppe wrote:

> On Thursday, 28 May 2020 at 09:21:09 UTC, Jonathan M Davis wrote:
>> He did unfortunately manage to convince Atila, so the DIP has been
>> accepted, but based on the discussions, I think that you may be the
>> only person I've seen say anything positive about the DIP treating
>> extern(C) functions as @safe.
>>
>> - Jonathan M Davis
> 
> I think Walter had to make a tough call with many tradeoffs. The
> defining feature of engineering I would say.
> 
> Is he wrong? Maybe, I don't know. The obvious path is far from always
> being a winner.
> 
> If it does come back to haunt him, he can always add a DIP to make
> extern(!D) @system by default. It won't invalidate any work.

This would be another round of massively breaking user code. And this is 
going to be exactly the argument that will be used to dismiss any DIP 
trying to change the defaults later on.


-- 
Johannes


Re: Rationale for accepting DIP 1028 as is

2020-05-28 Thread Johannes Pfau via Digitalmars-d-announce
On Thu, 28 May 2020 10:50:44 +0200, Daniel Kozak wrote:

> On Thu, May 28, 2020 at 4:56 AM Jonathan M Davis via
> Digitalmars-d-announce  wrote:
>>
>> As far as I can tell, Walter understands the issues but fundamentally
>> disagrees with pretty much everyone else on the issue.
> 
> I do not think so, the issue is, that there could be more people who
> agree with Walter (like me),
> but because we agree we do not participate.

You cannot really assume any opinion for people who did not participate, 
unless you can actually demonstrate a bias. I did not participate either, 
and I do not agree with Walter. So now we can say the opinions of those 
who did not participate in the discussion are split 50:50 ;-)

We could assume there's a slight bias: those agreeing with Walter may not 
respond, because they don't have to actively convince anyone now that the 
DIP has been accepted. But given how much negative feedback there is, it's 
also likely people would voice their opinion to support the decision. 
Really, the best we can assume is that the opinions of those not 
participating are split in the same way as those of the participants. The 
strawpoll posted recently suggests that as well:
https://www.strawpoll.me/20184671/r

-- 
Johannes


Re: DMD release compiler flags when building with GDC

2019-11-10 Thread Johannes Pfau via Digitalmars-d-learn
On Sat, 09 Nov 2019 20:43:20 +, Per Nordlöw wrote:

> I've noticed that the make flag ENABLE_LTO=1 fails as
> 
>  Error: unrecognized switch '-flto=full'
> 
> when building dmd with GDC 9.
> 
> Does gdc-9 support lto? If so what flags should I use?
> 
> If not what are the preferred DFLAGS when building dmd with gdc?


I think -flto is the proper flag for GCC/GDC. I don't know if LTO is 
working though. A long time ago there were some bugs, but maybe that's 
been fixed. You probably just have to try and see ;-)
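
If it does work, enabling GCC-style LTO would look roughly like this (a
sketch; `-flto=full` is an LDC/LLVM spelling, not a GCC one, and the make
variable names for the dmd build are an assumption):

```shell
# Plain -flto is the GCC/GDC flag; both compile and link need it:
gdc -O2 -flto -c app.d -o app.o
gdc -O2 -flto app.o -o app

# For the dmd build, overriding the flags might look like:
make -f posix.mak HOST_DMD=gdc DFLAGS="-O2 -flto"
```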


-- 
Johannes


Re: Building GDC with auto-generated header files

2019-07-30 Thread Johannes Pfau via Digitalmars-d-learn
On Tue, 30 Jul 2019 15:19:44 +1200, rikki cattermole wrote:

> On 30/07/2019 4:11 AM, Eduard Staniloiu wrote:
>> Cheers, everybody
>> 
>> I'm working on this as part of my GSoC project [0].
>> 
>> I'm working on building gdc with the auto-generated `frontend.h` [1],
>> but I'm having some issues
>> 
>> There are functions in dmd that don't have an `extern (C)` or `extern
>> (C++)` but they are used by gdc (are exposed in `.h` files)
>> 
>> An example of such a function is `checkNonAssignmentArrayOp`[2] from
>> `dmd/arrayop.d`, which can be found in `gcc/d/dmd/expression.h` [3]
> 
> It may have previously been extern(C) or its a gdc specific patch.
> Either way PR please.

Actually the code at https://github.com/gcc-mirror/gcc/blob/master/gcc/d/dmd
is still the C++ frontend. The DMD frontend in upstream master
(https://github.com/dlang/dmd/blob/master/) and in GCC master are very 
different versions, so mismatches are expected.

The latest DDMD GDC is here:
https://github.com/gcc-mirror/gcc/commits/ibuclaw/gdc
However, it's still not a good idea to mix and match files
from DMD upstream master and that GDC branch, as they will not be 100% in 
sync. It's best to simply use only files from the gcc/d repo, as that's 
what's used when compiling GDC.

You could also have a look at the gcc/d/dmd/MERGE file, which will tell 
you what upstream DMD commit has been used in the respective GDC tree.
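
For example (a sketch, assuming a GCC checkout in ./gcc):

```shell
# Print the upstream DMD commit this GDC tree was merged from
cat gcc/d/dmd/MERGE
```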

-- 
Johannes


Re: Release D 2.087.0

2019-07-15 Thread Johannes Pfau via Digitalmars-d-announce
On Mon, 15 Jul 2019 20:39:14 +, David Nadlinger wrote:

> On Monday, 15 July 2019 at 20:27:16 UTC, Johannes Pfau wrote:
>> I guess this should be documented somewhere then.
> 
> See druntime/CONTRIBUTING.md:
> 
> ```
> In general, only modules in the 'core' package should be made public.
> The single exception is the 'object' module which is not in any package.
> 
> The convention is to put private modules in the 'rt' or 'gc' packages
> depending on what they do and only put public modules in 'core'.
> 
> Also, always avoid importing private modules in public modules. […]
> ```
> 

Well, this just opens the discussion on private vs. public modules again. 
The new array hooks are private as well; according to the definition 
above, they would have to be in rt. And the core.internal.* modules 
certainly aren't public.

I don't see how "should be made public" can be interpreted as "should be 
installed", especially considering that templates need their source code 
installed (core.internal), which is completely orthogonal to which 
functions should be private (core.internal) or public to users of 
druntime.

However, I'll open a PR to clarify that paragraph.



> This split has been in place since back in the D1/Tango days.

Sure, the core vs rt split did. But core.internal did not exist in D1.

-- 
Johannes


Re: Release D 2.087.0

2019-07-15 Thread Johannes Pfau via Digitalmars-d-announce
On Mon, 15 Jul 2019 19:52:57 +, David Nadlinger wrote:

> On Monday, 15 July 2019 at 11:33:44 UTC, Mike Franklin wrote:
>>  My understanding is the `rt` is the language implementation
>> and `core` is the low level library for users.
> 
> This understanding would be mistaken. We haven't been shipping `rt` on
> the import path for a long time. `core.internal` exists precisely to
> fill this role, i.e. code that needs to be around at druntime import
> time, but isn't supposed to be accessed directly by users.
> 
>   — David

I guess this should be documented somewhere then. GDC has always shipped 
rt, still does, and has never had any problem with that.

-- 
Johannes


Re: Release D 2.087.0

2019-07-15 Thread Johannes Pfau via Digitalmars-d-announce
On Mon, 15 Jul 2019 20:14:46 +, David Nadlinger wrote:

> On Monday, 15 July 2019 at 14:00:23 UTC, Mike Franklin wrote:
>> I'm sorry it broke digger, but digger is not how we typically build
>> DMD, druntime, and Phobos.
> 
> Either way, there is a simple resolution here: Put new template code or
> other artefacts that are actually used via import in core.* (e.g.
> core.internal.*). This also provides a natural boundary between legacy
> code and the new runtime interface. If more and more code gets
> template-ised, rt.* will slowly wither away, but there is nothing wrong
> with that. At some point, it will just cease to exist naturally.

Well, I guess if we all agree that rt. is basically deprecated, this may 
be a good way to move forward.


-- 
Johannes


Re: Release D 2.087.0

2019-07-15 Thread Johannes Pfau via Digitalmars-d-announce
On Mon, 15 Jul 2019 17:59:25 +, Seb wrote:

> On Monday, 15 July 2019 at 14:00:23 UTC, Mike Franklin wrote:
>> On Monday, 15 July 2019 at 13:00:08 UTC, Vladimir Panteleev wrote:
>>
>>>> We are trying to implement many of those `extern(C)` runtime hooks as
>>>> templates.  Those templates need to be implicitly imported through
>>>> object.d.  That means code that was in `rt` is converted to a
>>>> template, and then moved to object.d. However, as we do more and more
>>>> of them object.d becomes unwieldy.
>>>>
>>>> I took the initiative to prevent object.d from turning into more of
>>>> a monstrosity than it already is, and moved those runtime templates
>>>> (which used to reside in `rt`) back into `rt`.
>>>
>>> This is not a problem, and not at all related to the issue we're
>>> discussing. The problem is that you chose to move them into `rt`
>>> instead of somewhere under `core`, which would respect existing
>>> conventions and avoid breakages like the ones we've seen reported in
>>> this thread.
>>
>> It is related.  If I follow your suggestion to move these
>> implementations to `core.internal` and continue with the objective of
>> converting all runtime hooks to templates, the vast majority of `rt`
>> will end up being moved to `core.internal`.  Is that what you're
>> suggesting?
>>
>> `rt` is the language implementation.  `core.internal` contains the
>> utilities used internally by druntime and "privately" imported by
>> Phobos.  Following that established convention, I made the right
>> decision.
>>
>> I'm sorry it broke digger, but digger is not how we typically build
>> DMD, druntime, and Phobos.
>>
>> Mike
> 
> The point is that we don't ship the sources of rt to the user. That's
> the separation. With templates sources must be made available to the
> user, s.t. the compiler can instantiate them. However, rt doesn't get
> shipped to the user as it is compiled only.
> 

But why is that? What's the benefit here? And do we stick to this 
convention forever, only for legacy reasons?
We have always shipped rt in GDC, by the way, and nobody ever complained.


If we decide to move that code to core.internal, I'm with Mike that we 
should simply move all array code out of rt. In the long term, we may 
even end up moving everything out of rt: modern D code is template heavy, 
template code needs the sources available, and inlining needs the sources 
as well. The more we get rid of TypeInfo and modernize the compiler/
runtime interface, the more this will become an issue.

And duplicating extern(C) declarations, syncing them manually, and so on 
is a safety liability and a maintenance nightmare (see my other post). So 
in no way should we start adding more such functions interfacing rt to 
core.internal.


-- 
Johannes


Re: Release D 2.087.0

2019-07-15 Thread Johannes Pfau via Digitalmars-d-announce
On Mon, 15 Jul 2019 14:00:23 +, Mike Franklin wrote:

> On Monday, 15 July 2019 at 13:00:08 UTC, Vladimir Panteleev wrote:
> 
>>> We are trying to implement many of those `extern(C)` runtime hooks as
>>> templates.  Those templates need to be implicitly imported through
>>> object.d.  That means code that was in `rt` is converted to a
>>> template, and then moved to object.d. However, as we do more and more
>>> of them object.d becomes unwieldy.
>>>
>>> I took the initiative to prevent object.d from turning into more of
>>> a monstrosity than it already is, and moved those runtime templates
>>> (which used to reside in `rt`) back into `rt`.
>>
>> This is not a problem, and not at all related to the issue we're
>> discussing. The problem is that you chose to move them into `rt`
>> instead of somewhere under `core`, which would respect existing
>> conventions and avoid breakages like the ones we've seen reported in
>> this thread.
> 
> It is related.  If I follow your suggestion to move these
> implementations to `core.internal` and continue with the objective of
> converting all runtime hooks to templates, the vast majority of `rt`
> will end up being moved to `core.internal`.  Is that what you're
> suggesting?
> 
> `rt` is the language implementation.  `core.internal` contains the
> utilities used internally by druntime and "privately" imported by
> Phobos.  Following that established convention, I made the right
> decision.
> 
> I'm sorry it broke digger, but digger is not how we typically build DMD,
> druntime, and Phobos.
> 
> Mike

I agree here: rt is code deeply tied to the language / compiler. 
core.internal is largely code that is useful as standalone modules 
(abort, convert, parts of dassert, lifetime, traits, string).

However, the structure is not really clear: rt.util (older than 
core.internal) should probably rather be part of core.internal and some 
code in core.internal (arrayop, assert) should be in rt.

Either way, dictating a code structure on druntime only because of build 
system aspects (these files are installed, these are not) seems to be a 
very bad idea. The code should be structured to minimize cross-module 
dependencies and to separate compiler-specific from generic code.

In addition, the build system shipped as part of druntime is the 
authoritative way to build the project. Even though digger is an 
important tool, we can't really compromise on code quality in druntime 
only to stay compatible with build systems that use undocumented 
internals of the runtime build process.

-- 
Johannes


Re: Release D 2.087.0

2019-07-15 Thread Johannes Pfau via Digitalmars-d-announce
On Mon, 15 Jul 2019 12:27:22 +, Vladimir Panteleev wrote:

> On Monday, 15 July 2019 at 12:14:16 UTC, Mike Franklin wrote:
>> Many of the implementations in `rt/array` require importing or
>> referencing other implementations in `rt` (e.g. `rt.lifetime`).
>>  If they were moved to `core.internal` they would require
>> importing `rt` or peeking into `rt` with various hacks, which exactly
>> what you've said should not be done.
> 
> This isn't exactly true. The restriction is that core should not
> *import* rt. Have a look at all the extern(C) definitions in Druntime -
> using extern(C) functions to communicate between the compiler and rt, as
> well as core and rt, is not a "hack", but an established mechanism to
> invoke the low-level implementations in Druntime.


Grepping for extern in core.internal yields one result outside of 
parseoptions.d; counting parseoptions.d, six results.

I wonder how you can advertise this as a good idea: you have to manually 
keep declarations in sync, you have to be very careful to get the 
attributes right, module constructor evaluation order guarantees don't 
hold, there is no mangling (so no type safety), you pollute the C 
namespace, and you get no inlining and no templates.
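
To illustrate the "no mangling, no type safety" point, a hypothetical
sketch (the hook name `_d_example_hook` is made up):

```d
// In rt -- the actual implementation:
extern(C) void _d_example_hook(size_t len, int* ptr) { /* ... */ }

// In core -- a manually maintained declaration that drifted out of sync.
// The parameters are swapped, but extern(C) mangles by name only, so this
// still compiles and links, and silently passes garbage at run time:
extern(C) void _d_example_hook(int* ptr, size_t len);
```

With D mangling, the argument types are part of the symbol name, so the
same mismatch would fail at link time instead.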

This is an established workaround at best; in no way is it a good solution.

-- 
Johannes


Re: Release D 2.087.0

2019-07-15 Thread Johannes Pfau via Digitalmars-d-announce
On Mon, 15 Jul 2019 12:40:50 +, Vladimir Panteleev wrote:

> On Monday, 15 July 2019 at 12:36:14 UTC, Mike Franklin wrote:
>> Many of the implementations in `rt/array` are templates, so the entire
>> implementation should be available through object.d, not just
>> declarations.
> 
> The amount of templated code is still finite, otherwise you would have
> needed to include all of rt into the set of files to be made importable.

That is due to the limited time available in GSoC. Right now, the 
implementations simply call typeid and forward to the TypeInfo-based 
implementations. But all of these implementations are ultimately meant to 
be fully templated, and then you'll need access to lots of rt functions.

So why should we now move the functions to core, only to move them back 
to rt later?

> 
>> In `core.internal`, I see utilities that appear to be intended for use
>> only within runtime, or "privately" imported by Phobos.
>>  I do not see implementations for fundamental language features
>> as can be found in `rt`.  The code in `rt/array` implementations for
>> fundamental language features, not utilities to be used privately with
>> druntime.
> 
> Please have a closer look:
> 
> - core.internal.hash contains the implementation of hashing routines
> used for associative arrays.
> - core.internal.arrayop contains the implementation of array vector
> operations. This one doesn't seem to be too far from your work in
> question.
> 

Neither has any real dependencies. Maybe all the array implementation 
stuff could be moved to core, but until that happens, moving only parts 
of the implementation seems like a bad idea. Especially considering this:

>> "rt can import core, but core can't import rt." makes it pretty clear
>> to me.


It's probably a mistake that we even have both rt and core.internal. 
core.internal seems to be a much more recent addition (2013), probably 
inspired by std.internal. It's no wonder there's duplication and no 
clearly defined scope for the packages.

And why would anyone think it's a good idea not to install the rt 
headers? What do you gain from this, except a few KB of saved disk space?

-- 
Johannes


Re: Release D 2.087.0

2019-07-07 Thread Johannes Pfau via Digitalmars-d-announce
On Sun, 07 Jul 2019 08:06:57 +, uranuz wrote:

> After updating compiler to 2.087 I got a lot of deprecation warnings
> linked to std.json module. I have found all of the usages of deprecated
> symbols in my project and changed them to the new ones. All these
> warnings are about changing JSON_TYPE to JSONType, JSON_TYPE.STRING to
> JSONType.string, and so on.
> But after eliminating deprecated symbols from my project I still have
> deprecation warnings. It seems these symbols are being accessed from
> Phobos, because I am pretty sure that I don't have other external
> dependencies that use std.json. The problem is that because of these
> `spamming` messages I can miss `real` deprecation warnings. Is there
> some way to `fix` it? Here is some part of the compiler output (all of it
> is too long): /usr/include/dmd/phobos/std/conv.d(987,34): Deprecation: enum
> member `std.json.JSONType.INTEGER` is deprecated - Use .integer
> /usr/include/dmd/phobos/std/conv.d(987,34): Deprecation: enum member


I think phobos does not explicitly use these deprecated symbols, but the 
reflection code in format triggers the deprecation messages:

import std.json, std.stdio;

void main()
{
    JSONType c;
    writefln("%s", c);
}

I'm not sure if this can be solved, maybe deprecated members can be 
explicitly ignored in format, but maybe the reflection code itself is 
triggering the deprecation (which would then probably be a DMD bug).

-- 
Johannes


Re: DConf 2019 Livestream

2019-05-09 Thread Johannes Pfau via Digitalmars-d-announce
On Thu, 09 May 2019 09:27:13 -0700, H. S. Teoh wrote:

> On Thu, May 09, 2019 at 01:54:31AM -0400, Nick Sabalausky (Abscissa) via
> Digitalmars-d-announce wrote:
> [...]
>> This sort of stuff happens literally EVERY year! At this point, you can
>> pretty much guarantee that for any Dconf, Day 1's keynote doesn't get
>> professionally livestreamed, if it's recorded at all. At the very LEAST,
>> it makes us look bad.
>> 
>> Is there SOMETHING we can do about this moving forward? Maybe use
>> Dconf/Dfoundation funds to hire a proven video crew not reliant on
>> venue, or something...?
> 
> +1. This repeated unreliability of streaming/recording is embarrassing.
> We should just use our own video crew next DConf. *After* testing
> everything on-venue *before* the actual start of the conference, so that
> any issues are noticed and addressed beforehand.
> 
> 
> T

I guess we could contact the c3voc team. They organize streaming and VOD 
for various conferences: most notably the Chaos Communication Congress, 
but also All Systems Go! (the systemd conference), LinuxTag, and various 
smaller events. Apart from publishing (postprocessed) recordings and 
handling livestreams, they also have a reLive feature which lets you 
watch the unprocessed livestream recordings immediately after streaming 
ends. And they upload videos to YouTube as well:
https://www.youtube.com/user/mediacccde/videos?app=desktop

https://c3voc.de/
https://c3voc.de/eventkalender
https://streaming.media.ccc.de/
https://media.ccc.de/a

Example: https://media.ccc.de/v/35c3-9783-the_mars_rover_on-board_computer
https://www.youtube.com/results?search_query=the+mars+rover+on-board+computer

-- 
Johannes


Re: Where is GDC being developed?

2019-03-21 Thread Johannes Pfau via Digitalmars-d-learn

On Thursday, 21 March 2019 at 08:19:56 UTC, Per Nordlöw wrote:

At

https://github.com/D-Programming-GDC/GDC/commits/master

there's the heading

"This repository has been archived by the owner. It is now 
read-only."


Where will the development of GDC continue?


We use https://github.com/D-Programming-GDC/gcc for CI, but 
commits go to GCC SVN first, so GCC SVN or the snapshot 
tarballs are the recommended way to get the latest GDC.


There is one exception: When GCC development is in feature 
freeze, we might provide newer DMD frontends in a gdc-next branch 
at https://github.com/D-Programming-GDC/gcc . However, so far we 
have not set up this branch; this will probably happen in the 
next two weeks. Maybe I'll also provide DDMD-FE backports for
GCC9 in that repo, but I'm not sure yet. The latest DDMD-FE is 
somewhere in the archived repos, but it hasn't been updated for 
some time.


Re: GDC with D frontend 2.081.2

2018-08-29 Thread Johannes Pfau via Digitalmars-d-announce
On Tue, 28 Aug 2018 10:19:46 +0200, Daniel Kozak wrote:

> On Tue, Aug 28, 2018 at 8:40 AM Eugene Wissner via
> Digitalmars-d-announce <
> digitalmars-d-announce@puremagic.com> wrote:
> 
>> On Tuesday, 28 August 2018 at 06:18:28 UTC, Daniel Kozak wrote:
>> > On Mon, Aug 27, 2018 at 7:55 PM Eugene Wissner via
>> > Digitalmars-d-announce < digitalmars-d-announce@puremagic.com>
>> > wrote:
>> >
>> >> On Monday, 27 August 2018 at 17:23:04 UTC, Arun Chandrasekaran
>> >> wrote:
>> >> > 1. It would be good to print the DMD frontend version with `gdc
>> >> > --version`. It is helpful in reporting bugs. LDC does this.
>> >> >
>> >> Unfortunately it doesn't seem to be possible. GCC doesn't allow to
>> >> change --version output:
>> >> https://bugzilla.gdcproject.org/show_bug.cgi?id=89
>> >>
>> >>
>> > This is not true, right now on archlinux if you type gdc --version it
>> > will display d frontend version
>> > https://bugzilla.gdcproject.org/show_bug.cgi?id=89#c1
>>
>> Is it set with --with-pkgversion? The same information will be
>> displayed for gcc and g++ then. It is not always desirable if you ship
>> the compiler as a whole (with libtool etc).
>>
>>
> Yes and no. It is set with  --with-pkgversion but it is only for gdc.

But this only works because you build gdc and gcc separately. I.e. for 
gdc, you build gcc+gdc, then throw away everything but the gdc-related 
executables. Then you compile gcc with a different --with-pkgversion for 
the gcc package.

However, this has the problem that your gcc executable then does not 
properly forward .d files to gdc, as that build did not have 
--enable-languages=d. The supported way is to build all GCC-based 
compilers at once, but then you can't use --with-pkgversion, as it will 
apply to all compilers.
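
As a sketch, the combined build would be configured along these lines
(the version string is hypothetical):

```shell
# One combined build: .d files are forwarded to gdc correctly, but the
# pkgversion string now tags gcc, g++ and gdc alike.
./configure --enable-languages=c,c++,d \
            --with-pkgversion="Example 8.2.0"
make && make install
```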

-- 
Johannes


Re: Is there any good reason why C++ namespaces are "closed" in D?

2018-08-02 Thread Johannes Pfau via Digitalmars-d
On Wed, 01 Aug 2018 22:13:05 -0700, Walter Bright wrote:

> On 8/1/2018 12:01 PM, Manu wrote:
>> You've never justified the design complexity and the baggage it
>> carries.
> Don't confuse you not agreeing with it with I never justified it.
> 
> And please don't confuse me not listening to you with me not agreeing
> with you.
> 
> It *is* possible for reasonable people to disagree, especially when any
> solution will involve many tradeoffs and compromises.

In your most recent posts you provided some rationale for this, but 
nowhere near as much as would have been necessary if anybody else had 
proposed this feature and had to write a DIP for it. Introducing C++ 
namespace scopes added quite some complexity to the language, and so far 
you seem to be the only proponent, whereas there are many opponents. In 
the DIP process, such a change would have required a solid justification, 
examples, comparisons to alternative solutions, etc. Such a detailed 
rationale has never been given for this feature.

-- 
Johannes


Re: Is there any good reason why C++ namespaces are "closed" in D?

2018-08-02 Thread Johannes Pfau via Digitalmars-d
On Wed, 01 Aug 2018 16:04:01 -0700, Walter Bright wrote:

> 
> Now, with D:
> 
>  extern (C++, ab) void foo(long);
>  foo(0);// works!
>  ---
>  extern (C++, ab) void foo(long);
>  extern (C++, ab) void foo(int);   // error!
>  ---
>  extern (C++, ab) void foo(long);
>  extern (C++, cd) void foo(int);
>  foo(0);// error!
> 
> I juxtaposed the lines so it's obvious. It's not so obvious when there's
> a thousand lines of code between each of those lines. It's even worse
> when foo(long) sends a birthday card to your daughter, and foo(int)
> launches nuclear missiles.

You probably didn't think this through completely: earlier you suggested 
using aliases to avoid explicitly specifying the C++ scopes. Then you 
suggested using mixins or translator tools to automate alias generation 
and avoid manually writing that boilerplate code. But if you do that:

-
extern (C++, ab) void foo(long);
extern (C++, cd) void foo(int);
alias foo = ab.foo;
alias foo = cd.foo;
-

You've now got exactly the same problem with hijacking...

So explicit C++ namespace scoping is only a benefit if you do not use 
this alias trick. But then you face all the other problems mentioned 
earlier...

As a result, everybody now has to use the aliasing trick, the hijacking 
problem still exists, and we have to write lots of useless boilerplate.

-- 
Johannes


Re: Is there any good reason why C++ namespaces are "closed" in D?

2018-08-01 Thread Johannes Pfau via Digitalmars-d
On Wed, 01 Aug 2018 16:04:01 -0700, Walter Bright wrote:

>> Certainly, it does come across like you didn't trust the D module
>> system to do its job for some reason.
> 
> Reorganizing the code into modules means potentially forcing users to
> split code from one C++ file into multiple D files. How's that really
> going to work if you have a translation tool? One of the aspects of Java
> I didn't care for was forcing each class into its own file.

Why would that not work with a translation tool? Just establish a fixed 
C++ namespace / D module mapping; then, whenever processing something in 
a C++ namespace, append the declaration to the corresponding file.

> 
> So while Manu is clearly happy with cutting up a C++ file into multiple
> D files,
> I doubt that is universal. His proposal would pretty much require that
> for anyone trying to work with C++ namespaces who ever has a name
> collision/hijack or wants to make the code robust against
> collision/hijacking.
> 
> An example of silent hijacking:
> 
> extern (C++, "ab") void foo(long); // original code ... lots of code
> ...
> extern (C++, "cd") void foo(int); // added later by intern, should
> have been
>   // placed in another module
> ... a thousand lines later ...
> foo(0); // OOPS! now calling cd.foo() rather than ab.foo(), D sux
> 
> You might say "nobody would ever write code like that." But that's like
> the C folks saying real C programmers won't write:
> 
>  int a[10];
>  for (int i = 0; i <= 10; ++i)
> ...a[i]...
> 
> But we both know they do just often enough for it to be a disaster.
> 
> Now, with D:
> 
>  extern (C++, ab) void foo(long);
>  foo(0);// works!
>  ---
>  extern (C++, ab) void foo(long);
>  extern (C++, ab) void foo(int);   // error!
>  ---
>  extern (C++, ab) void foo(long);
>  extern (C++, cd) void foo(int);
>  foo(0);// error!
> 
> I juxtaposed the lines so it's obvious. It's not so obvious when there's
> a thousand lines of code between each of those lines. It's even worse
> when foo(long) sends a birthday card to your daughter, and foo(int)
> launches nuclear missiles.

If you insist on using a 'translator' tool anyway, it's trivial to detect 
this problem automatically in such a tool.  

-- 
Johannes


Re: Is there any good reason why C++ namespaces are "closed" in D?

2018-08-01 Thread Johannes Pfau via Digitalmars-d
On Wed, 01 Aug 2018 16:31:57 -0700, Walter Bright wrote:

> On 7/31/2018 1:47 AM, Atila Neves wrote:
>> The only good way (I don't think the mixin template and struct
>> solutions count)
>> to link to any of that today would be to have one enormous D file with
>> _everything_ in it, including nested namespaces.
> 
> Why doesn't it count? The user doesn't need to write that code, the
> translator does. 

I remember a time when people here joked about all the boilerplate you 
have to write when using Java, and that it's only usable with an IDE. Now 
we've got C++ interfacing which requires lots of boilerplate and is only 
usable with an external translator tool...

I guess that would be acceptable if there were a real benefit, but I have 
not seen a single argument for the current behavior in this thread. It's 
great that we can work around the scoping with lots of boilerplate, but 
when is a C++ namespace introducing a scope in D actually useful? Can you 
give an example where this scoping is necessary? So far I have not seen a 
single person happily using that scoping feature.

-- 
Johannes


Re: DMD, Vibe.d, and Dub

2018-07-31 Thread Johannes Pfau via Digitalmars-d
Am Wed, 18 Jul 2018 19:08:32 +0100 schrieb Russel Winder:

> On Wed, 2018-07-18 at 17:45 +0000, Johannes Pfau via Digitalmars-d
> wrote:
>> On Wed, 18 Jul 2018 13:29:00 +0100, Russel Winder wrote:
>> 
>> 
> […]
>> > libssl installed but libssl-dev not. I can't quite see why the linker
>> > ld needs the development files, it just needs the shared objects to
>> > be present.
>> 
>> Debian moved the lib*.so => lib*.so.123version symlinks into the -dev
>> packages some time ago, so now you can't link without -dev packages.
>> Not the smartest move imho
> 
> I think I shall find it hard to discover a reason why you are wrong,
> but clearly the Debian devs in charge managed to.

I actually found a reason why you do not want the .so symlink in normal 
runtime packages: if you have a library libfoo in different versions, 
i.e. 1.0 and 2.0, the libfoo packages for 1.0 and 2.0 do not have any 
conflicting files, so you can in theory install both library versions 
(with the same package name) at the same time. I don't know if Debian 
supports this, but I think on Fedora it's possible to install multiple 
versions of the same package. Having the .so symlink in the non-dev 
package would prevent this usage pattern.
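
A minimal sketch of that layout (hypothetical libfoo; the point is only
that the unversioned symlink is the one path two runtime packages would
both have to own):

```shell
# Runtime packages ship only the versioned files and symlinks,
# so v1 and v2 can coexist without conflicting paths:
mkdir -p /tmp/libfoo-demo && cd /tmp/libfoo-demo
touch libfoo.so.1.0.0 libfoo.so.2.0.0
ln -sf libfoo.so.1.0.0 libfoo.so.1      # from the libfoo1 package
ln -sf libfoo.so.2.0.0 libfoo.so.2      # from the libfoo2 package
# Only the -dev package ships the unversioned name that `ld -lfoo` uses:
ln -sf libfoo.so.2 libfoo.so            # from libfoo-dev
readlink libfoo.so                      # prints: libfoo.so.2
```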

-- 
Johannes


Re: New to GDC on ARM 32-bit Ubuntu

2018-07-19 Thread Johannes Pfau via Digitalmars-d-learn
On Tue, 17 Jul 2018 04:51:04 +, Cecil Ward wrote:

> I am getting an error when I try and compile anything with the GDC
> compiler which is coming up associated with source code within a D
> include file which is not one of mine
> 
> I am using a Raspberry Pi with Ubuntu 16.04 and have just done an
> "apt-get install gdc". Using ldc works fine.
> 
> The error is :
> root@raspberrypi:~#   gdc mac_hex.d -O3 -frelease
> /usr/include/d/core/stdc/config.d:58:3: error: static if conditional
> cannot be at global scope
> static if( (void*).sizeof > int.sizeof )
> ^

These files in /usr/include/d probably belong to the ldc package and 
therefore are not compatible with gdc. gdc automatically picks up files 
in /usr/include/d so this folder should not contain compiler-specific 
files.

I think this has been fixed in more recent ubuntu releases. For now you 
could uninstall ldc to see if this is really the problem.
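
To check that assumption, you could ask dpkg which package owns the file
(a sketch; Debian/Ubuntu only):

```shell
# Which installed package provides the conflicting header?
dpkg -S /usr/include/d/core/stdc/config.d
# An ldc-related package name here would confirm the conflict.
```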

-- 
Johannes


Re: DMD, Vibe.d, and Dub

2018-07-18 Thread Johannes Pfau via Digitalmars-d
Am Wed, 18 Jul 2018 13:29:00 +0100 schrieb Russel Winder:

> On Wed, 2018-07-18 at 11:41 +, Seb via Digitalmars-d wrote:
>> On Wednesday, 18 July 2018 at 11:35:05 UTC, Russel Winder wrote:
>> > On Tue, 2018-07-17 at 21:46 +, Radu via Digitalmars-d wrote:
>> > > On Tuesday, 17 July 2018 at 18:55:07 UTC, Russel Winder wrote:
>> > > > [...]
>> > > 
>> > > Missing openssl libs? Try installing openssl-dev package.
>> > 
>> > The Debian Sid openssl package is definitely installed. There doesn't
>> > seem to be a separate openssl-dev package.
>> 
>> It's called libssl-dev
> 
> libssl installed but libssl-dev not. I can't quite see why the linker ld
> needs the development files, it just needs the shared objects to be
> present.

Debian moved the lib*.so => lib*.so.123version symlinks into the -dev 
packages some time ago, so now you can't link without -dev packages. Not 
the smartest move imho

-- 
Johannes


Re: Copy Constructor DIP

2018-07-12 Thread Johannes Pfau via Digitalmars-d
Am Thu, 12 Jul 2018 17:32:06 + schrieb Johannes Pfau:

> Am Thu, 12 Jul 2018 09:48:37 -0400 schrieb Andrei Alexandrescu:
> 
>>> I agree that the current syntax is lacking. This was Andrei's
>>> proposition and I was initially against it, but he said to put it in
>>> the DIP so that we can discuss it as a community. Maybe this syntax is
>>> better:
>>> 
>>> @this(ref S a another)
>>> 
>>> It looks like the c++ copy constructor but the `@` makes it different
>>> from a constructor, so we're good. What do you think?
>> 
>> We will not add syntax if we can help it.
> 
> We have this(this) for postblits so how about this(ref this a) for copy
> constructors?
> 
> Unfortunately this is currently valid code and compiles: this is treated
> as typeof(this). However, we have already deprecated that, so maybe we
> can reuse the syntax? It should be a quite consistent evolution from
> this(this).
> 
> (Another option is this(ref this A a) which does not conflict with
> existing syntax).

I just read your other replies, Andrei. I guess if we're ever going to 
use the same syntax for implicit conversions, the @implicit syntax is 
indeed consistent and logical. As long as it's only used for copy 
constructors, the name feels 'strange'.

-- 
Johannes


Re: Copy Constructor DIP

2018-07-12 Thread Johannes Pfau via Digitalmars-d
Am Thu, 12 Jul 2018 09:48:37 -0400 schrieb Andrei Alexandrescu:

>> I agree that the current syntax is lacking. This was Andrei's
>> proposition and I was initially against it, but he said to put it in
>> the DIP so that we can discuss it as a community. Maybe this syntax is
>> better:
>> 
>> @this(ref S a another)
>> 
>> It looks like the c++ copy constructor but the `@` makes it different
>> from a constructor, so we're good. What do you think?
> 
> We will not add syntax if we can help it.

We have this(this) for postblits so how about this(ref this a) for copy 
constructors?

Unfortunately this is currently valid code and compiles: this is treated 
as typeof(this). However, we have already deprecated that, so maybe we 
can reuse the syntax? It should be a quite consistent evolution from 
this(this).

(Another option is this(ref this A a) which does not conflict with 
existing syntax).

-- 
Johannes


Re: Why are we not using libbacktrace for backtrace?

2018-06-15 Thread Johannes Pfau via Digitalmars-d
Am Thu, 14 Jun 2018 20:57:05 + schrieb Yuxuan Shui:

> On Thursday, 14 June 2018 at 17:26:50 UTC, Johannes Pfau wrote:
>> Am Thu, 14 Jun 2018 01:19:30 + schrieb Yuxuan Shui:
>>
>>> Just ran into a problem where program will crash during stack trace.
>>> Turns out not only does druntime not support compressed debug info, it
>>> cannot handle it at all.
>>> 
>>> So I was thinking why don't we use a existing and proven library for
>>> this, instead of roll our own?
>>
>> GDC uses libbacktrace since 2013: https://github.com/D-Programming-GDC/
>> GDC/blob/master/libphobos/libdruntime/gcc/backtrace.d
>>
>> I think the main problem for DMD/LDC is that libbacktrace is not an
>> installed library, it's only available while building GCC.
> 
> libbacktrace is a standalone library:
> https://github.com/ianlancetaylor/libbacktrace
> 
> GCC is using it.

It was initially developed for GCC and only available in the GCC tree. 
Ian Lance Taylor is a GCC developer.

However, my point is that libbacktrace does not install as a .so shared 
library. Try to find packages for Debian, RHEL, ... it's just not 
distributed.
As there is a standalone GitHub repo now, the DMD build could probably 
compile the source code into libdruntime like GCC does, but it's not as 
simple as linking a library.

-- 
Johannes


Re: Why are we not using libbacktrace for backtrace?

2018-06-14 Thread Johannes Pfau via Digitalmars-d
Am Thu, 14 Jun 2018 01:19:30 + schrieb Yuxuan Shui:

> Just ran into a problem where program will crash during stack trace.
> Turns out not only does druntime not support compressed debug info, it
> cannot handle it at all.
> 
> So I was thinking why don't we use a existing and proven library for
> this, instead of roll our own?

GDC uses libbacktrace since 2013: https://github.com/D-Programming-GDC/
GDC/blob/master/libphobos/libdruntime/gcc/backtrace.d

I think the main problem for DMD/LDC is that libbacktrace is not an 
installed library, it's only available while building GCC.


-- 
Johannes


Re: Replacing C's memcpy with a D implementation

2018-06-11 Thread Johannes Pfau via Digitalmars-d
Am Mon, 11 Jun 2018 10:54:23 + schrieb Mike Franklin:

> On Monday, 11 June 2018 at 10:38:30 UTC, Mike Franklin wrote:
>> On Monday, 11 June 2018 at 10:07:39 UTC, Walter Bright wrote:
>>
 I think there might also be optimization opportunities using
 templates, metaprogramming, and type introspection, that are not
 currently possible with the current design.
>>>
>>> Just making it a template doesn't automatically enable any of this.
>>
>> I think it does, because I can then generate specific code based on the
>> type information at compile-time.
> 
> Also, before you do any more nay-saying, you might want to revisit this
> talk https://www.youtube.com/watch?v=endKC3fDxqs which demonstrates
> precisely the kind of benefits that can be achieved with these kinds of
> changes to the compiler/runtime interface.
> 
> Mike

I guess for most D runtime hooks, using templates is a good idea to 
enable inlining and further optimizations.

I understand that you actually need to reimplement memcpy, as in your 
microcontroller use case you don't want to have any C runtime. So you'll 
basically have to rewrite the C runtime parts D depends on.

However, I think for memcpy and similar functions you're probably better 
off keeping the C interface. This directly provides the benefit of 
compiler intrinsics/optimizations. And marking memcpy as 
nothrow/pure/system/nogc is simple either way: for the D implementation, 
the compiler will verify this for you; for the C implementation, you have 
to mark the function according to the C implementation. But that's mostly 
trivial.

On a related note, I agree that the compiler sometimes cheats by ignoring 
attributes, especially when calling TypeInfo-related functions, and this 
is a huge problem. Runtime TypeInfo is not descriptive enough to fully 
represent the types, and whenever the compiler casts without properly 
checking first, there's the possibility of a problem.

-- 
Johannes


Re: #dbugfix 18234

2018-06-11 Thread Johannes Pfau via Digitalmars-d
Am Mon, 11 Jun 2018 16:37:05 + schrieb Basile B.:

> Russel Winder, "Shove" and finally myself, have encountered a strange
> linker error with almost always the same symbols related to a template
> instance...
> 
> See:
> 
> -
> https://forum.dlang.org/post/mailman.855.1526549201.29801.digitalmars-d-
l...@puremagic.com
> - https://issues.dlang.org/show_bug.cgi?id=18234 -
> https://issues.dlang.org/show_bug.cgi?id=18971
> 
> It's just a matter of time before a fourth folk come with the same
> error.

Is this a duplicate bug?
https://issues.dlang.org/show_bug.cgi?id=17712

Looks similar.

-- 
Johannes


Re: std.digest can't CTFE?

2018-06-10 Thread Johannes Pfau via Digitalmars-d
Am Fri, 08 Jun 2018 11:46:41 -0700 schrieb Manu:
> 
> I'm already burning about 3x my reasonably allocate-able free time to
> DMD PR's...
> I'd really love if someone else would look at that :)

I'll see if I can allocate some time for that. Should be a mostly trivial 
change.

> I'm not quite sure what you mean though; endian conversion functions are
> still endian conversion functions, and they shouldn't be affected here.

Yes, but the point made in that article is that you can implement 
*Endian<=>native conversions without knowing the native endianness. This 
would immediately make these functions CTFE-able.

> The problem is in the std.digest code where it *calls* endian functions
> (or makes endian assumptions). There need be no reference to endian in
> std.digest... if code is pulling bytes from an int (ie, cast(byte*)) or
> something, just use ubyte[4] and index it instead if uint, etc. I'm
> surprised that digest code would use anything other than byte buffers.
> It may be that there are some optimised version()-ed fast-paths might be
> endian conscious, but the default path has no reason to not work.

That's not how hash algorithms are usually specified. These algorithms 
perform bit rotate operations, additions, multiplications on these 
values*. You could probably implement these on byte[4] values instead, 
but you'll waste time porting the algorithm, benchmarking possible 
performance impacts and it will be more difficult to compare the 
implementation to the reference implementation (think of audits).

So it's not realistic to change this.

* An interesting question here is whether you could actually always 
ignore system endianness and do simple casts by cleverly adjusting all 
constants in the algorithm to fit?
-- 
Johannes


Re: std.digest can't CTFE?

2018-06-08 Thread Johannes Pfau via Digitalmars-d
Am Sat, 02 Jun 2018 06:31:37 + schrieb Atila Neves:

> On Friday, 1 June 2018 at 20:12:23 UTC, Kagamin wrote:
>> On Friday, 1 June 2018 at 10:04:52 UTC, Johannes Pfau wrote:
>>> However you want to call it, the algorithms interpret data as numbers
>>> which means that the binary representation differs based on endianess.
>>> If you want portable results, you can't ignore that fact in the
>>> implementation. So even though the algorithms are not dependent on the
>>> endianess, the representation of the result is. Therefore standards do
>>> usually propose an internal byte order.
>>
>> Huh? The algorithm packs bytes into integers and does it independently
>> of platform. Once integers are formed, the arithmetic operations are
>> independent of endianness. It works this way even in pure javascript,
>> which is not sensitive to endianness.
> 
> It's a common programming misconception that endianness matters much.
> It's one of those that just won't go away, like "GC languages are slow"
> or "C is magically fast". I recommend reading this:
> 
> https://commandcenter.blogspot.com/2012/04/byte-order-fallacy.html
> 
> In short, unless you're a compiler writer or implementing a binary
> protocol endianness only matters if you cast between pointers and
> integers. So... Don't.
> 
> Atila

That's an interesting point. When I said the algorithm depends on the 
system endianness I was indeed always thinking in terms of machine code 
(i.e. if system endianness == data endianness you hopefully do nothing at 
all, otherwise you need some conversion).
But it is indeed true that describing the conversion as mathematical 
shift operations plus indexing leaves handling these differences to the 
compiler. So you can probably say the algorithm doesn't depend on system 
endianness, although a low-level representation of implementations will. 
I guess this is what Kagamin wanted to explain; please excuse me for not 
getting the point.

So in our case, we can obviously use that higher-abstraction-level 
interpretation, and the idiom used in the article indeed works fine in 
CTFE. So somebody (@Manu?) just has to fix the std.bitmanip 
*EndianToNative and nativeTo*Endian functions to use this (probably 
benchmarking performance impacts). Then std.digest should simply start 
working, or should at least be easy to fix for CTFE support.

-- 
Johannes


Re: std.digest can't CTFE?

2018-06-01 Thread Johannes Pfau via Digitalmars-d
Am Fri, 01 Jun 2018 08:50:19 + schrieb Kagamin:

> On Friday, 1 June 2018 at 08:37:33 UTC, Johannes Pfau wrote:
>> I don't know if anything changed in this regard since std.digest was
>> written some time ago. But if you get the std.bitmanip  nativeTo*Endian
>> and *EndianToNative functions to work in CTFE, std.digest should work
>> as well.
> 
> Standard cryptographic algorithms are by design not dependent on
> endianness, rather they set on a specific endianness.

However you want to call it, the algorithms interpret data as numbers, 
which means that the binary representation differs based on endianness.
If you want portable results, you can't ignore that fact in the 
implementation. So even though the algorithms are not dependent on the 
endianness, the representation of the result is. Therefore standards 
usually prescribe an internal byte order.

-- 
Johannes


Re: std.digest can't CTFE?

2018-06-01 Thread Johannes Pfau via Digitalmars-d
Am Thu, 31 May 2018 18:12:35 -0700 schrieb Manu:

> Hashing's not low-level. It would be great if these did CTFE; generating
> compile-time hashes is a thing that would be really useful!
> Right here, I have a string class that carries a hash around with it for
> comparison reasons. Such string literals would prefer to have CT hashes.
> 

As I was the one who wrote that doc comment: for basically all hash 
implementations you'll be casting from an integer type to the raw-bytes 
representation somewhere. As the binary representation needs to be 
portable, you need to be aware of the endianness of the system you're 
running your code on. AFAIR CTFE does (did?) not provide any way to do 
endianness-dependent conversions at all, and there's also no way to know 
the CTFE endianness, so this is a fundamental limitation. (E.g. if you 
have a cross-compiler targeting a system with a different endianness, 
version(BigEndian) will give you the target endianness. But what will 
actually be used in CTFE?)

I don't know if anything changed in this regard since std.digest was 
written some time ago. But if you get the std.bitmanip nativeTo*Endian 
and *EndianToNative functions to work in CTFE, std.digest should work as 
well.

There may be some workaround, as IIRC druntime's core.internal.hash works 
in CTFE? It's either this, or it's buggy in that cross-compilation 
scenario ;-)

-- 
Johannes


Re: Need help with the dmd package on NixOS

2018-05-18 Thread Johannes Pfau via Digitalmars-d

On Friday, 18 May 2018 at 11:28:30 UTC, Mike Franklin wrote:
> On Friday, 18 May 2018 at 10:28:37 UTC, Thomas Mader wrote:
>> On Friday, 11 May 2018 at 04:27:20 UTC, Thomas Mader wrote:
>>> My suspicion about the switch to glibc 2.27 being the problem
>>> was wrong.
>> I did a very timeconsuming bisection and found the problem
>> commit to be the one which bumped binutils to 2.30.
>>
>> Can somebody help me to answer the question from
>> https://sourceware.org/bugzilla/show_bug.cgi?id=23199#c4
>> please.
>> The object is created by the dmd backend but where in the code
>> is binutils used?
>
> I'm not sure I understand. Does binutils need to be used to
> generate an object file? My understanding is that DMD creates
> the object file without the help of binutils.

As far as I know, that's correct. GCC-based compilers emit ASM 
code only and leave assembling of the object files to the 
binutils 'as' assembler. That's probably the reason they assumed 
it's a binutils bug. For DMD, binutils is not involved when 
creating object files. So this is likely a DMD bug.


-- Johannes




Re: Favorite GUI library?

2018-04-26 Thread Johannes Pfau via Digitalmars-d
Am Wed, 25 Apr 2018 22:45:59 -0400 schrieb Nick Sabalausky (Abscissa):

> On 04/25/2018 10:31 PM, Nick Sabalausky (Abscissa) wrote:
>> 
>> Yea. Google's [complain, gripe, blah, blah, blah...]
> I found this to be a very interesting, and not particularly surprising,
> peek at the way things work^H^H^H^Hoperate inside Google-ville:
> 
> https://mtlynch.io/why-i-quit-google/
> 
> I guess it indirectly explains many things. Like why my Android device
> can't even handle basic WiFi things like...oh...not loosing my wireless
> password every-single-time. Or...connecting to another machine *on the
> same freaking network* without using a Google-hosted service (erm,
> sorry, I mean "cloud") as a go-between. Well, no matter, just like my
> laptop, I'll just ditch the pack-in OS in favor of Linux...oh
> wait...crap.

Maybe this will help:
https://puri.sm/shop/librem-5/

Hope is the last thing to die ;-)

-- 
Johannes


Re: Issues with debugging GC-related crashes #2

2018-04-19 Thread Johannes Pfau via Digitalmars-d
Am Thu, 19 Apr 2018 07:04:14 + schrieb Johannes Pfau:

> Am Thu, 19 Apr 2018 06:33:27 + schrieb Johannes Pfau:
> 
> 
>> Generally if you produced a crash in gdb it should be reproducible if
>> you restart the program in gdb. So once you have a crash, you should be
>> able to restart the program and look at the _dso_registry and see the
>> same addresses somewhere. If you then think you see memory corruption
>> somewhere you could also use read or write watchpoints.
>> 
>> But just to be sure: you're not adding any GC ranges manually, right?
>> You could also try to compare the GC range to the address range layout
>> in /proc/$PID/maps .
> 
> Of course, if this is a GC pool / heap range adding breakpoints in the
> sections code won't be useful. Then I'd try to add a write watchpoint on
> pooltable.minAddr / maxAddr, restart the programm in gdb and see where /
> why the values are set.

Having a quick look at https://github.com/ldc-developers/druntime/blob/
ldc/src/gc/pooltable.d: the GC seems to allocate multiple pools using 
malloc, but only keeps track of one minimum/maximum address for all 
pools. Now if some other memory area is malloc'd in between these pools, 
you end up with one huge memory block. When this block gets scanned and 
any of the memory in between the GC pools is protected, you might see the 
GC crash. However, I don't really know anything about the GC code, so a 
GC expert would have to confirm this.



-- 
Johannes


Re: Issues with debugging GC-related crashes #2

2018-04-19 Thread Johannes Pfau via Digitalmars-d
Am Thu, 19 Apr 2018 06:33:27 + schrieb Johannes Pfau:

> 
> Generally if you produced a crash in gdb it should be reproducible if
> you restart the program in gdb. So once you have a crash, you should be
> able to restart the program and look at the _dso_registry and see the
> same addresses somewhere. If you then think you see memory corruption
> somewhere you could also use read or write watchpoints.
> 
> But just to be sure: you're not adding any GC ranges manually, right?
> You could also try to compare the GC range to the address range layout
> in /proc/$PID/maps .

Of course, if this is a GC pool / heap range, adding breakpoints in the 
sections code won't be useful. Then I'd try to add a write watchpoint on 
pooltable.minAddr / maxAddr, restart the program in gdb and see where / 
why the values are set.

-- 
Johannes


Re: Issues with debugging GC-related crashes #2

2018-04-19 Thread Johannes Pfau via Digitalmars-d
Am Wed, 18 Apr 2018 22:24:13 + schrieb Matthias Klumpp:

> On Wednesday, 18 April 2018 at 22:12:12 UTC, kinke wrote:
>> On Wednesday, 18 April 2018 at 20:36:03 UTC, Johannes Pfau wrote:
>>> Actually this sounds very familiar:
>>> https://github.com/D-Programming-GDC/GDC/pull/236
>>
>> Interesting, but I don't think it applies here. Both start and end
>> addresses are 16-bytes aligned, and both cannot be accessed according
>> to the stack trace (`pbot=0x7fcf4d721010 > at address 0x7fcf4d721010>, ptop=0x7fcf4e321010 > memory at address 0x7fcf4e321010>`). That's quite interesting too:
>> `memSize = 209153867776`. Don't know what exactly it is, but it's a
>> pretty large number (~194 GB).
> 
> size_t memSize = pooltable.maxAddr - minAddr;
> (https://github.com/ldc-developers/druntime/blob/ldc/src/gc/impl/
conservative/gc.d#L1982
> )
> That wouldn't make sense for a pool size...
> 
> The machine this is running on has 16G memory, at the time of the crash
> the software was using ~2.1G memory, with 130G virtual memory due to
> LMDB memory mapping (I wonder what happens if I reduce that...)

I see. Then I'd try to debug where the range originally comes from, try 
adding breakpoints in _d_dso_registry, registerGCRanges and similar 
functions here: https://github.com/dlang/druntime/blob/master/src/rt/
sections_elf_shared.d#L421

Generally if you produced a crash in gdb it should be reproducible if you 
restart the program in gdb. So once you have a crash, you should be able 
to restart the program and look at the _dso_registry and see the same 
addresses somewhere. If you then think you see memory corruption 
somewhere you could also use read or write watchpoints.

But just to be sure: you're not adding any GC ranges manually, right?
You could also try to compare the GC range to the address range layout 
in /proc/$PID/maps .



-- 
Johannes


Re: Issues with debugging GC-related crashes #2

2018-04-18 Thread Johannes Pfau via Digitalmars-d
Am Wed, 18 Apr 2018 17:40:56 + schrieb Matthias Klumpp:
> 
> The crashes always appear in
> https://github.com/dlang/druntime/blob/master/src/gc/impl/conservative/
gc.d#L1990
> 

The important point to note here is that this is not one of these 'GC 
collected something because it was not reachable' bugs. A crash in the GC 
mark routine means it somehow scans an invalid address range. Actually, 
I've seen this before...


> Meanwhile, I also tried to reproduce the crash locally in a chroot, with
> no result. All libraries used between the machine where the crashes
> occur and my local machine were 100% identical,
> the only differences I am aware of are obviously the hardware (AWS cloud
> vs. home workstation) and the Linux kernel (4.4.0 vs 4.15.0)
> 
> The crash happens when built with LDC or DMD, that doesn't influence the
> result. Copying over a binary from the working machine to the crashing
> one also results in the same errors.


Actually this sounds very familiar:
https://github.com/D-Programming-GDC/GDC/pull/236

it took us quite some time to reduce and debug this:

https://github.com/D-Programming-GDC/GDC/pull/236/commits/
5021b8d031fcacac52ee43d83508a5d2856606cd

So I wondered why I couldn't find this in the upstream druntime code. 
Turns out our pull request has never been merged

https://github.com/dlang/druntime/pull/1678


-- 
Johannes


Re: [OT] gdc status

2018-04-13 Thread Johannes Pfau via Digitalmars-d-announce
Am Wed, 11 Apr 2018 16:44:32 +0300 schrieb drug:

> 11.04.2018 16:26, Uknown пишет:
>> On Wednesday, 11 April 2018 at 13:17:23 UTC, drug wrote:
>>> 11.04.2018 15:22, bachmeier пишет:
 On Wednesday, 11 April 2018 at 09:45:07 UTC, Jonathan M Davis wrote:
 ... Given that GDC has been added to GCC...
>>> Is it true? I don't see anything like that here
>>> https://gcc.gnu.org/gcc-8/changes.html
>> 
>> Here's relevant news from Phoronix:
>> 
>> https://www.phoronix.com/scan.php?page=news_item=D-Frontend-For-GCC
>> 
>> Here's the relevant announcement:
>> https://gcc.gnu.org/ml/gcc/2017-06/msg00111.html
> I've read it. Unfortunately it doesn't answer my question. I've heard
> there were some problems.

IIRC copyright stuff once again stalled further discussion and the 
relevant GCC guys are not responding:
https://www.mail-archive.com/gcc-patches@gcc.gnu.org/msg186124.html


-- 
Johannes


Re: Am I reading this wrong, or is std.getopt *really* this stupid?

2018-03-25 Thread Johannes Pfau via Digitalmars-d
Am Sat, 24 Mar 2018 17:24:28 -0400 schrieb Andrei Alexandrescu:

> On 3/24/18 12:59 PM, H. S. Teoh wrote:
>> On Sat, Mar 24, 2018 at 12:11:18PM -0400, Andrei Alexandrescu via
>> Digitalmars-d wrote:
>> [...]
>>> Anyhow. Right now the order of processing is the same as the lexical
>>> order in which flags are passed to getopt. There may be use cases for
>>> which that's the more desirable way to go about things, so if you
>>> author a PR to change the order you'd need to build an argument on why
>>> command-line order is better. FWIW the traditional POSIX doctrine
>>> makes behavior of flags independent of their order, which would imply
>>> the current choice is more natural.
>> 
>> So what about making this configurable?
> 
> That'd be great. I'm thinking something like an option
> std.getopt.config.commandLineOrder. Must be first option specified right
> after arguments. Sounds good?

I don't really understand why you want to keep this lexical-order 
functionality. There's a well-defined use case for command-line order: 
allowing users to write commands in a natural, left-to-right style, where 
options on the right are more specific: systemctl status -l ...

I've never heard of any use case where the lexical order of the arguments 
passed to getopt matters for parsing user-supplied command arguments. Is 
there any use case for this?

I thought the only reason we have this lexical-order parsing is that it's 
simpler to implement. But if we get the non-quadratic command-line order 
implementation, there's no reason to keep and maintain the quadratic 
implementation.

-- 
Johannes


Re: rvalues -> ref (yup... again!)

2018-03-24 Thread Johannes Pfau via Digitalmars-d
Am Sat, 24 Mar 2018 17:10:53 + schrieb Johannes Pfau:

> Am Sat, 24 Mar 2018 01:04:00 -0600 schrieb Jonathan M Davis:
> 
>> As it stands, because a function can't accept rvalues by ref, it's
>> usually reasonable to assume that a function accepts its argument by
>> ref because it's mutating that argument rather than simply because it's
>> trying to avoid a copy. If ref suddenly starts accepting rvalues, then
>> we lose that.
> 
> Any reason you can't simply use `ref` to imply 'modifies value' and
> `const ref` as 'passed by ref for performance reasons'?

Sorry, I see Manu already asked the same question.

-- 
Johannes


Re: rvalues -> ref (yup... again!)

2018-03-24 Thread Johannes Pfau via Digitalmars-d
Am Sat, 24 Mar 2018 01:04:00 -0600 schrieb Jonathan M Davis:

> As it stands, because a function can't accept rvalues by ref, it's
> usually reasonable to assume that a function accepts its argument by ref
> because it's mutating that argument rather than simply because it's
> trying to avoid a copy. If ref suddenly starts accepting rvalues, then
> we lose that.

Any reason you can't simply use `ref` to imply 'modifies value' and 
`const ref` as 'passed by ref for performance reasons'?

-- 
Johannes


Re: dmd -unittest= (same syntax as -i)

2018-03-16 Thread Johannes Pfau via Digitalmars-d
Am Thu, 15 Mar 2018 23:21:42 + schrieb Jonathan Marler:

> On Thursday, 15 March 2018 at 23:11:41 UTC, Johannes Pfau wrote:
>> Am Wed, 14 Mar 2018 14:22:01 -0700 schrieb Timothee Cour:
>>
>>> [...]
>>
>> And then we'll have to add yet another "-import" switch for DLL
>> support. Now we have 3 switches doing essentially the same: Telling the
>> compiler which modules are currently compiled and which modules are
>> part of an external library. Instead of just using the next best simple
>> solution, I think we should take a step back, think about this and
>> design a proper, generic solution.
>>
>> [...]
> 
> I had the same idea but mine was to add this metadata in the library
> file itself instead of having it as a separate file.

This is to some degree nicer, as it allows for self-contained 
distribution. But then you have to support different library formats, 
it's more difficult to include support in IDEs, and it's more difficult 
to extend the format.

> However, this
> design is "orthogonal" to -i= and -unittest=,  in both cases you may
> want to include/exclude certain modules regardless of whether or not
> they are in a library.

When would this be the case for -i? You never want to include modules in 
compilation which are in an external library you also link to, as you'll 
get duplicate symbol errors. I also don't see why you would want to 
exclude a module from compilation which is imported somewhere and not in 
any external library. Maybe to avoid generating the ModuleInfo and 
TypeInfo for 'header only' modules. But then you're breaking the 
assumption that typeid(X) works for any type, so excluding modules from 
compilation can't be a recommended practice.

For -unittest, I can see that you may sometimes want to test across 
library boundaries, but then you'd have to keep the tests out of the 
library anyway. I guess there could be cases where you want to 
instantiate a templated unittest for some type specialization, but I've 
never seen a real-world use of that. Excluding local modules from 
unittesting may be more useful, but I think version() and runtime 
selection of modules to be tested should cover basically all use cases.

Can you explain in some more detail what use cases you think of?

-- 
Johannes


Re: dmd -unittest= (same syntax as -i)

2018-03-15 Thread Johannes Pfau via Digitalmars-d
Am Wed, 14 Mar 2018 14:22:01 -0700 schrieb Timothee Cour:

> would a PR for `dmd -unittest= (same syntax as -i)` be welcome?
> wouldn't that avoid all the complicatiosn with version(StdUnittest) ?
> eg use case:
> 
> # compile with unittests just for package foo (excluding subpackage
> foo.bar)
> dmd -unittest=foo -unittest=-foo.bar -i main.d

And then we'll have to add yet another "-import" switch for DLL support. 
Now we have 3 switches doing essentially the same: Telling the compiler 
which modules are currently compiled and which modules are part of an 
external library. Instead of just using the next best simple solution, I 
think we should take a step back, think about this and design a proper, 
generic solution.

Then instead of having to use up to six flags (1) to link a library, we 
can use one and even end up with a better user experience than what we 
have now or what C++ provides:

(1)
dmd -I /libfoo -L-Bstatic -L-llibfoo -i main.d -unittest main.d -import 
libfoo*

(2)
dmd -library=foo:static

The key here is to realize that all the necessary information for 
unittests, DLL imports, library linking and automatically finding source 
dependencies is knowing the library layout. The compiler needs to know 
which modules are externally in libraries and which are currently built*.

In order to achieve this, we should define a standardized library 
metadata file which lists:
* source files belonging to that library (replaces -i, -import and 
  -unittest=)
* (relative) source path (to replace -I)
* library name (to replace -L)
* optional linker flags

Some of these are actually covered by the pkg-config format, which is 
extensible and already used by meson when building D projects. Then we 
simply need the compiler to parse these files (optionally also generate 
them) and we can have a much cleaner user experience.
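To make that concrete, a metadata file in an extended pkg-config style 
could look roughly like this (the standard .pc keys are real; every 
d_-prefixed key below is hypothetical, sketched from the list above):

```
# libfoo metadata, pkg-config style
Name: foo
Description: Example D library
Version: 1.2.0
Libs: -L${libdir} -lfoo
Cflags: -I${includedir}/d

# hypothetical D-specific extensions:
d_source_path: ${includedir}/d
d_modules: foo.bar foo.baz foo.internal.impl
d_documentation: https://example.org/docs/foo
```

From the d_modules list the compiler can derive -i/-import/-unittest= 
behavior; from Libs it can derive the linker flags.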

As these files are extensible, you can also add documentation URIs, for 
example. In the end, you could simply open your IDE, browse a list of 
libraries (even online, if we map names 1:1 to dub names), select the 
library, and the IDE could provide documentation. The compiler then 
transparently handles includes, library linking, and so on.

I think I'll start writing a DIP for this on Sunday but given the state 
of the DIP queue it'll take some time till this will get reviewed. 


* (There actually may be some corner cases where you want to exclude 
unittests for some modules which are actually compiled. I think version 
statements are fine for that use case. We should try to find other special 
use cases and see whether this simple, generic approach works for all of 
them.)
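
The version-statement escape hatch mentioned in the footnote could look like this (the version identifier is invented for illustration):

```d
module libfoo.internal;

// This module is compiled into the library, but its unittests only
// run when the build defines -version=LibFooExtraTests.
version (LibFooExtraTests)
{
    unittest
    {
        assert(2 + 2 == 4);
    }
}
```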

-- 
Johannes


Re: reduce mangled name sizes via link-time symbol renaming

2018-01-25 Thread Johannes Pfau via Digitalmars-d
Am Thu, 25 Jan 2018 14:24:12 -0800
schrieb Timothee Cour :

> could a solution like proposed below be adapted to automatically
> reduce size of long symbol names?
> 
> It allows final object files to be smaller; eg see the problem this
> causes:
> 
> * String Switch Lowering:
> http://forum.dlang.org/thread/p4d777$1vij$1...@digitalmars.com
> caution: NSFW! contains huge mangled symbol name!
> * http://lists.llvm.org/pipermail/lldb-dev/2018-January/013180.html
> "[lldb-dev] Huge mangled names are causing long delays when loading
> symbol table symbols")
> 
> 
> ```
> main.d:
> void foo_test1(){ }
> void main(){ foo_test1(); }
> 
> dmd -c libmain.a
> 
> ld -r libmain.a -o libmain2.a -alias _D4main9foo_test1FZv _foobar
> -unexported_symbol _D4main9foo_test1FZv
> # or : via `-alias_list filename`
> 
> #NOTE: dummy.d only needed because somehow dmd needs at least one
> object file or source file, a static library is somehow not enough
> (dmd bug?)
> 
> dmd -of=main2 libmain2.a dummy.d
> 
> nm main2 | grep _foobar # ok
> 
> ./main2 # ok
> ```
> 
> NOTE: to automate this process it could find all symbol names >
> threshold and apply a mapping form long mangled names to short aliases
> (eg: object_file_name + incremented_counter), that file with all the
> mappings can be supplied for a demangler (eg for lldb/gdb debugging
> etc)

What is the benefit of using link-time renaming (a linker-specific
feature) instead of directly renaming the symbol in the compiler? We
could be quite radical and hash all symbols above a certain threshold. As
long as we have a hash function with strong enough collision resistance,
there shouldn't be any problem.

AFAICS we only need the mapping hashed_name ==> full name for
debugging. So maybe we can simply stuff the full mangled name somehow
into DWARF debug information? We can even keep DWARF debug information
in external files, and support for this is just being added to GCC's
libbacktrace, so even stack traces could work fine.
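
A rough sketch of the compiler-side hashing idea (the threshold value and the `_D__hashed_` naming scheme are invented for illustration; a real implementation would live in the mangler):

```d
import std.digest.sha : sha256Of;
import std.digest : toHexString;

// Replace over-long mangled names with a fixed-size hash.
// SHA-256 truncated to 128 bits still has enough collision
// resistance for this purpose in practice.
string shortenMangledName(string mangled, size_t threshold = 6_000)
{
    if (mangled.length <= threshold)
        return mangled;            // short names stay human-readable
    auto digest = sha256Of(mangled);   // ubyte[32]
    auto hex = toHexString(digest);    // char[64], hex-encoded
    return "_D__hashed_" ~ hex[0 .. 32].idup;
}
```

The full-name-to-hash mapping could then be emitted into the (possibly external) DWARF data so debuggers and demanglers can recover the original symbol.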

-- Johannes



Re: @ctfeonly

2017-12-07 Thread Johannes Pfau via Digitalmars-d
Am Thu, 7 Dec 2017 13:38:54 -0800
schrieb Walter Bright :

> On 12/6/2017 11:41 PM, Mike Franklin wrote:
> > On Thursday, 7 December 2017 at 04:45:15 UTC, Jonathan M Davis
> > wrote: 
> >> The simplest way to do that is to write a unit test that uses a
> >> static assertion. As I understand it, with the way CTFE works, it
> >> pretty much can't know whether a function can be called at compile
> >> time until it tries, but a unit test can catch it if the function
> >> no longer works at compile time.  
> > 
> > Not bad, but that is swaying into the cumbersome category.  If
> > that's the best we can do, a @ctfeonly attribute starts looking
> > pretty good.  
> 
> More and more attributes to do essentially trivial things is
> cumbersomeness all on its own.

I think this is more of an optimization UDA than a standard attribute.
So it's similar to all the noinline, forceinline, weak, section etc.
attributes: https://wiki.dlang.org/Using_GDC#Attributes

-- Johannes



Re: @ctfeonly

2017-12-07 Thread Johannes Pfau via Digitalmars-d
Am Thu, 07 Dec 2017 01:32:35 -0700
schrieb Jonathan M Davis :

> 
> In the vast majority of cases, when a function is used for CTFE, it's
> also used during runtime. So, in most cases, you want to ensure that
> a function works both with CTFE and without, and in those cases
> something like @ctfeonly wouldn't make any sense. In my experience,
> pretty much the only time that something like @ctfeonly would make
> any sense would be with a function for generating a string mixin.

Not only string mixins. When programming for microcontrollers you want
to do as much in CTFE as possible, as space for executable code is
severely limited. So you may for example want to use CTFE to generate
some lookup tables and similar stuff. Basically the whole
'initialize a variable / constant using CTFE' idiom benefits a lot from
such an attribute.
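
For example, forcing a lookup table into CTFE so that only the data, not the generator, ends up in flash (a sketch; the polynomial is the standard CRC-32 one):

```d
// Builds a 16-entry CRC-32 nibble lookup table.
uint[16] genCrcTable()
{
    uint[16] table;
    foreach (i, ref entry; table)
    {
        uint c = cast(uint) i;
        foreach (_; 0 .. 4)
            c = (c & 1) ? 0xEDB8_8320 ^ (c >> 1) : c >> 1;
        entry = c;
    }
    return table;
}

// The initializer of a module-level immutable is evaluated at compile
// time, so genCrcTable itself never needs to exist in the binary --
// exactly the case a @ctfeonly attribute would document and enforce.
immutable uint[16] crcTable = genCrcTable();
```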

-- Johannes



Re: @ctfeonly

2017-12-07 Thread Johannes Pfau via Digitalmars-d
Am Wed, 06 Dec 2017 20:18:57 -0700
schrieb Jonathan M Davis :

> Folks have talked about all kinds of template code and stuff being
> kept around in binaries even though it was only used at compile time
> (e.g. stuff like isInputRange), but I don't know how much that's
> actually true.

You probably never call isInputRange at runtime, so the code is likely
stripped. However, TypeInfo of structs used only at CTFE is still
generated and not stripped. I remember we once had this problem with
gcc.attribute, a module which shouldn't generate any code but generated
useless TypeInfo.

-- Johannes



Re: @ctfeonly

2017-12-07 Thread Johannes Pfau via Digitalmars-d
Am Thu, 7 Dec 2017 05:55:54 +0200
schrieb ketmar :

> ketmar wrote:
> 
> > Nicholas Wilson wrote:
> >  
> >> Also not generating the code in the first place means less I/O for
> >> the compiler and less work for the linker.  
> > this is solvable without any additional flags, tho: compiler should
> > just skip codegen phase for any function that is not referenced by
> > another compiled function (except for library case).  
> 
> p.s.: actually, dmd already creates .a files suitable for
> smartlinking (otherwise any binary would include the whole
> libphobos2.a ;-). but for "dmd mycode.d" dmd doesn't do this ('cause
> it is quite compilcated for non-library case). the whole issue prolly
> can be solved by "generate smart object files for linking" flag
> (which will create .a files for everything, so linker can do it's
> smart work).

AFAIK there's a design flaw in D which prevents a compiler from
doing any such operations without additional user input:

Currently you can write code like this:
---
module mod;

private int thisIsNeverUsed()
{
return 42;
}

private int thisIsUsed(int a)
{
return 42 + a;
}

int someTemplate(T)(T t)
{
return t.thisIsUsed();
}
---

Whether thisIsUsed and thisIsNeverUsed actually have to appear in the
object file depends on how someTemplate is instantiated. Generally, when
compiling module mod you can never know whether thisIsUsed or
thisIsNeverUsed are actually required. You can not evaluate the
someTemplate template without specifying a concrete type for T. 

This means neither the compiler nor the linker can remove seemingly
unused, private functions. For GDC this means we simply mark all
functions as TREE_PUBLIC in the GCC backend.

Note that this is also an issue for exporting templates from DLLs on
Windows. I think the DLL DIP, which requires marking private functions
as 'export' if they are used outside of the module (even via
templates), will fix this problem and allow for some nice optimizations.
Until then, smart linking isn't really possible.

BTW: The private/export combination probably still wouldn't solve all
problems: Maybe you want to mark the whole module as @nogc. @nogc
checking is done in semantic phase, so it will still error about GC
usage in functions which later turn out to be only used in CTFE.
Detecting this in the linker or compiler backend is too late. So we'd
have to detect unexported, unused private functions in semantic. I'm
not sure if this is feasible or whether a simple @ctfeonly UDA isn't
much simpler to implement.

Additionally @ctfeonly documents intended usage and allows for nice
error messages when using a function at runtime. Relying on the linker
to remove private, unexported functions can break easily.

-- Johannes



Re: Language server protocol

2017-11-16 Thread Johannes Pfau via Digitalmars-d
Am Thu, 16 Nov 2017 19:09:14 +
schrieb Arun Chandrasekaran :

> Is someone working on D community to implement 
> https://langserver.org ?
> 
> What will the D community miss out if we ignore LSP?
> 
> PS: HackerPilot's tools are very helpful.

https://github.com/Pure-D/serve-d


-- Johannes



Re: TLS + LDC + Android (ARM) = FAIL

2017-11-01 Thread Johannes Pfau via Digitalmars-d
Am Wed, 01 Nov 2017 19:24:42 +
schrieb Joakim <dl...@joakim.fea.st>:

> On Wednesday, 1 November 2017 at 18:28:12 UTC, Johannes Pfau 
> wrote:
> > ARM: Fine. Android: probably won't work well. AFAIK we're only 
> > missing emulated TLS / GC integration, so most test will pass 
> > but real apps will crash because of GC memory corruption. I 
> > guess I should finally get back to fixing that problem ;-) OTOH 
> > Android doesn't even support GCC anymore, so I don't really see 
> > much benefit in maintaining GDC Android support.  
> 
> I don't see what their deciding to drop gcc has to do with 
> whether gdc should support Android.

If there's a backend bug in GCC related to Android nobody will take
responsibility as it's not officially supported. All tutorials and
documentation will focus on LLVM based compilers. It's certainly still
interesting to use GDC for Android, but it is more work (especially
considering Android is one of the few systems requiring emutls) for
little benefit, especially if most users are going to use LLVM
anyway.

With the limited time available, I think GDC should focus on systems
where GCC is a first-class or even the preferred compiler:
X86/MIPS/ARM/PPC Linux, as some distributions such as Debian might prefer
a GCC-based compiler. Then there are embedded toolchains which
primarily use GCC: MSP and ARM (the GCC ARM Embedded project). Also, many
console homebrew toolchains exclusively use GCC.

I just don't think we have to support two compilers for any target
with the little resources we have.


-- Johannes



Re: TLS + LDC + Android (ARM) = FAIL

2017-11-01 Thread Johannes Pfau via Digitalmars-d
Am Wed, 01 Nov 2017 18:32:37 +
schrieb Igor Shirkalin <maths...@inbox.ru>:

> On Wednesday, 1 November 2017 at 18:28:12 UTC, Johannes Pfau 
> wrote:
> > Am Wed, 01 Nov 2017 17:42:22 +
> > schrieb David Nadlinger <c...@klickverbot.at>:
> >  
> >> On Wednesday, 1 November 2017 at 17:30:05 UTC, Iain Buclaw 
> >> wrote:  
> >> > [...]  
> >> 
> >> Or quite possibly fewer, depending on what one understands 
> >> "platform" and "support" to mean. ;)
> >> 
> >> What is the state of GDC on Android/ARM – has anyone been 
> >> using it recently?
> >> 
> >>   — David
> >>   
> >
> > ARM: Fine. Android: probably won't work well. AFAIK we're only 
> > missing emulated TLS / GC integration, so most test will pass 
> > but real apps will crash because of GC memory corruption. I 
> > guess I should finally get back to fixing that problem ;-) OTOH 
> > Android doesn't even support GCC anymore,  
> 
> > so I don't really see much benefit in maintaining GDC Android 
> > support.
> > -- Johannes  
> 
> That's too bad. I'd do it here for food.
> 
> - Igor
> 

I understand that D support for Android is important. I just think that
if Google now supports only LLVM for Android, focusing on an LLVM-based
compiler such as LDC is a more reasonable way to support Android.


-- Johannes



Re: TLS + LDC + Android (ARM) = FAIL

2017-11-01 Thread Johannes Pfau via Digitalmars-d
Am Wed, 01 Nov 2017 18:06:29 +
schrieb Joakim :

> On Wednesday, 1 November 2017 at 17:24:32 UTC, Igor Shirkalin 
> wrote:
> > We solved the subject with modifying druntime source related 
> > with tls. Imaging, we have lost a lot of D's features.
> > As far as I know DMD or GDC are not available for ARM 
> > architecture. So we need LDC.
> > A short story: we have big C/C++ project that links D (LDC) 
> > code for different platforms.
> >
> > Does new "-betterC" mean we may use parallelism with using 
> > separate linker?
> >
> > - IS  
> 
> If you're having problems with the emulated TLS I put together 
> for Android, it is most likely because I didn't document well 
> what needs to be done when linking for Android.  Specifically, 
> there are three rules that _must_ be followed:
> 
> 1. You must use the ld.bfd linker, ld.gold won't do.
> 2. You must have a D main function, even for a shared library 
> (which can be put next to android_main, if you're using the 
> default Android wrapper from my D android library).
> 3. The ELF object with the D main function must be passed to the 
> linker first.
> 
> If you look at my examples on the wiki, you'll see that they all 
> follow these rules:
> 
> https://wiki.dlang.org/Build_D_for_Android
> 
> I should have called these rules out separately though, like I'm 
> doing here, a documentation oversight.

Also, when mixing D and C code, you can't access extern TLS variables
across the language boundary. Maybe the OP is trying to do that, as he
mixes D and C code?

-- Johannes



Re: TLS + LDC + Android (ARM) = FAIL

2017-11-01 Thread Johannes Pfau via Digitalmars-d
Am Wed, 01 Nov 2017 17:42:22 +
schrieb David Nadlinger :

> On Wednesday, 1 November 2017 at 17:30:05 UTC, Iain Buclaw wrote:
> > GDC supports the same or maybe more platforms than LDC. :-)  
> 
> Or quite possibly fewer, depending on what one understands 
> "platform" and "support" to mean. ;)
> 
> What is the state of GDC on Android/ARM – has anyone been using 
> it recently?
> 
>   — David
> 

ARM: Fine. Android: probably won't work well. AFAIK we're only
missing emulated TLS / GC integration, so most tests will pass but real
apps will crash because of GC memory corruption. I guess I should
finally get back to fixing that problem ;-) OTOH Android doesn't even
support GCC anymore, so I don't really see much benefit in maintaining
GDC Android support.


-- Johannes



Re: D for Android

2017-09-19 Thread Johannes Pfau via Digitalmars-d
Am Tue, 19 Sep 2017 12:38:15 +
schrieb twkrimm :

> On Tuesday, 19 September 2017 at 07:44:47 UTC, Andrea Fontana 
> wrote:
> > On Tuesday, 19 September 2017 at 03:25:08 UTC, Joakim wrote:  
> >> Next up, 32-bit ARM Android devices are now supported, I'm 
> >> looking at getting 64-bit AArch64 Android up and running.  
> >
> > Keep it up!
> > Andrea  
> 
> Joakim
> 
> I think the  Atmel processors (AVR) that Microchhip bought are 
> 32-bit ARM based.
> It would be neat to develop D programs for limited resource 
> processors.

OT, but Atmel produces:
* AVR 8-bit microcontrollers (custom AVR architecture)
* AVR32 32-bit microcontrollers (custom AVR32 architecture)
* ARM-based products (the SAM* series): ARM7, ARM9, Cortex-M/Cortex-A

Microchip additionally maintains custom 8-, 16- and 32-bit PIC
architectures.

Joakim's Android work is much appreciated, but for these types of
bare-metal controllers you'll have to look at Mike's work:
https://github.com/JinShil/stm32f42_discovery_demo

This is for 32-bit ARM only. I wrote a proof-of-concept hello world for
AVR 8-bit controllers (blink an LED) some time ago. On the compiler
side, not much is missing, and betterC-related changes fix most compiler
problems. What you really need though is register definitions, and
nobody has written those for AVR 8-bit controllers yet.

-- Johannes



Re: RFC: Implementation of binary assignment operators (e.g s.x += 2) for @property functions

2017-08-15 Thread Johannes Pfau via Digitalmars-d
Am Tue, 15 Aug 2017 07:52:17 +
schrieb Gary Willoughby :

> On Tuesday, 15 August 2017 at 03:53:44 UTC, Michael V. Franklin 
> wrote:
> > An implementation of binary assignment operators for @property 
> > functions has been submitted to the DMD pull request queue at 
> > https://github.com/dlang/dmd/pull/7079.  It addresses the 
> > following issues:
> >
> > Issue 8006 - Implement proper in-place-modification for 
> > properties
> > https://issues.dlang.org/show_bug.cgi?id=8006  
> 
> I thought @property's behaviour had been removed from the 
> language and even though the attribute remains, it doesn't 
> actually do anything?
> 
> 

You're probably thinking of the special optional/non-optional
parenthesis rules and the -property compiler switch which was removed.

@property should still be used according to the style guide:
https://dlang.org/dstyle.html
and as far as I can tell it's heavily used in phobos.

Properties behave more like field variables in some traits:
http://dlang.org/spec/traits.html
https://dlang.org/phobos/std_traits.html

But I think that's the only relevant difference between properties and
normal functions right now.
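
To illustrate the gap the pull request addresses (a sketch, not the PR's own test case):

```d
struct Temperature
{
    private int _celsius;
    @property int celsius() const { return _celsius; }
    @property void celsius(int v) { _celsius = v; }
}

void main()
{
    Temperature t;
    t.celsius = 20;       // setter call, works today
    int c = t.celsius;    // getter call, works today
    // t.celsius += 5;    // issue 8006: requires lowering to
    //                    // t.celsius(t.celsius() + 5)
}
```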

-- Johannes



Re: RFC: Implementation of binary assignment operators (e.g s.x += 2) for @property functions

2017-08-15 Thread Johannes Pfau via Digitalmars-d
Am Tue, 15 Aug 2017 03:53:44 +
schrieb Michael V. Franklin :

> We ask for your comments whether they be in approval or 
> disapproval of this pull request so we can determine the best way 
> forward.
> 
> Thank you,
> Michael V. Franklin

+1. Then @property finally becomes useful ;-)


-- Johannes



Re: D on AArch64 CPU

2017-08-09 Thread Johannes Pfau via Digitalmars-d-learn
Am Sun, 14 May 2017 15:05:08 +
schrieb Richard Delorme :

> I recently bought the infamous Raspberry pi 3, which has got a 
> cortex-a53 4 cores 1.2 Ghz CPU (Broadcom). After installing on it 
> a 64 bit OS (a non official fedora 25), I was wondering if it was 
> possible to install a D compiler on it.
> 

> I finally try GDC, on 6.3 gcc, and with support of version 2.68 
> of the D language. After struggling a little on a few 
> phobos/druntime files, I got a compiler here too:
> $ gdc --version
> gdc (GCC) 6.3.0
> Copyright © 2016 Free Software Foundation, Inc.
> 

Iain recently updated GDC & phobos up to 2.074 and we have a pull
request for 2.075. So don't worry about fixing old GDC phobos/druntime
versions, recent gdc git branches should already have AArch64 phobos
changes.

We have a test runner for AArch64 and GDC master here:
https://buildbot.dgnu.org/#/builders/2/builds/29

There are still some failing test suite tests though and AFAICS we
currently don't build phobos on that CI at all.

(We can run ARM/AArch64 tests without special hardware, thanks to
QEMU's user mode emulation)

-- Johannes



Re: Visual Studio Code code-d serve-d beta release

2017-08-08 Thread Johannes Pfau via Digitalmars-d-announce
Am Tue, 08 Aug 2017 17:13:18 +
schrieb WebFreak001 :

> On Tuesday, 8 August 2017 at 08:03:05 UTC, Arjan wrote:
> > Small request: could the setting "d.stdlibPath" be inferred 
> > from the compiler in use? DMD and LDC both have a conf file in 
> > which the paths are already set.  
> 
> oh cool I didn't know that, is there a standard path to where 
> these conf files are though?

The D frontend (and therefore all compilers) already has code to print
the import paths. Unfortunately this code is only used when an import
is not found:
--
test.d:1:8: Error: module a is in file 'a.d' which cannot be read
 import a;
^
import path[0] = /usr/include/d
import path[1] = /opt/gdc/lib/gcc/x86_64-unknown-linux-gnu/4.9.4/include/d
--

It should be trivial though to refactor this code and add a
command-line switch to dump the import path. See Module::read in
dmodule.c. If Walter opposes adding this to DMD (one more command line
switch!) we could probably still add it to GDC glue. This code is all
you need:

if (global.path)
{
    for (size_t i = 0; i < global.path->dim; i++)
    {
        const char *p = (*global.path)[i];
        fprintf(stderr, "import path[%llu] = %s\n", (ulonglong)i, p);
    }
}


-- Johannes



Re: SVD_to_D: Generate over 100k lines of highly-optimized microcontroller mmapped-IO code in the blink of an eye

2017-08-01 Thread Johannes Pfau via Digitalmars-d-announce
Am Mon, 31 Jul 2017 08:51:16 +
schrieb Mike :

> https://github.com/JinShil/svd_to_d
> 
> SVD_to_D is a command-line utility that generates D code from ARM 
> Cortex-M SVD files.
> 
> SVD files are XML files that describe, in great detail, the 
> memory layout and characteristics of registers in an ARM Cortex-M 
> microcontroller. See 
> https://github.com/posborne/cmsis-svd/tree/master/data for a 
> curated list of SVD files for many ARM Cortex-M microcontrollers 
> from various silicon vendeors.
> 
>  From the information in an SVD file, code for accessing the 
> microcontroller's memory-mapped-io registers can be automatically 
> generated, and SVD_to_D does exactly that.

Nice work! SVD seems to be an ARM standard / initiative? I wish there
was something similar for MSP/AVR/PIC controllers.


-- Johannes



Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-28 Thread Johannes Pfau via Digitalmars-d
Am Thu, 27 Jul 2017 23:38:33 +
schrieb Nicholas Wilson :

> On Thursday, 27 July 2017 at 15:48:04 UTC, Olivier FAURE wrote:
> > On Thursday, 27 July 2017 at 14:44:23 UTC, Mike Parker wrote:  
> >> DIP 1012 is titled "Attributes".
> >>
> >> https://github.com/dlang/DIPs/blob/master/DIPs/DIP1012.md  
> >
> > This DIP proposes a very complex change (treating attributes as 
> > Enums), but doesn't really provide a rationale for these 
> > changes.  
> 
> It is actually a very simple change, from the end user 
> perspective.
> * Function attributes that were keyword like, become regular 
> attributes.
> * They can be applied to modules, acting as a default for 
> applicable symbols in the module.
> 

I think it also makes sense from a compiler perspective. When these
attributes were introduced, we didn't have UDAs yet. Then we introduced
UDAs, and now UDAs are more full-featured than the original hardcoded
compiler attributes:

UDAs do not introduce names into the global namespace, UDAs can use
fully qualified names, and multiple UDAs can be combined or aliased (as
commonly done in C for DLL export attributes using #defines; we can't
do such things in D right now). So I think moving the compiler
attributes to UDAs is certainly useful.

But it seems this DIP fails to stress the rationale and confuses people
with some implementation detail. I think it's important to show the
simple use cases, where object.d auto imports everything and aliases
are used so you can use normal @nogc void foo()... syntax. Then maybe
show how to group or alias attributes.
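
A sketch of the grouping/aliasing use case under DIP 1012 style attributes (hypothetical: the `core.attribute` enum names and the final application syntax are not settled, so treat this as illustrative only):

```d
// Hypothetical DIP 1012 usage -- attribute spellings assumed, not final.
import core.attribute : nogc, safe;  // assumed home of the enum attributes
import std.meta : AliasSeq;

// Group commonly combined attributes under one name, much like
// '#define DLLEXPORT __declspec(dllexport)' groups things in C headers.
alias quick = AliasSeq!(nogc, safe);

@quick int twice(int x) { return 2 * x; }
```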

-- Johannes



Re: C style 'static' functions

2017-07-19 Thread Johannes Pfau via Digitalmars-d-learn
Am Wed, 19 Jul 2017 19:18:03 +
schrieb Petar Kirov [ZombineDev] <petar.p.ki...@gmail.com>:

> On Wednesday, 19 July 2017 at 18:49:32 UTC, Johannes Pfau wrote:
> >
> > Can you explain why _object-level visibility_ would matter in 
> > this case?  
> 
> (I'm sure you have more experience with shared libraries than me, 
> so correct me if I'm wrong)
> 
> We can't do attribute inference for exported functions because 
> changing the function body may easily change the function 
> signature (-> name mangling) and break clients of the (shared) 
> library. Therefore, it follows that attribute inference can only 
> be done for non-exported functions.

OK, I didn't think of the stable ABI argument, that indeed does make
sense. Leads to the strange consequence though that private functions
called from templates need to be exported and therefore can't use
inference.

OT: if a private function is exported and called from a public
template, things are difficult either way. Such a function needs to be
considered 'logically' public: as the template code instantiated
in another library will not get updated when you update the library
with the private function, you also have to ensure that the program
logic is still valid when mixing a new implementation of the private
function with an old implementation of the template function.

-- Johannes



Re: C style 'static' functions

2017-07-19 Thread Johannes Pfau via Digitalmars-d-learn
Am Wed, 19 Jul 2017 17:37:48 +
schrieb Kagamin :

> On Wednesday, 19 July 2017 at 15:28:50 UTC, Steven Schveighoffer 
> wrote:
> > I'm not so sure of that. Private functions still generate 
> > symbols. I think in C, there is no symbol (at least in the 
> > object file) for static functions or variables.  
> 
> They generate hidden symbols. That's just how it implements 
> private functions in C: you can't do anything else without 
> mangling.

This is not entirely correct. The symbols are local symbols in elf
terminology, so local to an object file. Hidden symbols are local to an
executable or shared library.

> You probably can't compile two C units into one object 
> file if they have static functions with the same name - this 
> would require mangling to make two symbols different.

1) C does have mangling for static variables:
void foo() {static int x;}
==> .local  x.1796

2)
Object file? No, but you can't compile two translation units into one
object file anyway, or declare two functions with the same name in one
translation unit.
For executables and libraries, ELF takes care of this. One major use case
of static functions is not polluting the global namespace.

---
static int foo(int a, int b)
{
return a + b + 42;
}

int bar(int a, int b)
{
return foo(a, b);
}
---
nm =>
0017 T bar
 t foo

---
static int foo(int a, int b)
{
return -42;
}

int bar(int a, int b);

int main()
{
return bar(1, 2);
}
---
nm =>
 U bar
 t foo
 U _GLOBAL_OFFSET_TABLE_
0011 T main

nm a.out | grep foo =>
063a t foo
0670 t foo

Additionally, when compiling with optimizations both foos are gone: All
calls are inlined, the functions are never referenced and therefore
removed. This can reduce executable size a lot if you have many local
helper functions, so D may benefit from this optimization as well.


-- Johannes



Re: C style 'static' functions

2017-07-19 Thread Johannes Pfau via Digitalmars-d-learn
Am Wed, 19 Jul 2017 17:25:18 +
schrieb Petar Kirov [ZombineDev] :


> >
> > Note: not 100% sure of all this, but this is always the way 
> > I've looked at it.  
> 
> You're probably right about the current implementation, but I was 
> talking about the intended semantics. I believe that with DIP45, 
> only functions and global variables annotated with the export 
> storage class would necessary have externally visible symbols.
> 

Yes, this DIP is the solution to have true C-like static functions.
Non-exported private will then be equivalent to C static.

> Also, consider this enhancement request (which I think Walter and 
> Andrei approve of) - 
> https://issues.dlang.org/show_bug.cgi?id=13567 - which would be 
> doable only if private functions don't have externally visible 
> symbols.

Can you explain why _object-level visibility_ would matter in this case?

-- Johannes



Re: C style 'static' functions

2017-07-19 Thread Johannes Pfau via Digitalmars-d-learn
On Wednesday, 19 July 2017 at 15:28:50 UTC, Steven Schveighoffer 
wrote:

On 7/19/17 8:16 AM, Petar Kirov [ZombineDev] wrote:

On Wednesday, 19 July 2017 at 12:11:38 UTC, John Burton wrote:

On Wednesday, 19 July 2017 at 12:05:09 UTC, Kagamin wrote:

Try a newer compiler, this was fixed recently.


Hmm it turns out this machine has 2.0.65 on which is fairly 
ancient. I'd not realized this machine had not been updated.


Sorry for wasting everyones' time if that's so, and thanks 
for the help.


Just for the record, private is the analog of C's static. All 
private free and member functions are callable only from the 
module they are defined in. This is in contrast with C++, 
Java, C# where private members are visible only the class they 
are defined in.


I'm not so sure of that. Private functions still generate 
symbols. I think in C, there is no symbol (at least in the 
object file) for static functions or variables.


You could still call a private function in a D module via the 
mangled name I believe.


-Steve

Note: not 100% sure of all this, but this is always the way 
I've looked at it.


That's correct. We unfortunately can't do certain optimizations 
because of this (executable size related: removing unused or 
inlined only functions, ...).


The reason we can't make private functions object local is 
templates. A public template can access private functions, but 
the template instance may be emitted to another object. And as 
templates can't be checked speculatively, we don't even know if 
there's a template accessing a private function.


Dlls on Windows face a similar problem. Once we get the export 
templates proposed in earlier Dll discussions we can make 
non-exported, private functions object local.


Re: Some GC and emulated TLS questions (GDC related)

2017-07-16 Thread Johannes Pfau via Digitalmars-d
Am Sun, 16 Jul 2017 14:48:04 +0200
schrieb Iain Buclaw via Digitalmars-d :

> 
> I sense a revert coming on...
> 
> https://github.com/D-Programming-GDC/GDC/commit/cf5e9e323b26d21a652bc2933dd886faba90281c
> 
> Iain.

Correct, though more in a metaphorical sense ;-)

Ideally, I'd want a Boost-licensed, high-level D implementation in
core.thread. Instead of using __gthread get/setspecific, we simply add a
GC-managed (i.e. plain stupid) void[][] _tlsVars array to
core.thread.Thread, use core.sync for locking and core.atomic to manage
array indices. With all the high-level stuff we can reuse from druntime
(resizing/reserving arrays), such an implementation is probably < 100
LOC. Most importantly, as we can't overwrite the functions in libgcc,
we'd also use custom function names (__d_emutls_get_address).
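
A sketch of what that high-level implementation could look like (all names are invented; locking, variable destructors and betterC concerns are omitted):

```d
module core.internal.emutls; // hypothetical location

import core.atomic : atomicOp;

shared size_t nextVarIndex;  // one global slot index per emulated TLS variable

// Called once per TLS variable, e.g. on its first access.
size_t registerTlsVariable()
{
    return atomicOp!"+="(nextVarIndex, 1) - 1;
}

// One instance per thread, reachable from core.thread.Thread so the
// GC scans -- but never collects -- the thread's TLS data.
class EmuTlsStore
{
    private void[][] slots;

    void* address(size_t index, size_t size)
    {
        if (slots.length <= index)
            slots.length = index + 1;        // grow lazily, GC-allocated
        if (slots[index] is null)
            slots[index] = new void[](size); // zero-initialized block
        return slots[index].ptr;
    }
}
```

The compiler-emitted `__d_emutls_get_address` would then just fetch the current thread's `EmuTlsStore` and call `address` with the variable's registered index and size.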

The one thing stopping me though is that I don't think I can implement
this and boost-license it now that I almost know the libgcc
implementation by heart...

-- Johannes



Re: Some GC and emulated TLS questions (GDC related)

2017-07-16 Thread Johannes Pfau via Digitalmars-d
Am Sat, 15 Jul 2017 10:49:39 +
schrieb Joakim <dl...@joakim.fea.st>:

> On Friday, 14 July 2017 at 09:13:26 UTC, Johannes Pfau wrote:
> > Another solution could be to enhance libgcc emutls to allow 
> > custom allocators, then have a special allocation function in 
> > druntime for all D emutls variables. As far as I know there is 
> > no GC heap that is scanned, but not automatically collected?  
> 
> I believe that's what's done with the TLS ranges now, they're 
> scanned but not collected, though they're not part of the GC heap.

Indeed. We used to use GC.addRange for this and this was said to be
slow when using many ranges. So I'm basically asking whether the scan
delegate has got the same problem or whether it can cope with thousands
of small ranges.
A scanned but not collected heap is slightly different, as the GC can
internally treat the allocator memory as one huge memory range. When
allocating using C malloc, every single allocation needs to be scanned
individually. A scan/do-not-collect allocator can probably be built
using the std.experimental.allocator primitives, but that code is not in
druntime.

> 
> > I'd need a way to completely manually manage GC.malloc/GC.free 
> > memory without the GC collecting this memory, but still 
> > scanning this memory for pointers. Does something like this 
> > exist?  
> 
> It doesn't have to be GC.malloc/GC.free, right?  The current 
> DMD-style emutls simply mallocs and frees the TLS data itself and 
> only expects the GC to scan it.

The problem here again is whether this scales properly when using
thousands of non-contiguous memory ranges. DMD-style TLS can allocate
one memory block per thread for all variables. GCC style allocates
one block per thread per variable.

> 
> > Another option is simply using the DMD-style emutls. But as far 
> > as I can see the DMD implementation never supported dynamic 
> > loading of shared libraries? This is something the GCC emutls 
> > support is quite good at: It doesn't have any platform 
> > dependencies (apart from mutexes and some way to store one 
> > thread specific pointer+destructor) and should work with all 
> > kinds of shared library combinations. DMD style emutls also 
> > does not allow sharing TLS variables between D and other 
> > languages.  
> 
> Yes, DMD's emutls was never made to work with loading multiple 
> shared libraries.  As for sharing with other languages without 
> copying the TLS data over, that seems a rare scenario.

Yes, probably the best solution for now is to reimplement GCC style
emutls with shared library support in druntime for all compilers and
forget about C/C++ TLS compatibility. Even if we could get patches into
libgcc it'd take years till all relevant systems have been updated to
new libgcc versions.

> 
> > So I was thinking, if DMD style emutls really doesn't support 
> > shared libraries, maybe we should just clone a GCC-style, 
> > compiler and OS agnostic emutls implementation into druntime? A 
> > D implementation could simply allocate all internal arrays 
> > using the GC. This should be just as efficient as the C 
> > implementation for variable access and interfacing to the GC is 
> > trivial. It gets somewhat more complicated if we want to use 
> > this in betterC though. We also lose C/C++ compatibility though 
> > by using such a custom implementation.  
> 
> It would be a good alternative to have, and you're not going to 
> care in betterC mode, since there's no druntime or GC.  You'd 
> have to be careful how you called TLS data from C/C++, but it 
> could still be done.
> 
> > The rest of this post is a description of the GCC emutls code. 
> > Someone
> > can use this specification to implement a clean-room design D 
> > emutls
> > clone.
> > Source code can be found here, but beware of the GPL license:
> > https://github.com/gcc-mirror/gcc/blob/master/libgcc/emutls.c
> >
> > [...]  
> 
> There is also this llvm implementation, available under 
> permissive licenses and actually documented somewhat:
> 
> https://github.com/llvm-mirror/compiler-rt/blob/master/lib/builtins/emutls.c

Unfortunately it's also not Boost-compatible, so we can't simply port
that code either, as far as I can see?


-- Johannes



Re: Some GC and emulated TLS questions (GDC related)

2017-07-16 Thread Johannes Pfau via Digitalmars-d
Am Fri, 14 Jul 2017 12:47:55 +
schrieb Kagamin :

> Just allocate emutls array in managed heap and pin it somewhere, 
> then everything referenced by it will be preserved.

This is basically the option of replicating GCC-style emutls in
druntime. This is quite simple to implement and you don't even need
special pinning, as the Thread instance object in core.thread can refer
to the TLS array.

This solution can't be implemented in libgcc though, as obviously the
GC is not always available to allocate the arrays in pure C programs ;-)


-- Johannes



Some GC and emulated TLS questions (GDC related)

2017-07-14 Thread Johannes Pfau via Digitalmars-d
As you might know, GDC currently doesn't properly hook up the GC to the
GCC emulated TLS support in libgcc. Because of that, TLS memory is not
scanned on target systems with emulated TLS. For GCC this includes
MinGW, Android (although Google switched to LLVM anyway) and some more
architectures. Proper integration likely needs some modifications in
the libgcc emutls code so I need some more information about the GC to
really propose a good solution.


The main problem is that GCC emutls does not use contiguous memory
blocks. So instead of scanning one range containing N variables we'll
have one range for every single TLS variable per thread.
So assuming we could iterate over all these variables (this would be
an extension required in libgcc), would scanTLSRanges in rt.sections
produce acceptable performance in these cases? Depending on the
number of TLS variables and threads there may be thousands of ranges
to scan.

Another solution could be to enhance libgcc emutls to allow custom
allocators, then have a special allocation function in druntime for all
D emutls variables. As far as I know there is no GC heap that is
scanned, but not automatically collected? I'd need a way to completely
manually manage GC.malloc/GC.free memory without the GC collecting this
memory, but still scanning this memory for pointers. Does something
like this exist?

Another option is simply using the DMD-style emutls. But as far as I can
see the DMD implementation never supported dynamic loading of shared
libraries? This is something the GCC emutls support is quite good at:
It doesn't have any platform dependencies (apart from mutexes and some
way to store one thread specific pointer+destructor) and should work
with all kinds of shared library combinations. DMD style emutls also
does not allow sharing TLS variables between D and other languages.

So I was thinking, if DMD style emutls really doesn't support shared
libraries, maybe we should just clone a GCC-style, compiler and OS
agnostic emutls implementation into druntime? A D implementation could
simply allocate all internal arrays using the GC. This should be just
as efficient as the C implementation for variable access and interfacing
to the GC is trivial. It gets somewhat more complicated if we want to
use this in betterC though. We also lose C/C++ compatibility though by
using such a custom implementation.




The rest of this post is a description of the GCC emutls code. Someone
can use this specification to implement a clean-room design D emutls
clone.
Source code can be found here, but beware of the GPL license:
https://github.com/gcc-mirror/gcc/blob/master/libgcc/emutls.c

Unlike DMD TLS, the GCC TLS code does not put all initialization memory
into one section. In fact, the code is completely runtime and
compile time linker agnostic so it can't use section start/stop
markers. Instead, every TLS variable is handled individually. For every
variable, an instance of __emutls_object is created in the (writeable)
data segment. __emutls_object is defined as:

struct __emutls_object
{
word size;
word align;
union { pointer offset; void* ptr; };
void* templ;
}

The void* ptr is only used as an optimization for single threaded
programs, so I'll ignore this for now in the further description.

Whenever such a variable is accessed, the compiler calls
__emutls_get_address(&(__emutls_object in data segment)). This function
first does an atomic load of the __emutls_object.offset variable. If it
is zero, this particular TLS variable has not been accessed in any
thread before.

If this is the case, first check if the global emutls
initialization function (emutls_init) has been run already, if not run
it (__gthread_once). The initialization function initializes the mutex
variable and creates a thread local variable using __gthread_key_create
with the destructor function set to emutls_destroy.

Back to __emutls_get_address: If offset was zero and we ran the
emutls_init if required, we now lock the mutex. We have a global
variable emutls_size to count the number of total variables. We now
increase the emutls_size counter and atomically set
__emutls_object.offset = emutls_size.

We now have an __emutls_object.offset index assigned. Either using the
procedure described above or maybe we're called at a later stage again
and offset was already != zero. Now we get a per-thread pointer using
__gthread_getspecific. This is a pointer to an __emutls_array which is
simply a size value, followed by size void*. If
__gthread_getspecific returns null this is the first time we access a
TLS variable in this thread. Then allocate a new __emutls_array (size =
emutls_size + 32 + 1(for the size field)) and save using
__gthread_setspecific. If we already had an array for this thread,
check if the __emutls_object.offset index is larger than the array. Then
reallocate the array (double the size; if still too small add +32;
either way add +1). Update using __gthread_setspecific.

Now we have enough 
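The lookup procedure described above can be sketched in C roughly as
follows. This is a simplified, single-threaded illustration (no locking,
no __gthread calls, and the per-thread array is a plain global here);
the names loosely follow the libgcc ones, but it is not the actual
implementation:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef unsigned long word;

struct emutls_object {
    word size;
    word alignment;
    word offset;          /* 1-based index, 0 = not yet assigned */
    const void *templ;    /* initializer image, may be NULL */
};

struct emutls_array {
    word size;            /* number of slots */
    void *data[];         /* one pointer per TLS variable */
};

static word emutls_size;                 /* global variable counter */
static struct emutls_array *tls = NULL;  /* per-thread in the real code */

void *emutls_get_address(struct emutls_object *obj)
{
    /* First access of this variable in any thread: assign an index
     * (the real code takes the global mutex around this). */
    if (obj->offset == 0)
        obj->offset = ++emutls_size;

    word idx = obj->offset;

    if (tls == NULL) {
        /* First TLS access in this thread: allocate the array. */
        word n = emutls_size + 32;
        tls = calloc(1, sizeof(struct emutls_array) + n * sizeof(void *));
        tls->size = n;
    } else if (idx > tls->size) {
        /* Array too small: double it; if still too small, grow to
         * idx + 32. */
        word n = tls->size * 2;
        if (idx > n)
            n = idx + 32;
        struct emutls_array *p =
            calloc(1, sizeof(struct emutls_array) + n * sizeof(void *));
        memcpy(p->data, tls->data, tls->size * sizeof(void *));
        p->size = n;
        free(tls);
        tls = p;
    }

    /* Lazily allocate and initialize the per-thread copy, either from
     * the initializer image or zero-initialized. */
    if (tls->data[idx - 1] == NULL) {
        void *p = malloc(obj->size);
        if (obj->templ)
            memcpy(p, obj->templ, obj->size);
        else
            memset(p, 0, obj->size);
        tls->data[idx - 1] = p;
    }
    return tls->data[idx - 1];
}
```

In the real libgcc code the array lives behind __gthread_getspecific per
thread, which is exactly why the GC never sees these blocks unless it is
taught about them.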

Re: Compile without generating code

2017-07-06 Thread Johannes Pfau via Digitalmars-d
Am Wed, 05 Jul 2017 22:05:53 +
schrieb Stefan Koch :

> On Wednesday, 5 July 2017 at 21:58:45 UTC, Lewis wrote:
> > I was reading 
> > https://blog.rust-lang.org/2017/07/05/Rust-Roadmap-Update.html, 
> > which mentioned that the Rust compiler now has a mode to go 
> > through the motions of compiling and show errors, but without 
> > generating any code. This way you can do a much faster build 
> > while iterating until you have no compile errors, then do a 
> > single build with code generation once everything looks good.
> >
> > [...]  
> 
> We already have it.
> use -o- and it'll disable codegen.

And GDC supports the standard -fsyntax-only GCC flag.


-- Johannes



Re: gdc is in

2017-06-21 Thread Johannes Pfau via Digitalmars-d
Am Wed, 21 Jun 2017 15:44:08 +
schrieb Nordlöw :

> On Wednesday, 21 June 2017 at 15:11:39 UTC, Joakim wrote:
> > the gcc tree:
> >
> > https://gcc.gnu.org/ml/gcc/2017-06/msg00111.html
> >
> > Congratulations to Iain and the gdc team. :)
> >
> > I found out because it's on the front page of HN right now, 
> > where commenters are asking questions about D.  
> 
> Which frontend version (2.0xx) is GDC currently at?

2.068 was the last C++ version; then Iain backported changes to the C++
version to get Phobos 2.071.2 working. So it's effectively a C++ version
of 2.071.2, or maybe slightly newer.

(The main reason for this backporting was to get a C++ version
which provides the same interface/headers as the current D frontend
version. This should allow for 'seamless' switching between the C++ and
D frontends)

-- Johannes



Re: gdc is in

2017-06-21 Thread Johannes Pfau via Digitalmars-d
Am Wed, 21 Jun 2017 15:11:39 +
schrieb Joakim :

> the gcc tree:
> 
> https://gcc.gnu.org/ml/gcc/2017-06/msg00111.html
> 
> Congratulations to Iain and the gdc team. :)
> 
> I found out because it's on the front page of HN right now, where 
> commenters are asking questions about D.

Awesome! And here's our status page for the patch review:

https://wiki.dlang.org/GDC/GCCSubmission

-- Johannes



Re: Life in the Fast Lane (@nogc blog post)

2017-06-17 Thread Johannes Pfau via Digitalmars-d-announce
Am Fri, 16 Jun 2017 13:51:18 +
schrieb Mike Parker :

> I've been meaning to get this done for weeks but have had a 
> severe case of writer's block. The fact that I had no other posts 
> ready to go this week and no time to write anything at all 
> motivated me to make time for it and get it done anyway. My wife 
> didn't complain when I told her I had to abandon our regular 
> bedtime Netflix time block (though she did extract a concession 
> that I have no vote in the next series we watch). Thanks to 
> Vladimir, Guillaume, and Steve, for their great feedback on such 
> short notice. Their assistance kept the blog from going quiet 
> this week.
> 
> The blog:
> https://dlang.org/blog/2017/06/16/life-in-the-fast-lane/
> 
> Reddit:
> https://www.reddit.com/r/programming/comments/6hmlfq/life_in_the_fast_lane_using_d_without_the_gc/
> 
> 

Nice blog post!

> Let’s imagine a hypothetical programmer named J.P. who, for reasons
> he considers valid, has decided he would like to avoid garbage
> collection completely in his D program. He has two immediate options.

I think I might know that hypothetical programmer ;-)

-- Johannes



Re: Fantastic exchange from DConf

2017-05-18 Thread Johannes Pfau via Digitalmars-d

On Thursday, 18 May 2017 at 08:24:18 UTC, Walter Bright wrote:

On 5/17/2017 10:07 PM, Patrick Schluter wrote:

D requires afaict at least a 32 bit system


Yes.



You've said this some times before but never explained why 
there's such a limitation? I've actually used GDC to run code on 
8bit AVR as well as 16bit MSP430 controllers.


The only thing I can think of is 'far pointer' support, but the 
times have changed in this regard as well:


TI implements 16bit or 20bit pointers for their 16 bit MSP 
architecture, but they never mix pointers: [1]


The problem with a "medium" model, or any model where size_t 
and sizeof(void *)
are not the same, is that they technically violate the ISO C 
standard. GCC has
minimal support for such models, and having done some in the 
past, I recommend against it.


AVR for a long time only allowed access to high memory using 
special functions, no compiler support [2]. Nowadays GCC supports 
named address spaces [3] but I think we could implement this 
completely in library code: Basically using a type wrapper 
template should work. The only difficulty is making it work with 
volatile_load and if we can't find a better solution we'll need a 
new intrinsic data_load!(Flags = volatile, addrSpace = 
addrspace(foo),...)(address).



Then there's the small additional 'problem' that slices will be 
more expensive on these architectures: If you already need 2 
registers to fit a pointer and 2 for size_t a slice will need 4 
registers. So there may be some performance penalty but OTOH 
these RISC machines usually have more general purpose registers 
available than X86.


[1] 
https://e2e.ti.com/support/development_tools/compiler/f/343/t/451127

[2] http://www.nongnu.org/avr-libc/user-manual/pgmspace.html
[3] https://gcc.gnu.org/onlinedocs/gcc/Named-Address-Spaces.html

-- Johannes


Re: "Rolling Hash computation" or "Content Defined Chunking"

2017-05-06 Thread Johannes Pfau via Digitalmars-d-learn
Am Mon, 01 May 2017 21:01:43 +
schrieb notna :

> Hi Dlander's.
> 
> Found some interesting reads ([1] [2] [3]) about the $SUBJECT and 
> wonder if there is anything available in the Dland?!
> 
> If yes, pls. share.
> If not, how could it be done (D'ish)
> 
> [1] - 
> https://moinakg.wordpress.com/2013/06/22/high-performance-content-defined-chunking/
>  - 
> https://github.com/moinakg/pcompress/blob/master/rabin/rabin_dedup.c
> 
> [2] - 
> https://restic.github.io/blog/2015-09-12/restic-foundation1-cdc
> 
> [3] - http://www.infoarena.ro/blog/rolling-hash
> 
> Thanks & regards

Interesting concept. I'm not aware of any D implementation but it
shouldn't be difficult to implement this in D:
https://en.wikipedia.org/wiki/Rolling_hash#Cyclic_polynomial

There's a BSD licensed haskell implementation, so a BSD licensed port
would be very easy to implement:
https://hackage.haskell.org/package/optimal-blocks-0.1.0
https://hackage.haskell.org/package/optimal-blocks-0.1.0/docs/src/Algorithm-OptimalBlocks-BuzzHash.html

To make an implementation D'ish, it could either integrate with
std.digest or process input ranges. If you want to use it exclusively
for chunking, your code can be more efficient (process the InputRange
until a boundary condition is met). When using input ranges, prefer some
kind of buffered approach, Range!(ubyte[]) instead of Range!ubyte, for
better performance.

If you really want the rolling hash value for each byte in a sequence
this will be less efficient as you'll have to enter data byte-by-byte.
In this case it's extremely important for performance that your
function can be inlined, so use templates:

ubyte[] data;
foreach(b; data)
{
// This needs to be inlined for performance reasons
rollinghash.put(b);
}
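For reference, the cyclic polynomial ("buzhash") scheme from the
Wikipedia link can be sketched in C like this. The byte-mixing function
and the window size here are arbitrary choices for illustration, not
taken from any of the linked implementations:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

enum { WINDOW = 16 };  /* sliding window size, arbitrary here */

static uint32_t rotl32(uint32_t x, unsigned r)
{
    r %= 32;
    return r ? (x << r) | (x >> (32 - r)) : x;
}

/* Map each byte to a pseudo-random 32-bit value; a real implementation
 * would use a fixed table of precomputed random numbers. */
static uint32_t h(uint8_t b)
{
    uint32_t x = b + 0x9e3779b9u;
    x ^= x >> 16; x *= 0x85ebca6bu;
    x ^= x >> 13; x *= 0xc2b2ae35u;
    x ^= x >> 16;
    return x;
}

/* Hash of a full window, computed from scratch. */
uint32_t buzhash(const uint8_t *win)
{
    uint32_t H = 0;
    for (size_t i = 0; i < WINDOW; i++)
        H = rotl32(H, 1) ^ h(win[i]);
    return H;
}

/* Roll the window one byte forward: drop `out`, append `in`.
 * rotl(H, 1) shifts every term; XORing rotl(h(out), WINDOW) cancels
 * the byte that just left the window. */
uint32_t buzhash_roll(uint32_t H, uint8_t out, uint8_t in)
{
    return rotl32(H, 1) ^ rotl32(h(out), WINDOW) ^ h(in);
}
```

A content-defined chunker would then call buzhash_roll once per input
byte and declare a chunk boundary whenever the hash matches some mask
condition.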

-- Johannes



Re: DConf 2017 livestream

2017-05-04 Thread Johannes Pfau via Digitalmars-d-announce

On Thursday, 4 May 2017 at 08:02:22 UTC, Johannes Pfau wrote:

The DConf 2017 livestream is available here:
https://www.youtube.com/watch?v=MqrJZg6PgnM


Looks like the youtube video ID changes when the stream is 
stopped / restarted.
Look for the livestream on 
https://www.youtube.com/user/sociomantic/feed

or try https://www.youtube.com/user/sociomantic/live instead.


DConf 2017 livestream

2017-05-04 Thread Johannes Pfau via Digitalmars-d-announce
As nobody posted this in the announce group yet, I'll just repeat 
this information here:


The DConf 2017 livestream is available here:
https://www.youtube.com/watch?v=MqrJZg6PgnM


See the DLangConf twitter account for more information:
https://twitter.com/DLangConf


Re: What are we going to do about mobile?

2017-05-01 Thread Johannes Pfau via Digitalmars-d
Am Mon, 1 May 2017 14:44:35 +0200
schrieb Iain Buclaw via Digitalmars-d :

> On 1 May 2017 at 14:40, Iain Buclaw  wrote:
> > So that's 3 build servers - 1x ARM7, 1x ARM8, and 1x x86. ;-)  
> 
> With the latter also testing all crosses we can do (there are 18
> different gdc cross-compilers in Ubuntu, for 12 distinct
> architectures).

BTW is there some documentation on how to update / rebuild these
debian / ubuntu packages with updated GDC sources? 

-- Johannes



Re: Compare boost::hana to D

2017-04-22 Thread Johannes Pfau via Digitalmars-d
Am Wed, 19 Apr 2017 18:02:46 +
schrieb Adrian Matoga :

> On Wednesday, 19 April 2017 at 08:19:52 UTC, Ali Çehreli wrote:
> > I'm brushing up on my C++ to prepare for my C++Now 2017 
> > presentation[1]. boost::hana is an impressive library that 
> > overlaps with many D features:
> >
> >   
> > http://www.boost.org/doc/libs/1_64_0_b2/libs/hana/doc/html/index.html
> >
> > Have you used boost::hana? What are your thoughts on it?
> >
> > And please share your ideas for the presentation. There has 
> > been threads here about C++ closing the gap. Does D still bring 
> > competitive advantage or is it becoming irrelevant? (Obviously, 
> > some think its irrelevant already.) I'm trying to collect 
> > opinions... :)
> >
> > Thank you,
> > Ali
> >
> > [1] 
> > http://cppnow.org/2017-conference/announcements/2017/04/09/d-keynote.html  
> 
> I was at C++ Meeting 2016 in Berlin, where Louis Dionne talked 
> about hana in his keynote [1]. I've summarized my feelings in a 
> blog post [2]. In short, you can do the same tricks in D, but 
> frequently there's an idiomatic way to express the same thing 
> just as concisely without them.
> And of course, feel free to use any part of my post in your talk. 
> :)
> 
> [1] https://www.youtube.com/watch?v=X_p9X5RzBJE
> [2] https://epi.github.io/2017/03/18/less_fun.html
> 

OT, but is there any benefit to identifying events with strings? As long
as you use compile-time-only events I'd prefer a syntax as in
https://github.com/WebFreak001/EventSystem
https://github.com/WebFreak001/EventSystem

(one benefit is that it's 100% IDE autocomplete compatible)

I guess if you want runtime registration of events identifying by name
is useful. But then you also somehow have to encode the parameter types
to make the whole thing safe...

-- Johannes



msgpack-ll: Low level @nogc, nothrow, @safe, pure, betterC MessagePack (de)serializer

2017-04-17 Thread Johannes Pfau via Digitalmars-d-announce
Hello list,

msgpack-ll is a new low-level @nogc, nothrow, @safe, pure and betterC
compatible MessagePack serializer and deserializer. The library was
designed to avoid any external dependencies and handle the low-level
protocol details only. It only depends the phobos bigEndianToNative and
nativeToBigEndian templates from std.bitmanip. It uses an optimized API
to avoid any runtime bounds checks and still be 100% memory safe.

The library doesn't have to do any error handling or buffer management
and never dynamically allocates memory. It's meant as a building block
for higher level serializers (e.g. vibeD data.serialization) or as
a high-speed serialization library. The github README shows a quick
overview of the generated ASM for serialization and deserialization.

dub: http://code.dlang.org/packages/msgpack-ll
github: https://github.com/jpf91/msgpack-ll
api: https://jpf91.github.io/msgpack-ll/msgpack_ll.html

-- Johannes



Re: What are we going to do about mobile?

2017-04-16 Thread Johannes Pfau via Digitalmars-d
Am Sun, 16 Apr 2017 10:13:50 +0200
schrieb Iain Buclaw via Digitalmars-d :

> 
> I asked at a recent D meetup about what gitlab CI used as their
> backing platform, and it seems like it's a front for TravisCI.  YMMV,
> but I found the Travis platform to be too slow (it was struggling to
> even build GDC in under 40 minutes), and too limiting to be used as a
> CI for large projects.

That's probably for the hosted gitlab solution though. For self-hosted
gitlab you can set up custom machines as gitlab workers. The biggest
drawback here is the missing GitHub integration.

> 
> Johannes, what if I get a couple new small boxes, one ARM, one
> non-descriptive x86.  The project site and binary downloads could then
> be used to the non-descriptive box, meanwhile the ARM box and the
> existing server can be turned into a build servers - there's enough
> disk space and memory on the current server to have a at least half a
> dozen build environments on the current server, testing also i386 and
> x32 would be beneficial along with any number cross-compilers
> (testsuite can be ran with runnable tests disabled).

Sounds like a plan. What CI server should we use though?

I tried concourse-ci which seems nice at first, but it's too
opinionated to be useful for us (no worker cache, no way for newer
commits to auto-cancel builds for older commits, ...)


-- Johannes



Re: Compilation problems with GDC/GCC

2017-04-16 Thread Johannes Pfau via Digitalmars-d-learn
Am Sat, 15 Apr 2017 14:01:51 +
schrieb DRex :

> On Saturday, 15 April 2017 at 13:08:29 UTC, DRex wrote:
> > On Saturday, 15 April 2017 at 13:02:43 UTC, DRex wrote:  
> >> On Saturday, 15 April 2017 at 12:45:47 UTC, DRex wrote:  
> >
> > Update to the Update,
> >
> > I fixed the lib failing to open by copying it to the location 
> > of my sources, and setting the ld to use libraries in that 
> > folder, however I am still getting the aforementioned undefined 
> > references :/ ..  
> 
> Okay, so I decided to link using GDC in verbose mode to figure 
> out what gdc is passing to gcc/the linker, and I copied the 
> output and ld linked the files, but I still have a problem.  The 
> program is linked and created but cant run, and ld produces the 
> following error:
> 
> ld: error in /usr/lib/gcc/x86_64-linux-gnu/5/collect2(.eh_frame); 
> no .eh_frame_hdr table will be created.
> 
> I haven't the foggiest what this means, and no Idea how to fix 
> it.  Does anyone know how to fix this issue?
> 
> 

Are there any additional warnings? Maybe try running in verbose
mode to get some more information?

-- Johannes



Re: What are we going to do about mobile?

2017-04-16 Thread Johannes Pfau via Digitalmars-d
Am Sat, 15 Apr 2017 09:52:49 +
schrieb Johan Engelen :

> I'd be happy to use the Pi3 as permanent tester, if the risks of 
> a hacker intruding my home network are manageable ;-)
> 

If you want to be sure use a cheap DMZ setup.

VLAN based: 
Connect your PI to some switch supporting VLAN and use an untagged port
assigned to one VLAN (i.e. the raspberry port only communicates in one
VLAN). Then if you use an OpenWRT/LEDE or similar main router simply set
up a custom firewall zone for that VLAN and disable routing between this
zone and your home LAN zone.

If you don't have a capable main router there's another solution: Buy a
cheap wr841n router for 15€
(https://wiki.openwrt.org/toh/tp-link/tl-wr841nd)
* install LEDE (lede-project.org)
* connect the router to your home lan and the raspberry pi
  * home network: DHCP client, wan
  * raspberry pi: DHCP Server, lan
* Adjust firewall to drop packets to/from your local home LAN range
  (manually or using bcp38 and luci-app-bcp38 packages)


-- Johannes



Re: What are we going to do about mobile?

2017-04-16 Thread Johannes Pfau via Digitalmars-d
Am Sat, 15 Apr 2017 15:11:08 +
schrieb Laeeth Isharc :

> 
> Not sure how much memory ldc takes to build.  If it would be 
> helpful for ARM I could contribute a couple of servers on 
> scaleway or similar.  

At least for GDC, building the compiler on low-end platforms is too
resource-demanding (though the times when std.datetime needed > 2GB RAM
to compile are gone for good, IIRC). I think cross-compiler testing is
the solution here, but that involves some work on the DMD test runner.

> Gitlab has test runners built in, at least for enterprise version 
> (which is not particularly expensive) and we have been happy with 
> that.
> 
> Laeeth
> 

The free version has test runners as well. What bothers me about gitlab
is the github integration: gitlab-CI only works with a gitlab instance,
so you have to mirror the github repository to gitlab. This is usually
not too difficult, but you have to be careful to make pull request
testing and similar more complex features work correctly. I also think
they don't have anything ready to push CI status to github.


-- Johannes



Re: Compilation problems with GDC/GCC

2017-04-14 Thread Johannes Pfau via Digitalmars-d-learn
Am Fri, 14 Apr 2017 13:03:22 +
schrieb DRex :

> On Friday, 14 April 2017 at 12:01:39 UTC, DRex wrote:
> >
> > the -r option redirects the linked object files into another 
> > object file, so the point being I can pass a D object and a C 
> > object to the linker and produce another object file.
> >
> > As for linking D files, do you mean passing the druntime 
> > libraries to ld?  I used gdc -v and it gave me a whole bunch of 
> > info, it showed the an entry 'LIBRARY_PATH' which contains the 
> > path to libgphobos and libgdruntime as well as a whole bunch of 
> > other libs, i'm assuming that is what you are telling me to 
> > pass to the linker?  
> 
> I have tried passing libgphobos2.a and libgdruntime.a (and at one 
> point every library in the folder I found those two libs in) to 
> ld to link with my D source, but it still throws a billion 
> 'undefined reference' errors.
> 
> I really need help here, I have tried so many different things 
> and am losing my mind trying to get this to work.
> 
> the problem I have with passing the -r option to ld through gdc 
> is that -Wl is looking for libgcc_s.a which doesnt even exist on 
> the computer, which is annoying

GDC should generally only need to link to -lgdruntime (and -lgphobos
if you need it). However, if you really link using ld you'll have to
provide the C startup files, -lc and similar stuff for C as well, which
gets quite complicated.

You'll have to post the exact commands you used and some
of the missing symbol names so we can give better answers.

-- Johannes



Re: Deduplicating template reflection code

2017-04-14 Thread Johannes Pfau via Digitalmars-d-learn
Am Fri, 14 Apr 2017 13:41:45 +
schrieb Moritz Maxeiner <mor...@ucworks.org>:

> On Friday, 14 April 2017 at 11:29:03 UTC, Johannes Pfau wrote:
> >
> > Is there some way to wrap the 'type selection'? In pseudo-code 
> > something like this:
> >
> > enum FilteredOverloads(API) = ...
> >
> > foreach(Overload, FilteredOverloads!API)
> > {
> > 
> > }  
> 
> Sure, but that's a bit more complex:
> 
> ---
> [...] // IgnoreUDA declaration
> [...] // isSpecialFunction declaration
> 
> ///
> template FilteredOverloads(API)
> {
>  import std.traits : hasUDA, isSomeFunction, 
> MemberFunctionsTuple;
>  import std.meta : staticMap;
>  import std.typetuple : TypeTuple;
> 
>  enum derivedMembers = __traits(derivedMembers, API);
> 
>  template MemberOverloads(string member)
>  {
>  static if (__traits(compiles, __traits(getMember, API, 
> member)))
>  {
>  static if (isSomeFunction!(__traits(getMember, API, 
> member))
> && !hasUDA!(__traits(getMember, API, 
> member), IgnoreUDA)
> && !isSpecialFunction!member) {
>  alias MemberOverloads = 
> MemberFunctionsTuple!(API, member);
>  } else {
>  alias MemberOverloads = TypeTuple!();
>  }
>  } else {
>  alias MemberOverloads = TypeTuple!();
>  }
>  }
> 
>  alias FilteredOverloads = staticMap!(MemberOverloads, 
> derivedMembers);
> }
> 
> //pragma(msg, FilteredOverloads!API);
> foreach(Overload; FilteredOverloads!API) {
>  // function dependent code here
> }
> ---
> 
> Nested templates and std.meta are your best friends if this is 
> the solution you prefer :)

Great, thanks that's exactly the solution I wanted. Figuring this out by
myself is a bit above my template skill level ;-)


-- Johannes



Re: Deduplicating template reflection code

2017-04-14 Thread Johannes Pfau via Digitalmars-d-learn
Am Fri, 14 Apr 2017 08:55:48 +
schrieb Moritz Maxeiner :

> 
> mixin Foo!(API, (MethodType) {
> // function dependent code here
> });
> foo();
> ---
> 
> Option 2: Code generation using CTFE
> 
> ---
> string genFoo(alias API, string justDoIt)()
> {
>  import std.array : appender;
>  auto code = appender!string;
>  code.put(`[...]1`);
>  code.put(`foreach (MethodType; overloads) {`);
>  code.put(justDoIt);
>  code.put(`}`);
>  code.put(`[...]2`);
>  return code.data;
> }
> 
> mixin(genFoo!(API, q{
>  // function dependent code here
> })());
> ---
> 
> Personally, I'd consider the second approach to be idiomatic, but 
> YMMW.

I'd prefer the first approach, simply to avoid string mixins. I think
these can often get ugly ;-)

Is there some way to wrap the 'type selection'? In pseudo-code something
like this:

enum FilteredOverloads(API) = ...

foreach(Overload, FilteredOverloads!API)
{

}
-- Johannes



Deduplicating template reflection code

2017-04-14 Thread Johannes Pfau via Digitalmars-d-learn
I've got this code duplicated in quite some functions:

-
foreach (member; __traits(derivedMembers, API))
{
// Guards against private members
static if (__traits(compiles, __traits(getMember, API, member)))
{
static if (isSomeFunction!(__traits(getMember, API, member))
&& !hasUDA!(__traits(getMember, API, member), IgnoreUDA)
&& !isSpecialFunction!member)
{
alias overloads = MemberFunctionsTuple!(API, member);

foreach (MethodType; overloads)
{
// function dependent code here
}
}
}
}


What's the idiomatic way to refactor / reuse this code fragment?

-- Johannes



Re: The D ecosystem in Debian with free-as-in-freedom DMD

2017-04-12 Thread Johannes Pfau via Digitalmars-d
Am Wed, 12 Apr 2017 07:42:42 +
schrieb Martin Nowak :

> On Monday, 10 April 2017 at 16:12:35 UTC, Iain Buclaw wrote:
> > Last time someone else looked, it seemed like LDC and DMD make 
> > use of SOVERSION, but do so in an incorrect manner.  
> 
> You know what exactly is the problem? Any suggestion what to use 
> instead?
> 
> It's currently using libphobos2.so.0.74, where using major 
> version 0 to mean unstable might be a misapplication of semver 
> (http://semver.org/#spec-item-4) for SONAME.

I still haven't found any definitive documentation about this, but it
seems Linux shared library versioning essentially works like this:
There's a major and a minor number. There's sometimes a patch version,
but there's no conclusive documentation. Reading the ldconfig source
code, you can have as many version levels as you want [3] (essentially
everything after the major number is treated as one minor version; it
only affects how minor versions are compared). But there are different
ldconfig implementations (Linux, BSD), so I don't know if multi-level
minor versions are portable.

* Major version reflects ABI level. A new major version can break ABI
  or add new stuff to ABI.
* Minor versions can only extend the ABI of the major version but
  should not break any ABI.
* Micro / Patch version is mostly unused in ldconfig. It only affects
  comparing minor versions, i.e. when you have libphobos.so.74.0.1 and
  libphobos.so.74.0.2 ldconfig will link
  libphobos.so.74=>libphobos.so.74.0.2

Filename format: lib[name].so.[major].[minor][.patch]
Soname format: always lib[name].so.[major]
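Comparing those multi-level minor suffixes, as ldconfig does in [3],
boils down to a segment-wise numeric comparison. A toy C sketch of the
idea, assuming purely numeric, dot-separated suffixes (the real glibc
code also handles non-digit characters):

```c
#include <assert.h>
#include <stdlib.h>

/* Compare two dotted version suffixes (e.g. "74.0.1" vs "74.0.2")
 * segment by segment, numerically. Returns <0, 0 or >0 like strcmp.
 * Assumes purely numeric segments; a non-digit character would stall
 * strtol, so this is only an illustration of the comparison order. */
int version_cmp(const char *a, const char *b)
{
    while (*a || *b) {
        char *ea, *eb;
        long na = strtol(a, &ea, 10);  /* missing segments read as 0 */
        long nb = strtol(b, &eb, 10);
        if (na != nb)
            return na < nb ? -1 : 1;
        a = (*ea == '.') ? ea + 1 : ea;  /* skip the dot separator */
        b = (*eb == '.') ? eb + 1 : eb;
    }
    return 0;
}
```

So version_cmp("74.0.1", "74.0.2") is negative, which is why ldconfig
would point the libphobos.so.74 symlink at the .2 file.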

It is possible to install and use multiple major versions. Every major
version will always use the last installed minor version. (The
distribution will manage symlinks for libfoo.[major] to
libfoo.[major].[minor] for the largest minor version). Additionally a
libfoo.so is installed (for linking / development only, might even be
in -dev packages) to point to the latest libfoo.[major] symlink.

The libfoo.[major] symlink is used when linking with -lfoo. The
dependency encoded in the executable will use the soname though, so it
will encode libfoo.[major]. If you install a new major library version,
all existing executables will continue to use the major version
hardcoded in the executable. If you update a major version to a new
minor version, all executables using the major version soname will use
the new minor version.

This means:
* Increase major every time you break ABI
* Increase some minor level every time you only extend ABI

So DMD should not keep the major version fixed as 0 (every time you
update libphobos you break all existing binaries, as you break the ABI
of libphobos.so.0).

In GDC we use libtool, which encodes versions like this [1][2]:
libgphobos.so.[major].0.[release], e.g. libgphobos.so.74.0.2. This is
not 100% safe if a minor release breaks ABI though.

BTW: Interestingly, even with these complicated rules you can end up in
a situation where versioning does not work: If you link application APP
against libfoo.so on a system with libfoo.so.1.2, the soname will only
encode libfoo.so.1. Now ship the executable to a system with
libfoo.so.1.1 and you may have missing symbols due to the extended ABI
in libfoo.so.1.2.
TLDR: Downgrading libraries is not safe with this versioning approach.



[1] https://autotools.io/libtool/version.html
[2]
https://www.gnu.org/software/libtool/manual/html_node/Libtool-versioning.html

ldconfig is responsible for maintaining the symlinks; here's how it
determines which version of a library is newer:

[3]
https://github.com/lattera/glibc/blob/master/elf/ldconfig.c#L939
https://github.com/lattera/glibc/blob/master/elf/dl-cache.c#L138


Re: The D ecosystem in Debian with free-as-in-freedom DMD

2017-04-11 Thread Johannes Pfau via Digitalmars-d
On Tue, 11 Apr 2017 07:44:45 -0700, Jonathan M Davis via
Digitalmars-d wrote:

> On Tuesday, April 11, 2017 14:33:01 Matthias Klumpp via Digitalmars-d
> wrote:
> > On Tuesday, 11 April 2017 at 14:26:37 UTC, rikki cattermole wrote:  
> > > [...]
> > > The problem with /usr/include/d is that is where .di files
> > > would be located not .d. This would also match up with the
> > > c/c++ usage of it.  
> >
> > When I asked about this a while back, I was told to just install
> > the sources into the include D as "almost nobody uses .di files
> > except for proprietary libraries" (and do those even exist?).
> > But in any case, any path would be fine with me as long as people
> > can settle on using it - `/usr/share/d` would be available ^^  
> 
> Putting .di files in the include directory makes sense when compared
> to C++, but it's definitely true that they're used quite rarely. They
> gain you very little and cost you a lot (e.g. no CTFE). But unless
> someone were looking to put both .di and .d files for the same module
> on the same system, it wouldn't really be an issue to put them both
> in the include directory. I would expect open source libraries to
> use .di files very rarely though.
> 
> - Jonathan M Davis
> 

I'd think of .d files as a superset of .di files. There's no reason to
install both (assuming the same library version), and having the .d
files allows better cross-module inlining, so they are preferred.

Of course for proprietary libraries .di files are required. But
nevertheless .di and .d can be mixed in /usr/include/d.

The only downside is that inlining from /usr/include could cause
licensing problems (e.g. the LGPL only allows linking; I'm not sure how
inlining would be affected, but it seems they have an exception for
inlining).


-- Johannes



Re: The D ecosystem in Debian with free-as-in-freedom DMD

2017-04-11 Thread Johannes Pfau via Digitalmars-d
On Tue, 11 Apr 2017 14:21:57 +, Matthias Klumpp wrote:

> can be used by Automake 
> (native),

Do you maintain D support for automake? I wrote some basic D support
for autoconf and libtool
(https://github.com/D-Programming-GDC/GDC/tree/master/libphobos/m4) but
no automake support except for some hacks in
https://github.com/D-Programming-GDC/GDC/blob/master/libphobos/d_rules.am

I guess I should upstream this some time.

-- Johannes



Re: dmd Backend converted to Boost License

2017-04-07 Thread Johannes Pfau via Digitalmars-d-announce
On Fri, 7 Apr 2017 08:14:40 -0700, Walter Bright wrote:

> https://github.com/dlang/dmd/pull/6680
> 
> Yes, this is for real! Symantec has given their permission to
> relicense it. Thank you, Symantec!

Great news! Maybe someone could notify http://phoronix.com . They've
blogged about D before and reach quite a few Linux users and developers.


-- Johannes



Re: GDC and shared libraries

2017-04-07 Thread Johannes Pfau via Digitalmars-d
On Fri, 07 Apr 2017 17:29:46 +0100, Russel Winder via Digitalmars-d
wrote:

> At GDC 5.3.1 there was no support for shared libraries, or, at least,
> so I believe and encoded in the SCons tests. Is there a version of GDC
> from which shared libraries are supported?
> 

Unfortunately the GCC version (5.3.1) doesn't say much. According to
the GDC changelog, shared library support was finished in September last
year. That corresponds to frontend version >= 2.067, though we never
released 2.067 (there was never a stable 2.067 revision). The 2.068.2
releases should have full shared library support, and of course the
current master gdc-6/5/4.9/4.8 branches have full support as well.


-- Johannes



Re: Proposal: Exceptions and @nogc

2017-04-04 Thread Johannes Pfau via Digitalmars-d
On Mon, 03 Apr 2017 14:31:39 -0700, Jonathan M Davis via
Digitalmars-d wrote:

> On Monday, April 03, 2017 14:00:53 Walter Bright via Digitalmars-d
> wrote:
> > The idea of this proposal is to make a nogc program much more
> > achievable. Currently, in order to not link with the GC, you can't
> > use exceptions (or at least not in a memory safe manner). A
> > solution without memory safety is not acceptable.  
> 
> Yeah, the simple fact that you can't allocate exceptions in @nogc
> code is crippling to @nogc, and a lot of code that could otherwise be
> @nogc can't be because of exceptions - though the exception message
> poses a similar problem (especially if you'd normally construct it
> with format), and I don't know how you get around that other than not
> using anything more informative than string literals. Unless I missed
> something, this proposal seems to ignore that particular issue.
> 
> - Jonathan M Davis
> 

Allocate the string using an Allocator and free it in the Exception's
~this?

This has to be integrated with the copying scheme somehow, though, so
you'll probably need some kind of reference counting for classes again,
or duplicate the string on every copy.

-- Johannes



Re: Exceptions in @nogc code

2017-04-02 Thread Johannes Pfau via Digitalmars-d
On Sun, 02 Apr 2017 00:09:09 +, Adam D. Ruppe wrote:

> On Saturday, 1 April 2017 at 14:54:21 UTC, deadalnix wrote:
> > The problem you want to address is not GC allocations, it is GC 
> > collection cycles. If everything is freed, then there is no GC 
> > problem. not only this, but this is the only way GC and nogc 
> > code will interact with each others.  
> 
> Amen. Moreover, for little things like exceptions, you can 
> probably also just hack it to not do a collection cycle when 
> allocating them.

I do not want GC _allocation_ for embedded systems (don't even
want to link in the GC or GC stub code) ;-)


-- Johannes



Re: So no one is using Amazon S3 with D, why?

2017-03-15 Thread Johannes Pfau via Digitalmars-d
On Wed, 15 Mar 2017 08:27:23 +, Suliman wrote:

> On Tuesday, 14 March 2017 at 20:21:44 UTC, aberba wrote:
> > Amazon S3 seem like a common solution for object storage these 
> > days[1] but I'm seeing almost no activity in this area (stable 
> > native D API). Why?
> >
> > [1] https://trends.builtwith.com/cdn/Amazon-S3  
> 
> How much the lowest vibed ready instance cost? I am looking for a 
> cheapest solution for site.

If you really want the _cheapest_ solution you can probably run vibe.d
on a small VPS with 64 or 128 MB RAM.

See https://www.lowendtalk.com/ and https://lowendbox.com/

I've used https://securedragon.net/ (12USD/year) and https://buyvm.net
(15USD/year) for VPN purposes and never had any problem.

If you need more power there are also often cheaper options than cloud
hosting. For example, I use this as a GDC build server:
https://www.lowendtalk.com/discussion/97773/vapornode-black-friday-7-4gb-kvm-lxc-in-tampa

(However, as these are shared services you have to carefully look at
network throughput / CPU usage guarantees)

-- Johannes



Re: GDC options

2017-03-12 Thread Johannes Pfau via Digitalmars-d-learn
On Sun, 12 Mar 2017 12:09:01 +, Russel Winder via
Digitalmars-d-learn wrote:

> Hi,
> 
> ldc2 has the -unittest --main options to compile a file that has
> unittests and no main so as to create a test executable. What causes
> the same behaviour with gdc?
> 

https://github.com/D-Programming-GDC/GDMD/tree/dport

gdmd -unittest --main

The unittest flag for GDC is -funittest, but there's no flag to
generate a main function. gdmd generates a temporary file with a main
function to implement this.

-- Johannes



Re: Of the use of unpredictableSeed

2017-03-06 Thread Johannes Pfau via Digitalmars-d
On Mon, 06 Mar 2017 22:04:44 -0500, "Nick Sabalausky (Abscissa)"
wrote:

> On 03/06/2017 05:19 PM, sarn wrote:
> > On Monday, 6 March 2017 at 10:12:09 UTC, Shachar Shemesh wrote:  
> >> Excuse me if I'm asking a trivial question. Why not just seed it
> >> from /dev/urandom? (or equivalent on non-Linux platforms. I know
> >> at least Windows has an equivalent).
> >>
> >> Shachar  
> >
> > One reason is that /dev/urandom isn't always available, e.g., in a
> > chroot.  Sure, these are corner cases, but it's annoying when stuff
> > like this doesn't "just work".  
> 
> I don't claim to be any sort of linux expert or anything, but doesn't 
> chroot have a reputation for being a bit of a finicky, leaky
> abstraction anyway? I haven't really used them, but that's been my
> understanding...?

chroots were used for security purposes in the past (chrooting an FTP
server and similar things) and they're indeed a leaky abstraction in
that case.

However, chroots can also be used to 'chroot into another OS'. E.g.
people sometimes chroot into the OS on a harddisk from a live CD. This
is sometimes useful to repair a system, install packages, etc.

-- Johannes



Optimizing / removing inlined or ctfe-only private functions

2017-03-04 Thread Johannes Pfau via Digitalmars-d
Here's a recent Stack Overflow thread where somebody asked why GDC is
not able to remove completely inlined or unused module-private
functions: http://stackoverflow.com/q/42494205/471401
In C it's possible to mark a function as static, and the compiler won't
emit an externally callable symbol into the object file if it's not
necessary (a function could still be required if its address is taken
somewhere in the module).
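For comparison, the C behaviour mentioned above can be sketched like this (file and symbol names are made up for illustration; requires a C compiler and binutils):

```shell
# With internal linkage (`static`), an optimizing C compiler can inline the
# helper and drop it from the object file entirely.
cat > /tmp/static_demo.c <<'EOF'
static int fooPrivate(void) { return 42; }   /* internal linkage */
int fooPublic(void) { return fooPrivate(); }
EOF
cc -O2 -c /tmp/static_demo.c -o /tmp/static_demo.o
nm /tmp/static_demo.o    # fooPublic is listed; fooPrivate is gone at -O2
```

This is exactly what D's module-private functions cannot get today, because (as the template example below shows) the compiler cannot prove they are unreachable from other object files.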

Turns out we also have a bug report for this:
https://issues.dlang.org/show_bug.cgi?id=6528

One thing I was wondering about, though, which is not yet mentioned in
the bug report:

// a.d
private void fooPrivate() {}

/*template*/ void fooPublic(string func = "fooPrivate")()
{
    mixin(func ~ "();");
}


When compiling a.d we haven't analyzed the fooPublic template, and the
example shows why we can't know at all which private functions could
be called from a template. As the template is instantiated into another
object file (e.g. module b.d), we can't know when compiling a.d that
fooPrivate is actually required.

So does that mean removing private functions in D is completely
impossible, as we can't know whether a function is unused? People
sometimes refer to the linker as a solution, but if a.d is in a shared
library this won't work either.

This seems to be a problem especially for CTFE-only functions, as it
means, for example, that any such function in phobos (e.g. used for
string creation for mixins) bloats the phobos library.

It's interesting to think about template instances here as well: if a
template instance is completely inlined in a module, do we have to keep
the function in the object file? AFAICS no, as the template should be
re-instantiated if used in a different module, but I don't know the
template <=> object file rules in detail. Right now this means we could
get lots of template instances in the phobos shared library that are
only used in CTFE:


import std.conv;

private string fooPrivate(int a)
{
    return `int b = ` ~ to!string(a) ~ ";";
}

mixin(fooPrivate(42));

https://godbolt.org/g/VW8yLr

Any idea how to measure the impact of this on the shared libphobos
binary? We can probably get an estimate by counting all template
instances that are only referenced by private functions which are
themselves never referenced...

Any idea how to solve this problem? I think the same problem was
mentioned in the DLL-support context, as it implies we also have to
export private functions from modules for templates to work. Was there
some kind of solution / discussion? I think I remember something about
marking `private` functions as `export private` instead?

-- Johannes


