Re: DIP 1014
@Manu, @Jonathan M Davis

> GNU's std::string implementation stores an interior pointer! >_<

it's not just GNU's std::string; it can crop up in other places, see
https://github.com/Syniurge/Calypso/issues/70 in opencv (cv::MatStep)

On Wed, Oct 3, 2018 at 8:10 PM Shachar Shemesh via Digitalmars-d wrote:
>
> On 03/10/18 23:25, Stanislav Blinov wrote:
> > It *is* true when the type doesn't have a destructor. Extending that to
> > a move hook, it will also be true because destruction will be elided.
> > I know what you're talking about, that happens for types that have
> > destructors.
>
> No, destructors have nothing to do with it, as well they shouldn't. The
> whole point of D moving structs around is that no destruction is needed.
> It took me a while to figure out why your program does appear to work.
> At first I thought it was because of inlining, but that was wrong.
>
> The reason your test case works (sometimes, if you don't breath on it
> too heavily) is because the object is actually moved twice. Once when
> returning from the function into the variable, and another when copied
> into opAssign's argument. This results in it returning to its original
> address.
>
> If you do *anything* to that program, and that includes even changing
> its compilation flags (try enabling inlining), it will stop working.
>
> You should have known that when you found out it doesn't work on ldc:
> ldc and dmd use the same front-end. If you think something works
> fundamentally different between the two, you are probably wrong.
>
> To verify my guess is right, I tried the following change: add to
> createCounter and createCounterNoNRV in your original program (no
> destructors) the following two lines:
> int a;
> write(a);
>
> You have added another local variable to the functions, but otherwise
> changed absolutely nothing. You will notice your program now has an offset.
>
> Shachar
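For context, here is a minimal sketch (hypothetical `SelfRef` type, not taken from the thread) of why an interior pointer is incompatible with D's assumption that structs can be moved by a plain bitwise copy:

```d
import core.stdc.string : memcpy;
import std.stdio;

struct SelfRef
{
    int value;
    int* p; // interior pointer: points into the struct itself

    this(int v) { value = v; p = &value; }
}

void main()
{
    auto a = SelfRef(42);
    SelfRef b = void;
    // This is essentially what a D move does: a bitwise copy,
    // with no hook that would let the type fix itself up.
    memcpy(&b, &a, SelfRef.sizeof);
    assert(b.p == &a.value); // b's interior pointer still targets a!
    writeln("after the move, b.p dangles into the source object");
}
```

This is exactly the fixup opportunity a move hook (as proposed in DIP 1014) would provide.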
Re: D vs nim
i think the explanation in
https://nim-lang.org/docs/manual.html#statements-and-expressions-when-statement
is pretty clear. In any case you can see for yourself: `nim c -r main.nim`

```nim
proc fun(a: int): auto = a*a
static: # makes sure block evaluated at CT
  when fun(1) == 1:
    echo "ok1"
  when fun(2) == 2:
    echo "ok2"
```

prints ok1

On Fri, May 4, 2018 at 9:40 AM Mark via Digitalmars-d
<digitalmars-d@puremagic.com> wrote:
> On Thursday, 3 May 2018 at 23:09:34 UTC, Timothee Cour wrote:
> > nim supports static if (`when`) + CTFE. A simple google search
> > or searching
> > would've revealed that.
> >
> > On Thu, May 3, 2018 at 3:20 PM Mark via Digitalmars-d <
> > digitalmars-d@puremagic.com> wrote:
> >
> >> On Thursday, 3 May 2018 at 20:57:16 UTC, Dennis wrote:
> >> > On Thursday, 3 May 2018 at 19:11:05 UTC, Mark wrote:
> >> >> Funnily, none of these languages have a "static if"
> >> >> construct, nor do Rust, Swift and Nim. Not one that I could
> >> >> find, anyway.
> >> >
> >> > What qualifies under "static if"? Because Rust, Swift and
> >> > Nim do have conditional compilation.
> >> > https://doc.rust-lang.org/book/first-edition/conditional-compilation.html
> >> > https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/Statements.html (conditional compilation blocks)
> >> > https://nim-lang.org/docs/manual.html#statements-and-expressions-when-statement
> >>
> >> Fair enough. I should have written "static if + CTFE".
>
> The little information on the official site describes `when` more
> like #ifdef in C than an actual static if. I also went over a few
> dozens of modules in the standard library and the statement seems
> to be rarely used, and when it does it's usually in an #ifdef-ish
> context (like platform specific code).
> Perhaps Nim's support for conditional compilation is as powerful
> as D's is, but you can see why my impression is currently to the
> contrary.
Re: D vs nim
nim supports static if (`when`) + CTFE. A simple google search or searching
would've revealed that.

On Thu, May 3, 2018 at 3:20 PM Mark via Digitalmars-d
<digitalmars-d@puremagic.com> wrote:
> On Thursday, 3 May 2018 at 20:57:16 UTC, Dennis wrote:
> > On Thursday, 3 May 2018 at 19:11:05 UTC, Mark wrote:
> >> Funnily, none of these languages have a "static if" construct,
> >> nor do Rust, Swift and Nim. Not one that I could find, anyway.
> >
> > What qualifies under "static if"? Because Rust, Swift and Nim
> > do have conditional compilation.
> > https://doc.rust-lang.org/book/first-edition/conditional-compilation.html
> > https://developer.apple.com/library/content/documentation/Swift/Conceptual/Swift_Programming_Language/Statements.html (conditional compilation blocks)
> > https://nim-lang.org/docs/manual.html#statements-and-expressions-when-statement
>
> Fair enough. I should have written "static if + CTFE".
Re: D vs nim
@helxi I invite you to contribute PR's to
https://github.com/timotheecour/D_vs_nim/ where I discuss feature parity and
how to translate concepts from D to nim wherever it makes sense.

On Fri, Apr 13, 2018 at 4:12 PM, helxi via Digitalmars-d wrote:
> On Friday, 10 April 2015 at 18:42:20 UTC, Timothee Cour wrote:
>>
>> Nim looks very promising.
>> Is there any comprehensive comparison against D somewhere (if possible
>> recent)?
>
> Nim is way more expressive than D afaik. Consider the following imaginary
> function:
>
> proc fn[A : int | float; N; B : seq[A] | DoublyLinkedList[A] | array[N, A] |
>         set[A]](x: B) : int =
>   return x.len() + 10
>
> This function takes an argument of type B, which can be either a vector
> or forward-list or array of A (the array's length is N, which can be of any
> numeric type) or a set of A. A can be either int or float only.
>
> Emulating those inline constraints in D would be cumbersome.
Re: D vs nim
that comment was regarding -betterC

RAII (with structs) has been available in D for a while, eg:

```d
struct A {
    ~this() { /* ... */ }
}

void fun() {
    A a; // when a goes out of scope, will call dtor deterministically
}
```

On Tue, Mar 27, 2018 at 4:15 PM, Ali via Digitalmars-d wrote:
> On Tuesday, 27 March 2018 at 01:19:44 UTC, timotheecour wrote:
>> On Wednesday, 22 April 2015 at 06:03:07 UTC, Timothee Cour wrote:
>>> On Mon, Apr 13, 2015 at 10:28 AM, Timothee Cour wrote:
>>>
>>> I would like to refocus this thread on feature set and how it compares to
>>> D, not on flame wars about brackets or language marketing issues.
>>
>> I've created a git repo https://github.com/timotheecour/D_vs_nim/ with the
>> goal: up to date and objective comparison of features between D and nim, and
>> 1:1 map of features, tools, idioms and libraries to help D users learn nim
>> and vice versa.
>
> How is RAII available in D? I did a quick search on this forum but didn't
> exactly find what I want.
>
> I found a comment from Walter (saying it was recently added:
> https://forum.dlang.org/post/p1pa01$kc8$1...@digitalmars.com)
>
> What was the added feature that now enables RAII in D?
does it scale to have 1 person approve of all phobos additions?
https://wiki.dlang.org/Contributing_to_Phobos mentions:

> Smaller additions like individual functions can be merged directly after
> @andralex approves

The arguments for having all changes go through one person have been
presented here [1]. However, this is how I see things:

* if phobos is supposed to be batteries included
  (https://forum.dlang.org/post/mfrv29$t21$1...@digitalmars.com), it should
  be able to grow fast; looking at past changelogs, this has not been the
  case: each release comes with only a handful of additions to phobos at most.
* let's look at the D survey answers to the question 'what went wrong' while
  contributing code to dlang on github:
  https://github.com/wilzbach/state-of-d-2018/blob/master/13c:%20What%20went%20wrong%3F
  45 answers out of 69 mentioned that the review process was too slow and
  that PR's linger forever after all comments are addressed. A common theme
  is a PR lingering while waiting for approval from leadership [2].
* This creates a vicious cycle which diminishes the number of contributors,
  since PR review is so inefficient; as a result, a bug fix / useful
  addition never gets merged.
* I'm not sure there are many examples of large projects where every
  addition/symbol overload has to be approved by a single person; this would
  be more bearable if response time were fast; however, as noted in the
  survey answers, response time is often weeks/months, sometimes years.
  Pinging by email also doesn't always work; here was the response I got:

> Thanks. This will need to wait - we're busy with DConf for the time being.

If one is unavailable, one should delegate.

Given all this, my recommendation would be for PR's to be merged after all
checks have passed and they were approved by 2 committers.

---

[1] https://forum.dlang.org/post/ncp5g8$20hr$1...@digitalmars.com

> I just asked for a stdlib addition to be pulled back at
> https://github.com/D-Programming-Language/phobos/pull/4025. Such decisions
> are difficult because the risk is them to be interpreted as stifling
> creativity. That is not the case. The only reason for all library additions
> to go through one person/small group is to preserve a cohesive vision and
> style. At the opposite end, nobody wants a library that's a mishmash of
> styles and approaches, even if that includes some of theirs. Please make sure
> I know of library additions. I've been on vacation and even when I'm not I
> can't monitor and review all library work - it would be a full-time job that
> wouldn't leave me time for anything else. Please just email me directly
> related pull requests. I always tend to my email.

[2] here are some illustrative answers:

* PR's linger forever after all comments have been addressed; not enough
  committers create a bottleneck from andrei/walter (not enough
  trust/delegation); style issues are a waste of time (we should use tooling
  instead); negative bias towards 'small' improvements
* Andrei Alexandrescu had way too much influence in areas outside his core
  competency, too many bad decisions made for religious/philosphical reasons
  relying on the "sufficently powerful compiler" fallacy and appeal to
  authority
* PRs have a tendency to grow stale, especially when waiting on a response
  from Walter or Andrei.
* There is sometimes a tendency for PRs to languish - sometimes for years,
  particularly if there's any disagreement or if it requires input from
  Walter or Andrei. Obviously, that doesn't always happen, but it's not
  entirely uncommon either.
* Community is very negative. Leadership seems very unengaged. There is not
  enough delegation/trust.
* I'd say "The whole process was an uphill battle", but that's a huge
  understatement. Endless "perfect is the enemy of good". Endless pushback
  and arguing on whether things should even be done at all (especially from
  MartinNowak - nothing personal, and no offense, but that one alone doubles
  the amount of arguing that needs done to get anywhere, will object to
  seemingly anything, and frequently just leads the debate in circles). And
  long periods with no response.
* Still no response to my pull request after fixing it 2 weeks after the
  initial feedback (more than a few months of waiting now)
* It's very hard to get significant improvements to D's weakest areas
  accepted, because of concerns about breaking changes and/or excessive
  complexity of proposed solutions. As a result, broken stuff just stays
  broken.
* Mixed experience. Sometimes too much of a "don't touch our project"
  attitude. (advice: give the contributing developer some ownership of the
  project. Yes that means making compromises on your own opinions)
Re: I have a patch to let lldb demangle D symbols ; help welcome to improve it
> Even with DMD 2.079 the frame locals don't show up... LDC FTW!

confirmed, `fr v` shows frame locals for a binary built with ldc but not dmd
(independent of this PR though :)) => just submitted
https://issues.dlang.org/show_bug.cgi?id=18612

On Tue, Mar 13, 2018 at 9:29 AM, Luís Marques via Digitalmars-d wrote:
> On Tuesday, 13 March 2018 at 14:00:39 UTC, Luís Marques wrote:
>>
>> On Monday, 12 March 2018 at 23:03:33 UTC, Timothee Cour wrote:
>> Yeah, that works. I'll be trying it more thoroughly and report any issues.
>
> Even with DMD 2.079 the frame locals don't show up... LDC FTW!
Re: I have a patch to let lldb demangle D symbols ; help welcome to improve it
> (BTW, if I commented out the plugin path setting I would get an assertion
> failure. Just FYI if that helps with the code review.)

fixed; thanks for reporting!

On Tue, Mar 13, 2018 at 9:29 AM, Luís Marques via Digitalmars-d wrote:
> On Tuesday, 13 March 2018 at 14:00:39 UTC, Luís Marques wrote:
>>
>> On Monday, 12 March 2018 at 23:03:33 UTC, Timothee Cour wrote:
>> Yeah, that works. I'll be trying it more thoroughly and report any issues.
>
> Even with DMD 2.079 the frame locals don't show up... LDC FTW!
Re: dmd -unittest= (same syntax as -i)
I originally proposed it here:
https://forum.dlang.org/post/mailman.3166.1517969180.9493.digitalmar...@puremagic.com
but it was buried under another thread.

On Wed, Mar 14, 2018 at 3:04 PM, Adam D. Ruppe via Digitalmars-d wrote:
> On Wednesday, 14 March 2018 at 21:22:01 UTC, Timothee Cour wrote:
>>
>> would a PR for `dmd -unittest= (same syntax as -i)` be welcome?
>
> so when this came up on irc earlier (was that you?) this was the first
> thought that came to my mind. I'd support it, tho I'm no decision maker.
dmd -unittest= (same syntax as -i)
would a PR for `dmd -unittest=` (same syntax as `-i`) be welcome? wouldn't
that avoid all the complications with version(StdUnittest)?

eg use case:

```
# compile with unittests just for package foo (excluding subpackage foo.bar)
dmd -unittest=foo -unittest=-foo.bar -i main.d
```
Re: I have a patch to let lldb demangle D symbols ; help welcome to improve it
> The mangled names lack the initial underscore, maybe that's related?

quite likely, try to compile with ldc or with the latest dmd (2.079, which
fixed underscores on OSX) so that demangling works. I just tried with dmd
2.078 and it indeed doesn't demangle (as expected).

On Mon, Mar 12, 2018 at 7:21 AM, Luís Marques via Digitalmars-d wrote:
> On Monday, 12 March 2018 at 14:19:03 UTC, Luís Marques wrote:
>>
>> On Tuesday, 27 February 2018 at 05:28:41 UTC, Timothee Cour wrote:
>>>
>>> https://github.com/llvm-mirror/lldb/pull/3
>>> +
>>> https://github.com/timotheecour/dtools/blob/master/dtools/lldbdplugin.d
>
> (BTW, if I commented out the plugin path setting I would get an assertion
> failure. Just FYI if that helps with the code review.)
Re: I have a patch to let lldb demangle D symbols ; help welcome to improve it
updated build instructions, see
https://github.com/timotheecour/dtools/commit/8597923dd4ed7691f717b5e1bdbbf2ee66961ef5

On Fri, Mar 9, 2018 at 9:33 AM, Luís Marques via Digitalmars-d wrote:
> On Tuesday, 27 February 2018 at 05:28:41 UTC, Timothee Cour wrote:
>>
>> https://github.com/timotheecour/dtools/blob/master/dtools/lldbdplugin.d
>
> dtools seems to rely on the old import visibility behavior and doesn't
> compile with a recent D compiler. For instance:
>
> // functional.d:
> import std.algorithm:sort,uniq,walkLength;
>
> Error: module std.algorithm import 'walkLength' not found
Re: Inline Module / Namespace
I'm sure he meant:

```
--- foo.d
module foo;
module foo.bar {
    void fun(){}
}
--- foo2.d
import foo.bar;
```

On Fri, Mar 9, 2018 at 10:51 AM, Manu via Digitalmars-d wrote:
> On 9 March 2018 at 10:44, Jonathan via Digitalmars-d wrote:
>> D kinda lacks a way of creating a module/namespace inside another file.
>>
>> D does have modules but they have to be in separate files. (Though separate
>> files may be better coding practice, why is it D's job to tell me how to
>> code.)
>>
>> I think a simple way to do this with existing syntax is to add functionality
>> for `module` to be used as a block.
>>
>> module modulename {
>>     void fun(){}
>> }
>> modulename.fun();
>>
>> An inline module.
>
> If you tried to `import modulename;` from some other module... how
> would the compiler know where to find it?
Re: I have a patch to let lldb demangle D symbols ; help welcome to improve it
> I'm pretty sure Timothee based his patch onto LLDB/LLVM trunk.

indeed, see instructions here:
https://github.com/timotheecour/dtools/blob/master/dtools/lldbdplugin.d

> Seems like they prefer a shared library and not rewriting it in C++ [1].

indeed, I would not support something that requires rewriting demangle in
C++, for obvious reasons (lots of useless work, gets out of sync, etc).

> BTW, there's also GNU libiberty, part of binutils, which Iain claims has
> better support for demangling D symbols than core.demangle.

IIRC he wrote that, so we'd need an unbiased opinion :) But more importantly,
libiberty is not up to date with the latest features in core.demangle (eg
back references etc). Also, I'd like to know in what way it'd be better.

I had to make some small modifications to core.demangle to avoid
https://github.com/timotheecour/dtools/issues/2 ; it works, but it's a bit
ugly (see https://github.com/timotheecour/dtools/issues/2 for discussion).

On Tue, Mar 6, 2018 at 12:26 PM, Johan Engelen via Digitalmars-d wrote:
> On Tuesday, 6 March 2018 at 20:25:10 UTC, Johan Engelen wrote:
>>
>> On Tuesday, 6 March 2018 at 18:19:13 UTC, Luís Marques wrote:
>>>
>>> On my LLVM fork for RISC-V and MSP430 work it doesn't build (no
>>> llvm/Support/DJB.h) and on the latest stable, 5.0.1, cmake fails to
>>> configure (Unknown CMake command "add_llvm_install_targets").
>>
>> LLDB and LLVM need to be version synchronized. Did you checkout LLVM and
>> LLDB both from their respective same-named release branches?
>
> I'm pretty sure Timothee based his patch onto LLDB/LLVM trunk.
>
> -Johan
why not use git rebase instead of git merge in dlang repos?
There are lots of articles on this topic, eg:
https://blog.carbonfive.com/2017/08/28/always-squash-and-rebase-your-git-commits/
(note that squashing down to 1 commit shouldn't be necessary, but at least
rebasing should be done)

The github UI also allows rebasing (instead of merging).

This would really simplify things like visualizing / understanding history
and git bisect. It shouldn't matter at what commit a feature was started;
the only thing that should matter is when it was incorporated into master.
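As a sketch of the suggested workflow (branch and remote names here are illustrative, not dlang-specific):

```shell
# update a feature branch by replaying its commits on top of master,
# instead of recording a merge commit:
git fetch origin
git checkout feature
git rebase origin/master      # history stays linear; bisect-friendly
# optionally clean up fixup commits before the PR is merged:
# git rebase -i origin/master
```

GitHub's "Rebase and merge" button produces the same linear result from the web UI.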
Re: I have a patch to let lldb demangle D symbols ; help welcome to improve it
yes, I've fixed the issue with crashes on large symbols using a patched
`demangle`; will update the code soon. But feel free to take a look at the
lldb side of things.

On Thu, Mar 1, 2018 at 12:23 PM, Luís Marques via Digitalmars-d wrote:
> On Tuesday, 27 February 2018 at 11:23:02 UTC, timotheecour wrote:
>>
>> On Tuesday, 27 February 2018 at 05:28:41 UTC, Timothee Cour wrote:
>>>
>>> https://github.com/llvm-mirror/lldb/pull/3
>>> +
>>> https://github.com/timotheecour/dtools/blob/master/dtools/lldbdplugin.d
>>>
>>> on OSX, it works great except when encountering large symbols which
>>> cause segfault when GC does a collection (triggered inside
>>> core.demangle.demangle);
>>> Help is welcome to improve that (or more generally to improve D
>>> support in lldb, which I started in
>>> https://github.com/llvm-mirror/lldb/pull/3)
>>> NOTE: lldb doesn't accept github PR's but easier to work with PR's for
>>> whoever wants to help on that in the meantime
>>
>> Specifically, the issue I'm facing is:
>> https://github.com/timotheecour/dtools/issues/2 (a crash occurs when
>> _d_arraysetlengthiT is called)
>>
>> any help would be greatly appreciated
>
> Thanks for working on this. I'll try to look into this in the next few days.
> (If you have further progress on this please provide an update here.)
Re: can we un-deprecate .ptr on arrays in @safe code? cf issue 18529
> Hm... borrowing from Timothee's suggestion:
> This would be fine and @safe, but may not be useful for all purposes.
> However, it would fix your issue.

how about this: https://github.com/dlang/phobos/pull/6231

On Tue, Feb 27, 2018 at 12:09 PM, Steven Schveighoffer via Digitalmars-d wrote:
> On 2/27/18 3:00 PM, Steven Schveighoffer wrote:
>>
>> On 2/27/18 12:32 PM, Atila Neves wrote:
>>
>>> There's a common case where it's not equivalent - when the pointer is
>>> null. Imagine I have a C function I want to call:
>>>
>>> extern(C) void fun(int* things);
>>>
>>> Imagine also that it's ok to call with null. Well, now I can't use a
>>> slice to call this and have it be 1) @safe and 2) not throw RangeError. I
>>> ran into this the other way.
>>
>> fun(x.length ? &x[0] : null);
>
> Hm... borrowing from Timothee's suggestion:
>
> @trusted @nogc pure nothrow
> T* pointer(T)(T[] a){
>     return a.length > 0 ? a.ptr : null;
> }
>
> This would be fine and @safe, but may not be useful for all purposes.
> However, it would fix your issue.
>
> -Steve
Re: can we un-deprecate .ptr on arrays in @safe code? cf issue 18529
this would be more bearable if there was a standard @trusted method to get
an array's `.ptr`, eg in `object.d` (so that it's indeed standard):

```
@trusted @nogc pure nothrow
auto pointer(T)(T a){ return a.ptr; }
```

again, the deprecation message is misleading because `&a[0]` isn't
equivalent to `a.ptr`; having something like `pointer` (and making the
deprecation msg use that) would be a better mitigation.

On Tue, Feb 27, 2018 at 3:56 AM, Jonathan M Davis via Digitalmars-d wrote:
> On Tuesday, February 27, 2018 11:33:04 Simen Kjærås via Digitalmars-d wrote:
>> And trust me, the compiler complains about both of these.
>> Possibly rightfully in the first example, but the latter never
>> does anything scary with the given pointers.
>
> As I understand it, the way that @safety checks generally work is they check
> whether a particular operation is @safe or not. They don't usually care
> about what is then done with the result. So, if you do something like take
> the address of something, that's immediately @system regardless of what you
> do with the result. That changes on some level with DIP 1000 and scope,
> because then it uses scope to ensure that the lifetime of stuff like
> pointers doesn't exceed the lifetime of what they point to so that it can
> then know that taking the address is @safe, but without DIP 1000, it takes
> very little for something to become @system. e.g. this compiles with
> -dip1000 but otherwise doesn't:
>
> void main() @safe
> {
>     int i;
>     assert(&i !is null);
> }
>
> Now, the compiler does seem to be a bit smarter with dynamic arrays and ptr
> given that this compiles without -dip1000:
>
> void main() @safe
> {
>     int[] i;
>     assert(i.ptr !is null);
> }
>
> However, this doesn't compile with -dip1000:
>
> void main() @safe
> {
>     int[] i;
>     auto j = i.ptr;
>     assert(j !is null);
> }
>
> and not even this compiles with -dip1000:
>
> void main() @safe
> {
>     int[] i;
>     scope j = i.ptr;
>     assert(j !is null);
> }
>
> though I'm inclined to think that that's a bug from what I understand of
> -dip1000.
>
> In any case, @safety checks tend to be fairly primitive, so once you start
> mucking around with pointers, it's not hard to write code that gets treated
> as @system because of a single expression in the code that is clearly @safe
> within the context of the function, but the compiler can't see it.
>
> And for better or worse, accessing a dynamic array's ptr member is now
> @system, because it's not @safe in all circumstances. If the compiler were
> smarter, then a number of uses of ptr would probably be @safe, but its
> analysis for stuff like that is usually pretty primitive, in part because
> making it sophisticated requires stuff like code flow analysis, which the
> compiler doesn't do a lot of, precisely because it is complicated and easy
> to get wrong. Walter is particularly leery about making it so that stuff is
> an error or not based on code flow analysis, and @safe falls into that camp.
> Clearly, some of that is going on with DIP 1000, but that seems to be
> largely by using the type system to solve the problem rather than doing much
> in the way of code flow analysis.
>
> - Jonathan M Davis
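To make the distinction concrete, here is a small (hypothetical) snippet showing where `&a[0]` and `a.ptr` diverge, namely the empty-array case the deprecation message glosses over:

```d
void main() // @system, since .ptr access is the point here
{
    int[] a;                // empty slice
    assert(a.ptr is null);  // fine: just reads the pointer, no bounds check
    // auto p = &a[0];      // would throw RangeError: indexing an empty array

    int[] b = [1, 2, 3];
    assert(b.ptr == &b[0]); // the two are only equivalent when non-empty
}
```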
can we un-deprecate .ptr on arrays in @safe code? cf issue 18529
see rationale in https://issues.dlang.org/show_bug.cgi?id=18529
I have a patch to let lldb demangle D symbols ; help welcome to improve it
https://github.com/llvm-mirror/lldb/pull/3
+
https://github.com/timotheecour/dtools/blob/master/dtools/lldbdplugin.d

on OSX, it works great except when encountering large symbols, which cause a
segfault when the GC does a collection (triggered inside
core.demangle.demangle); help is welcome to improve that (or more generally
to improve D support in lldb, which I started in
https://github.com/llvm-mirror/lldb/pull/3).

NOTE: lldb doesn't accept github PR's, but it's easier to work with PR's for
whoever wants to help on that in the meantime.
how to propagate computed type during CTFE?
in the example below, how do I propagate RET (or even `typeof(a)`) to the
result value of `inferType`? does this need a language change to allow this?

```
template inference(alias emitter) {
  auto inference(){
    auto inferType(){
      emitter!((a){
        enum RET = typeof(a).stringof; // type is known here, how to propagate?
        pragma(msg, RET); // string
      })();
      return "unknown";
    }
    // how to get RET? (or even typeof(a))
    enum temp = inferType;
    pragma(msg, temp);
  }
}

void main(){
  static void fun(alias put)(){
    put("hello");
  }
  inference!fun;
}
```

use case: allow type inference in `emit`
https://github.com/timotheecour/dtools/blob/master/dtools/util/emit.d
(see forum discussion here:
https://forum.dlang.org/post/mailman.538.1458560190.26339.digitalmar...@puremagic.com)
Re: what are guidelines for when to split a module into a package?
> that doesn't help anyone who's actually reading the documentation and trying
> to find stuff that way

how about the following fix for that: a DDOC token in std/algorithm/package.d
to indicate merging the submodules in the documentation, eg:

```
/// MERGE_SUBMODULES
```

when a user browses to https://dlang.org/phobos/std_algorithm.html, he would
see the DDOC contents from all direct submodules of std/algorithm/ right
there in that ddoc page.

On Thu, Feb 22, 2018 at 12:04 AM, Timothee Cour wrote:
>> it actually does reduce compilation times if the imports go directly to the
>> module in question rather than to a module that publicly imports the symbols
>
> time1 = compilation time of `import std.algorithm : find;` before split
> time21 = compilation time of `import std.algorithm : find;` after split
> time22 = compilation time of `import std.algorithm.searching : find;` after split
>
> my understanding is that we have:
> time22 < time1 but time21 ~= time1
> so we're in no way worse than before the split
> unless time21 > time1 (noticeably) in which case you have a strong argument
>
> On Wed, Feb 21, 2018 at 11:57 PM, Jonathan M Davis via Digitalmars-d
> wrote:
>> On Wednesday, February 21, 2018 23:48:32 Timothee Cour via Digitalmars-d
>> wrote:
>>> ```
>>> import std.algorithm.searching : find;
>>>
>>> not
>>>
>>> import std.algorithm : find;
>>> ```
>>>
>>> that's just a missed opportunity to benefit from the split; we're in
>>> no way worse after the split than before the split in that regard. We
>>> can just leave it as `import std.algorithm : find;` with no adverse
>>> effect.
>>
>> Maybe, but the CI stuff for Phobos doesn't like that, and it actually does
>> reduce compilation times if the imports go directly to the module in
>> question rather than to a module that publicly imports the symbols.
>>
>> - Jonathan M Davis
Re: what are guidelines for when to split a module into a package?
> it actually does reduce compilation times if the imports go directly to the
> module in question rather than to a module that publicly imports the symbols

time1  = compilation time of `import std.algorithm : find;` before the split
time21 = compilation time of `import std.algorithm : find;` after the split
time22 = compilation time of `import std.algorithm.searching : find;` after the split

my understanding is that we have: time22 < time1, but time21 ~= time1, so
we're in no way worse than before the split, unless time21 > time1
(noticeably), in which case you have a strong argument.

On Wed, Feb 21, 2018 at 11:57 PM, Jonathan M Davis via Digitalmars-d wrote:
> On Wednesday, February 21, 2018 23:48:32 Timothee Cour via Digitalmars-d
> wrote:
>> ```
>> import std.algorithm.searching : find;
>>
>> not
>>
>> import std.algorithm : find;
>> ```
>>
>> that's just a missed opportunity to benefit from the split; we're in
>> no way worse after the split than before the split in that regard. We
>> can just leave it as `import std.algorithm : find;` with no adverse
>> effect.
>
> Maybe, but the CI stuff for Phobos doesn't like that, and it actually does
> reduce compilation times if the imports go directly to the module in
> question rather than to a module that publicly imports the symbols.
>
> - Jonathan M Davis
Re: what are guidelines for when to split a module into a package?
```
import std.algorithm.searching : find;

not

import std.algorithm : find;
```

that's just a missed opportunity to benefit from the split; we're in
no way worse after the split than before the split in that regard. We
can just leave it as `import std.algorithm : find;` with no adverse
effect.

On Wed, Feb 21, 2018 at 11:44 PM, Timothee Cour wrote:
>> it's harder to find symbols
>
> i don't understand this argument.
>
> ```
> dscanner --declaration startsWith
> ./std/algorithm/searching.d(4105:6)
> ./std/algorithm/searching.d(4195:6)
> ./std/algorithm/searching.d(4265:6)
> ./std/algorithm/searching.d(4301:6)
> ```
>
> On Wed, Feb 21, 2018 at 11:31 PM, Jonathan M Davis via Digitalmars-d
> wrote:
>> On Wednesday, February 21, 2018 23:13:33 Timothee Cour via Digitalmars-d
>> wrote:
>>> from my perspective it makes sense to split a module M into submodules
>>> A, B when:
>>> * M is large
>>> * there's little interaction between A and B (eg only few symbols from
>>> A are needed in B and vice versa)
>>> * A and B are logically grouped (that is domain specific)
>>> * it doesn't turn into an extreme (1 function per module)
>>>
>>> Advantages of splitting:
>>> * easier to review
>>> * easier to edit (no need to scroll much to see entirety of module
>>> we're editing)
>>> * less pollution from top-level imports as they only affect submodule
>>> (likewise with top-level attributes etc)
>>> * more modular
>>> * doesn't affect existing code since `import M` will continue to work
>>> after M is split into a package
>>> * less memory when using separate compilation
>>> * allows fine-grained compiler options (eg we can compile B with `-O` if
>>> needed)
>>> * allows to run unittests just for A instead of M
>>> * allows selective import in client to avoid pulling in too many
>>> dependencies (see arguments that were made for std.range.primitives)
>>>
>>> Disadvantages of splitting:
>>> * more files; but not sure why that's a problem so long we don't go
>>> into extremes (eg 1 function per module or other things of bad taste)
>>>
>>> ---
>>> while working on https://github.com/dlang/phobos/pull/6178 I had
>>> initially split M:std.array into submodules:
>>> A:std.array.util (the old std.array) and B:std.array.static_array
>>> (everything added in the PR)
>>> IMO this made sense according to my above criteria (in this case there
>>> was 0 interaction between A and B), but the reviewers disagreed with
>>> the split.
>>>
>>> So, what are the guidelines?
>>
>> It's decided on a case-by-case basis but is generally only done if the
>> module is quite large. std.array is not particularly large. It's less than
>> 4000 lines, including unit tests and documentation, and it only has 18
>> top-level symbols.
>>
>> Also, remember that within Phobos, imports are supposed to be as localized
>> as possible - both in terms of where the import is placed and in terms of
>> selective imports - e.g. it would be
>>
>> import std.algorithm.searching : find;
>>
>> not
>>
>> import std.algorithm : find;
>>
>> which means that splitting the module then requires that all of those
>> imports be even more specific. User code can choose to do that or not, but
>> it does make having modules split up further that much more tedious. Related
>> to that is the fact that anyone searching for these symbols now has more
>> modules to search through. So, finding symbols will be harder. Take
>> std.algorithm for instance. It was split, because it was getting large
>> enough that compiling it on machines without large amounts of memory
>> resulted in the compiler running out of memory. So, there was a very good
>> argument for splitting it. However, now, even if you know that a symbol is
>> in std.algorithm, do you know where in std.algorithm it is? Some are obvious
>> - e.g. sort is in std.algorithm.sorting. However, others are not so
>> obviously - e.g. where does startsWith live? Arguably, it could go in either
>> std.algorithm.comparison or std.algorithm.searching. It turns out that it's
>> in std.algorithm.searching, but I generally have to look it up. And where do
>> functions like map or filter live? std.algorithm.mutation?
>> std.algorithm.iteration? It's not necessarily obvious at all.
>>
>> From the perspective of users trying to find stuff, splitting modules up
>> comes at a real cost, and I honestly don't understand why some folks are in
>> a hurry to make modules really small. That means more import statements when
>> using those modules, and it means that it's harder to find symbols.
>>
>> Personally, I think that we should be very slow to consider splitting
>> modules and only do so when it's clear that there's a real need, and
>> std.array is nowhere near that level.
>>
>> - Jonathan M Davis
Re: what are guidelines for when to split a module into a package?
> it's harder to find symbols i don't understand this argument. ``` dscanner --declaration startsWith ./std/algorithm/searching.d(4105:6) ./std/algorithm/searching.d(4195:6) ./std/algorithm/searching.d(4265:6) ./std/algorithm/searching.d(4301:6) ``` On Wed, Feb 21, 2018 at 11:31 PM, Jonathan M Davis via Digitalmars-d wrote: > On Wednesday, February 21, 2018 23:13:33 Timothee Cour via Digitalmars-d > wrote: >> from my perspective it makes sense to split a module M into submodules >> A, B when: >> * M is large >> * there's little interaction between A and B (eg only few symbols from >> A are needed in B and vice versa) >> * A and B are logically grouped (that is domain specific) >> * it doesn't turn into an extreme (1 function per module) >> >> Advantages of splitting: >> * easier to review >> * easier to edit (no need to scroll much to see entirety of module >> we're editing) >> * less pollution from top-level imports as they only affect submodule >> (likewise with top-level attributes etc) >> * more modular >> * doesn't affect existing code since `import M` will continue to work >> after M is split into a package >> * less memory when using separate compilation >> * allows fine-grained compiler options (eg we can compile B with `-O` if >> needed) * allows to run unittests just for A instead of M >> * allows selective import in client to avoid pulling in too many >> dependencies (see arguments that were made for std.range.primitives) >> >> Disadvantages of splitting: >> * more files; but not sure why that's a problem so long we don't go >> into extremes (eg 1 function per module or other things of bad taste) >> >> --- >> while working on https://github.com/dlang/phobos/pull/6178 I had >> initially split M:std.array into submodules: >> A:std.array.util (the old std.array) and B:std.array.static_array >> (everything added in the PR) >> IMO this made sense according to my above criteria (in this case there >> was 0 interaction between A and B), but the reviewers 
disagreed with >> the split. >> >> So, what are the guidelines? > > It's decided on a case-by-case basis but is generally only done if the > module is quite large. std.array is not particularly large. It's less than > 4000 lines, including unit tests and documentation, and it only has 18 > top-level symbols. > > Also, remember that within Phobos, imports are supposed to be as localized > as possible - both in terms of where the import is placed and in terms of > selective imports - e.g. it would be > > import std.algorithm.searching : find; > > not > > import std.algorithm : find; > > which means that splitting the module then requires that all of those > imports be even more specific. User code can choose to do that or not, but > it does make having modules split up further that much more tedious. Related > to that is the fact that anyone searching for these symbols now has more > modules to search through. So, finding symbols will be harder. Take > std.algorithm for instance. It was split, because it was getting large > enough that compiling it on machines without large amounts of memory > resulted in the compiler running out of memory. So, there was a very good > argument for splitting it. However, now, even if you know that a symbol is > in std.algorithm, do you know where in std.algorithm it is? Some are obvious > - e.g. sort is in std.algorithm.sorting. However, others are not so > obviously - e.g. where does startsWith live? Arguably, it could go in either > std.algorithm.comparison or std.algorithm.searching. It turns out that it's > in std.algorithm.searching, but I generally have to look it up. And where to > functions like map or filter live? std.algorithm.mutation? > std.algorithm.iteration? It's not necessarily obvious at all. > > From the perspective of users trying to find stuff, splitting modules up > comes at a real cost, and I honestly don't understand why some folks are in > a hurry to make module really small. 
That means more import statements when > using those modules, and it means that it's harder to find symbols. > > Personally, I think that we should be very slow to consider splitting > modules and only do so when it's clear that there's a real need, and > std.array is nowhere near that level. > > - Jonathan M Davis > >
Re: D source code formatter
note that we'd need to implement https://github.com/dlang-community/dfmt/issues/159 (option to format only diff-ed lines, like git clang-format) in order to run dfmt on only the part of the source code that was modified in a PR. this is to avoid the concern that it affects git history / git blame (although these can skip blacklisted format-only commits or skip whitespace diffs). On Wed, Feb 21, 2018 at 11:00 PM, Seb via Digitalmars-d wrote: > On Thursday, 22 February 2018 at 04:35:24 UTC, psychoticRabbit wrote: >> >> I rely (heavily) on clang-format in my C code. It save me so much effort >> and has become a vital day to day tool for me. >> >> I was wondering whether D also has a 'reliable' source code formatter. >> (reliable being a key word there). >> >> Also, if it does, then why is it not included in the distribution - given >> the importance of consistent source code formatting these days. > > > See https://github.com/dlang-community/dfmt/issues/249 for why it was never > included in the release distribution. > > In short: Brian was never really interested in packaging and investing time > into releasing his tools, so it stalled. Now that dub is part of the release > distribution, it's simply: > > dub fetch dfmt > dub run dfmt > > So there's essentially no big need to ship it in the release archives. > Though Sociomantic has recently taken over the release process of dfmt and > currently provides APT packages at bintray: > > https://bintray.com/dlang-community/apt/dfmt
what are guidelines for when to split a module into a package?
from my perspective it makes sense to split a module M into submodules A, B when:
* M is large
* there's little interaction between A and B (eg only few symbols from A are needed in B and vice versa)
* A and B are logically grouped (that is domain specific)
* it doesn't turn into an extreme (1 function per module)

Advantages of splitting:
* easier to review
* easier to edit (no need to scroll much to see entirety of module we're editing)
* less pollution from top-level imports as they only affect submodule (likewise with top-level attributes etc)
* more modular
* doesn't affect existing code since `import M` will continue to work after M is split into a package
* less memory when using separate compilation
* allows fine-grained compiler options (eg we can compile B with `-O` if needed)
* allows to run unittests just for A instead of M
* allows selective import in client to avoid pulling in too many dependencies (see arguments that were made for std.range.primitives)

Disadvantages of splitting:
* more files; but not sure why that's a problem so long we don't go into extremes (eg 1 function per module or other things of bad taste)

---
while working on https://github.com/dlang/phobos/pull/6178 I had initially split M:std.array into submodules A:std.array.util (the old std.array) and B:std.array.static_array (everything added in the PR). IMO this made sense according to my above criteria (in this case there was 0 interaction between A and B), but the reviewers disagreed with the split.

So, what are the guidelines?
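The "`import M` will continue to work" point relies on a `package.d` that publicly forwards to the submodules. A minimal sketch, using the hypothetical submodule names from the std.array split above:

```d
// file: std/array/package.d -- after the split, `import std.array;` keeps
// compiling because the package module publicly re-exports the submodules
module std.array;

public import std.array.util;          // the old std.array contents
public import std.array.static_array;  // the static-array additions
```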
Re: Throwing D exceptions through C++ call stack
https://github.com/Syniurge/Calypso now allows catching C++ exceptions in D handlers (on OSX and Linux at least) On Tue, Feb 20, 2018 at 1:04 PM, H. S. Teoh via Digitalmars-d wrote: > I'm piecewise migrating one of my old C++ projects to D, and one of the > major issues right now is exception handling. > > What's the state of C++ exception support right now? Is it safe for a D > function (called from C++ code) to throw an exception, and have the > stack unwind through the C++ call stack and caught by D code at the > bottom of the stack? Or will it potentially interact badly with the C++ > part of the call stack? I'm guessing C++ dtors may not get called, > etc.? > > Conversely, is it safe for C++ code to throw an exception that unwinds > through D functions in the call stack, and caught by a C++ catch block? > > > T > > -- > Elegant or ugly code as well as fine or rude sentences have something in > common: they don't depend on the language. -- Luca De Vitis
Re: variable destructuring in D (tuples and ranges)
see https://forum.dlang.org/post/p3bdp1$2b4e$1...@digitalmars.com [Tuple DIP] On Tue, Feb 20, 2018 at 11:01 AM, valmat via Digitalmars-d wrote: > Hi there! > I just started learn D. > First it is greatful language. > But unfortunatly it doesn't have variable destructuring syntax. > Like this: > ``` > auto x,y,z = tuple(26, "hi", 'a'); > auto x,y,z = [1,2,3]; > auto x,y,z = anyRange; > ``` > Because it is convenient i wrote my own implementation of this opportunity > > Here is an example: > ``` > import std.stdio: writeln; > import std.typecons : tuple; > import vlm.utils.destructing : tie; > > void main() > { > // Traversable (arrays or any lazy ranges) > string a, b, c; > tie(a,b,c) = ["foo1","foo2","foo3","foo4","foo5","foo6"]; > // May retrive undercomplit ranges > tie(a,b,c) = ["bar1","bar2"]; > > size_t i, j, k; > float pi; > int l; > // Automatically cast types (int -> size_t) > tie(i,j,k) = [1,2]; > tie(i,pi,l) = [3.14, 3.14, 3.14]; > > // Tuples > int x; > string y; > char z; > size_t u,v,w; > > tie(x,y,z) = tuple(1, "hello", 'a', 777, 3.14); > tie(x,y,z) = tuple(15, "world"); > } > ``` > > The sourse code is here: > https://gist.github.com/valmat/763c72465d7a1737229ae1c91393d629 > > I would be glad if this would be useful for you > > And how about to consider of inclusion this possibility in the language? > > PS > i'm newbie in D so may be my code is not optimal. > I'm used to writing in C++ style and maybe in D part of my code could be > easier
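A self-contained sketch of the `tie` idea above (the real implementation is in the linked gist; this version is illustrative only and handles just arrays):

```d
import std.meta : staticMap;

alias PtrOf(T) = T*;

// holds pointers to the destination variables; assignment copies elements
// across, casting where needed (eg int -> size_t), and tolerates short inputs
struct Tie(Vars...)
{
    staticMap!(PtrOf, Vars) ptrs;

    void opAssign(E)(E[] arr)
    {
        foreach (i, p; ptrs)
            if (i < arr.length)
                *p = cast(Vars[i]) arr[i];
    }
}

auto tie(Vars...)(ref Vars vars)
{
    Tie!Vars t;
    foreach (i, ref v; vars)
        t.ptrs[i] = &v;
    return t;
}

unittest
{
    int a, b, c;
    tie(a, b, c) = [1, 2];  // a short input leaves the rest untouched
    assert(a == 1 && b == 2 && c == 0);
}
```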
Re: Tuple DIP
On Mon, Feb 19, 2018 at 4:05 PM, Timon Gehr via Digitalmars-d wrote: > On 20.02.2018 00:53, Timothee Cour wrote: >> > Sure! Also, this: > > void main(string[] args){ > enforce(args.length==5, "Invalid args"); > auto (infile, colname, repl, outfile) = args[1..5].unpack; > // ... > } how does that latter example work?
Re: Tuple DIP
great! maybe worth adding to DIP? (even though `unpack` would be (IIUC) a pure library solution on top of this DIP) and that would work too I guess? ``` string[4] args=...; auto (infile, colname, repl, outfile) = args.unpack; ``` On Mon, Feb 19, 2018 at 3:47 PM, Timon Gehr via Digitalmars-d wrote: > On 20.02.2018 00:43, Timon Gehr wrote: >> >> >> void main(){ >> auto (infile, colname, repl, outfile) = args[1..$].unpack!4("Invalid >> args\n"); >> } > > > Actually: > > void main(string[] args){ > > auto (infile, colname, repl, outfile) = args[1..$].unpack!4("Invalid > args\n"); > } >
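For reference, `unpack` is not defined by the DIP; a plausible library-side sketch in the spirit of Timon's example (turn a slice into a fixed-arity `std.typecons.Tuple`, enforcing the length at run time):

```d
import std.exception : enforce;
import std.meta : Repeat;
import std.typecons : Tuple;

// hypothetical helper: arr.unpack!n yields a Tuple of n elements,
// throwing with `msg` if the slice is too short
auto unpack(size_t n, T)(T[] arr, string msg = "length mismatch")
{
    enforce(arr.length >= n, msg);
    Tuple!(Repeat!(n, T)) result;
    static foreach (i; 0 .. n)
        result[i] = arr[i];
    return result;
}

unittest
{
    auto args = ["prog", "in", "col", "repl", "out"];
    auto t = args[1 .. $].unpack!4("Invalid args");
    assert(t[0] == "in" && t[3] == "out");
    // with the DIP's syntax this would become:
    // auto (infile, colname, repl, outfile) = args[1 .. $].unpack!4(...);
}
```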
Re: How to represent multiple files in a forum post?
On Sun, Feb 18, 2018 at 4:46 PM, Sönke Ludwig via Digitalmars-d wrote: > Am 14.02.2018 um 19:33 schrieb Jonathan Marler: >> >> @timotheecour and I came up with a solution to a common problem: >> >> How to represent multiple files in a forum post? >> > > Why not multipart/mixed? Since this is NNTP based, wouldn't that be the > natural choice? That it, assuming that forum.dlang.org is the target for > this, of course. no, it should be usable in other plain text contexts (eg email, bugzilla entry, github entry, etc)
how to get typeid of extern(C++) classes?
is there a way to get the typeid of extern(C++) classes (eg for the ones in dmd/astbase.d, but not limited to that)? C++ exposes it via typeid, so in theory all the info is there; I would need it at least for debugging (if RTTI is not enabled for all compilers, or in release mode, that's fine, so long as there's a documented way to get it for debugging).

a lot of extern(C++) classes in dmd use hacks like enum values to get their type, but that's unreliable and doesn't work for all AST classes. at least having a way to expose typeid(instance).name() would be a start.

also, that could be used to fix the bug I posted here: https://forum.dlang.org/post/mailman.3138.1517949584.9493.digitalmar...@puremagic.com "cast overly permissive with extern(C++) classes" (ie that `cast(A) b` doesn't care whether b is of dynamic type A)
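The "enum values" hack mentioned above looks roughly like this (modeled loosely on dmd's RootObject.dyncast; the names here are illustrative, not dmd's actual API):

```d
// manual poor-man's RTTI for extern(C++) class hierarchies:
// each subclass overrides kind() to report its own tag
extern(C++) class Node
{
    enum Kind { node, expr, decl }
    Kind kind() const { return Kind.node; }
}

extern(C++) class Expr : Node
{
    override Kind kind() const { return Kind.expr; }
}

// a checked downcast built on the hack, since `cast(Expr) n` performs
// no runtime check for extern(C++) classes
Expr asExpr(Node n)
{
    return n !is null && n.kind == Node.Kind.expr
        ? cast(Expr) cast(void*) n : null;
}
```

The unreliability complained about above follows directly: every class in the hierarchy must remember to override `kind()`, and classes that don't are silently misreported.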
Re: -cov LOC is inadequate for 1 liner branching; need a metric based on branching
> -cov coverage percentage is the line coverage, not the sequence point
> coverage [...]
> makes it fuzzy in proportion to how many lines contain multiple sequence
> points

Based on your comment I'm pretty sure you're still not getting my point, so apologies if I was unclear; let me try to explain better: it's not about `line coverage` vs `sequence point coverage`, as that difference is not very large (indeed, just 'fuzzy'). It's about `line coverage` vs `branch coverage` (see the exact definition in the linked article); that difference is very large in practice. here's my example, but more concretely explained:

```
void fun(int a){
  if(a>0){
    statement1();   // line 3
    statement2();   // line 4
    ...
    statement100(); // line 102
  }
  else{
    statement101(); // line 104
  }
}
unittest{
  fun(1);
}
```

* line coverage is around 99%.
* sequence point coverage is also 99% (and would be close to that even if some lines had multiple statements)
* branch coverage is 50%.

This is not an artificial example, this is the common case. What's more, code instrumentation to enable branch coverage is not more complex to implement compared to line coverage (I would even venture it's less complex and less costly). On Sun, Feb 11, 2018 at 2:32 PM, Walter Bright via Digitalmars-d wrote: > On 2/11/2018 1:55 PM, Timothee Cour wrote: >> >> I think you're missing my main point: it's explained here >> >> https://www.ncover.com/support/docs/extras/code-coverage/understanding-branch-coverage >> but the gist is that line based coverage is over-counting: >> ``` >> if(A) >>// 100 lines of code >> else >>// 1 line of code >> ``` >> gives a line coverage of ~ 99% vs a branch coverage of ~50% >> (assuming `else` branch never covered in unittests) >> >> What matters as far as bugs are concerned is that 50% of cases are >> covered. Increasing the size of the `if(A)` branch increases line >> coverage (which is irrelevant) but not branch coverage. > > > I understand that point. 
The -cov coverage percentage is the line coverage, > not the sequence point coverage. (Hence it will never be greater than 100%, > and it will never underestimate the coverage. It would be more accurately > termed an "upper bound" on the coverage.) > > And yes, that makes it fuzzy in proportion to how many lines contain > multiple sequence points. Eliminating that fuzziness does require a vast > increase in the complexity of the -cov implementation.
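To put numbers on the disagreement, here is the arithmetic for the 100-statements-vs-1 example in the previous post (plain Python, not tied to any coverage tool):

```python
# toy numbers: the taken branch holds 100 one-statement lines,
# the untaken else-branch holds 1 line
covered_lines, total_lines = 100, 101
covered_branches, total_branches = 1, 2

line_coverage = covered_lines / total_lines          # ~0.99
branch_coverage = covered_branches / total_branches  # 0.5

print(f"line coverage:   {line_coverage:.0%}")    # line coverage:   99%
print(f"branch coverage: {branch_coverage:.0%}")  # branch coverage: 50%
```

Growing the taken branch pushes line coverage toward 100% while branch coverage stays pinned at 50%, which is exactly the over-counting being argued about.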
Re: -cov LOC is inadequate for 1 liner branching; need a metric based on branching
I think you're missing my main point: it's explained here https://www.ncover.com/support/docs/extras/code-coverage/understanding-branch-coverage but the gist is that line based coverage is over-counting:

```
if(A)
   // 100 lines of code
else
   // 1 line of code
```

gives a line coverage of ~99% vs a branch coverage of ~50% (assuming the `else` branch is never covered in unittests). What matters as far as bugs are concerned is that 50% of cases are covered. Increasing the size of the `if(A)` branch increases line coverage (which is irrelevant) but not branch coverage.

On Sun, Feb 11, 2018 at 1:32 PM, Walter Bright via Digitalmars-d wrote: > On 2/5/2018 11:32 AM, Timothee Cour wrote: >> >> just filed https://issues.dlang.org/show_bug.cgi?id=18377: >> >> `dmd -cov -run main.d` shows 100% coverage; this is misleading since a >> branch is not taken: >> >> ``` >> void main(){ >>int a; >>if(false) a+=10; >> } >> ``` > > > Consider how -cov works today: > > 2| x = 3; y = 4; > 1| ++x; > > The first line has a count of 2, because there are two statements and each > contributes one increment to the line. > > 1| x = 3; // reference count > 2| if (true && false) c; > 3| if (true && true) c; > 1| if (false && b) c; > > The sequence points are each counted separately. So, by comparing with the > 'reference count', you can see which sequence points are executed. Also, if > one finds that unattractive, the code can be organized like: > > 1| if (true && > 1| false) > 0| c; > > and the separate counts will be revealed instead of aggregated. > > I agree that this is not ideal, however: > > 1. it works > 2. it is simple and robust > 3. the display to the user is simple > 4. it's easy to aggregate multiple runs together with simple text processing > code > 5. 
one can 'fix' it with a stylistic change in the formatting of the source > code > > Any further improvement would be a large increase in complexity of the > implementation, and I don't know of reasonable way to present this to the > user in a textual format. > > Is it worth it? I don't think so. Like builtin unittests, the big win with > -cov is it is *trivial* to use, which encourages its adoption. It's a 99% > solution, with 99% of the benefits, with 1% of the implementation effort. We > should be expending effort elsewhere than putting an additional 99% effort > to squeeze out that last 1% of benefit.
Re: proposal: heredoc comments to allow `+/` in comments, eg from urls or documented unittests
all these workarounds are rather ugly; the proposed syntax works all the time (the user can just pick an EOC token not in the comment) and is analogous to existing D heredoc strings, so nothing surprising there. Would PRs be accepted? On Sat, Feb 10, 2018 at 5:01 PM, Jonathan M Davis via Digitalmars-d wrote: > On Saturday, February 10, 2018 15:03:08 Walter Bright via Digitalmars-d > wrote: >> On 2/8/2018 7:06 PM, Timothee Cour wrote: >> > /"EOC >> > This is a multi-line >> > heredoc comment allowing >> > /+ documented unittests containing nesting comments +/ >> > and weird urls like https://gcc.gnu.org/onlinedocs/libstdc++/faq.html >> > EOC"/ >> >> There isn't any commenting scheme that won't trip you up with certain >> characters in the comments. I don't see a compelling case for adding >> another kind of comment. >> >> Vladimir's suggestion to use %2B instead of + seems to resolve this >> adequately. > > You could also always just declare a DDOC macro. > > Just put > > Macros: > PLUS=+ > > in the ddoc comment and then use $(PLUS) instead of +. It's more verbose > that way given that PLUS isn't one of the standard ddoc macros, but it's > more idiomatic to look at. > > - Jonathan M Davis >
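For reference, Jonathan's macro trick written out in full (`PLUS` is a user-defined ddoc macro, not a standard one):

```d
/++
See https://gcc.gnu.org/onlinedocs/libstdc+$(PLUS)/faq.html
(writing the URL's second `+` as $(PLUS) keeps the literal sequence `+/`
out of the comment, so it cannot terminate the comment prematurely)

Macros:
    PLUS=+
+/
void documented() {}
```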
Re: proposal: heredoc comments to allow `+/` in comments, eg from urls or documented unittests
version(none) { FOO } doesn't work if FOO doesn't parse. Again, what I proposed is the only 100% reliable way to comment out something On Fri, Feb 9, 2018 at 12:52 AM, Kagamin via Digitalmars-d wrote: > On Friday, 9 February 2018 at 08:44:31 UTC, Nick Sabalausky (Abscissa) > wrote: >> >> On 02/09/2018 03:42 AM, Kagamin wrote: >>> >>> >>> Nested comments are superficial though, >> >> >> Not if you've ever commented out a block of code. > > > Comment this: > string sedArg="s/ +/ /"; > > Comments don't respect even lexical structure of commented code that you > expect, version(none) does.
Re: proposal: heredoc comments to allow `+/` in comments, eg from urls or documented unittests
summary:
* `/* */` should never be used
* properly nested `/+ +/` indeed don't cause issues
* urls cause issues, and I've run into this issue multiple times
* unrestricted code (eg foreign code or unfinished D code commented out) also causes issues
* heredoc comments fix these issues when needed

On Fri, Feb 9, 2018 at 12:49 AM, Nick Sabalausky (Abscissa) via Digitalmars-d wrote: > On 02/09/2018 03:37 AM, Kagamin wrote: >> >> /** >> This is a multi-line >> heredoc comment allowing >> // documented unittests containing nesting comments >> and weird urls like https://gcc.gnu.org/onlinedocs/libstdc++/faq.html >> */ > > > > /** > This is a multi-line comment. > Be sure to check the various files at extras/foo*/package.d > And also: https://gcc.gnu.org/onlinedocs/libstdc++/faq.html > */ > > > Kaboom. Thank you, good night.
Re: option -ignore_pure for temporary debugging (or how to wrap an unpure function inside a pure one)?
just filed https://issues.dlang.org/show_bug.cgi?id=18407 Issue 18407 - debug should escape nothrow, @nogc, @safe (not just pure) On Thu, Feb 8, 2018 at 5:38 AM, Steven Schveighoffer via Digitalmars-d wrote: > On 2/8/18 8:32 AM, Steven Schveighoffer wrote: >> >> On 2/7/18 10:32 PM, Timothee Cour wrote: >>> >>> same question with how to wrap a gc function inside a nogc shell, if >>> not, allowing a flag -ignore_nogc that'd enable this (again, for >>> debugging purposes) >> >> >> If you wrap the call in a debug block, it will work. >> >> int foo() pure >> { >> debug writeln("yep, this works"); >> } > > > Gah, I see this was answered 2 other times, but for some reason, your > replies turn out as new threads. > > Sorry for the extra noise. > > -Steve
Re: proposal: heredoc comments to allow `+/` in comments, eg from urls or documented unittests
NOTE: the analog of documenting comments (`/++ ... +/` and `/** ... */`) could be:

```
/""EOC
multiline comment
EOC"/
```

(ie allow both `/""` and `/"` before reading in the heredoc token)

On Thu, Feb 8, 2018 at 7:06 PM, Timothee Cour wrote: > same exact idea as motivation for delimited strings > (https://dlang.org/spec/lex.html#delimited_strings) > > ``` > > auto heredoc = q"EOS > This is a multi-line > heredoc string > EOS" > ; > > /"EOC > This is a multi-line > heredoc comment allowing > /+ documented unittests containing nesting comments +/ > and weird urls like https://gcc.gnu.org/onlinedocs/libstdc++/faq.html > EOC"/ > > ```
proposal: heredoc comments to allow `+/` in comments, eg from urls or documented unittests
same exact idea as the motivation for delimited strings (https://dlang.org/spec/lex.html#delimited_strings)

```
auto heredoc = q"EOS
This is a multi-line
heredoc string
EOS";

/"EOC
This is a multi-line
heredoc comment allowing
/+ documented unittests containing nesting comments +/
and weird urls like https://gcc.gnu.org/onlinedocs/libstdc++/faq.html
EOC"/
```
Re: option -ignore_pure for temporary debugging (or how to wrap an unpure function inside a pure one)?
same question with how to wrap a gc function inside a nogc shell, if not, allowing a flag -ignore_nogc that'd enable this (again, for debugging purposes) On Wed, Feb 7, 2018 at 7:29 PM, Timothee Cour wrote: > while hacking into druntime and adding temporary debug information (eg > with custom logging etc) I had a hard time making things compile > because lots of functions are pure nothrow safe, resulting in compile > errors when my custom debugging functions are not pure nothrow safe. > > > How about adding flags ` -ignore_pure` (and perhaps -ignore_safe > -ignore_nothrow) to allow code to compile ignoring safe, pure, nothrow > mismatches? > > This would be meant for temporary debugging obviously, production code > would not enable these flags. > > my workaround for nothrow and safe attributes is to call via wrapNothrow!fun: > > @trusted > nothrow auto wrapNothrow(alias fun, T...)(T a){ > import std.exception; > try{ > return fun(a); > } > catch(Exception t){ > assert(0, t.msg); > } > } > > What would be a workaround to wrap a non-pure function?
option -ignore_pure for temporary debugging (or how to wrap an unpure function inside a pure one)?
while hacking into druntime and adding temporary debug information (eg with custom logging etc) I had a hard time making things compile, because lots of functions are pure nothrow safe, resulting in compile errors when my custom debugging functions are not pure nothrow safe.

How about adding a flag `-ignore_pure` (and perhaps -ignore_safe, -ignore_nothrow) to allow code to compile ignoring safe, pure, nothrow mismatches?

This would be meant for temporary debugging obviously; production code would not enable these flags.

my workaround for the nothrow and safe attributes is to call via wrapNothrow!fun:

```
@trusted nothrow
auto wrapNothrow(alias fun, T...)(T a){
    import std.exception;
    try{
        return fun(a);
    }
    catch(Exception t){
        assert(0, t.msg);
    }
}
```

What would be a workaround to wrap a non-pure function?
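For the `pure` case there is a known idiom, shown in the documentation for `std.traits.SetFunctionAttributes`: cast the function pointer's type so it gains the attribute. A sketch, for debugging only, since lying about purity is undefined behavior if the optimizer relies on it:

```d
import std.traits : FunctionAttribute, SetFunctionAttributes,
    functionAttributes, functionLinkage, isDelegate, isFunctionPointer;

// counterpart of wrapNothrow above: pretend a function pointer is pure
auto assumePure(T)(T t)
if (isFunctionPointer!T || isDelegate!T)
{
    enum attrs = functionAttributes!T | FunctionAttribute.pure_;
    return cast(SetFunctionAttributes!(T, functionLinkage!T, attrs)) t;
}

void log(string msg) { import std.stdio : writeln; writeln(msg); } // impure

void somePureCode() pure
{
    auto plog = assumePure(&log);
    plog("temporary trace"); // now compiles inside pure code
}
```

(Note that a plain `debug { ... }` block also escapes purity checks, which is what issue 18407 above asks to extend to nothrow/@nogc/@safe.)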
Re: Status status = __traits(compilesReportError, {string b=10; }) => status.msg=Error: cannot....
understood, but it's the tester's responsibility to make sure the tests aren't too flaky (eg by using msg.canFind("FOO"), or, if regex weren't slow, a regex). On Wed, Feb 7, 2018 at 1:13 PM, Nicholas Wilson via Digitalmars-d wrote: > On Wednesday, 7 February 2018 at 20:29:44 UTC, Timothee Cour wrote: >> >> is there any way to get error from speculative execution (`__traits( >> compiles, ...)`)? would be useful in tests; If not currently how hard >> would that be to implement? eg: >> >> ``` >> >> struct Status{bool ok; string msg;} >> >> Status status = __traits(compilesReportError, {string b=10;}) >> assert(!status.ok); >> assert(status.msg==`main.d(15) Error: cannot implicitly convert >> expression 10 of type int to string`); >> ``` > > > Probably not very hard. Would make for some nice diagnostics,but very flakey > tests. Compiler errors are frequently changed.
Status status = __traits(compilesReportError, {string b=10;}) => status.msg=Error: cannot....
is there any way to get the error from speculative compilation (`__traits(compiles, ...)`)? it would be useful in tests; if not currently possible, how hard would that be to implement? eg:

```
struct Status{bool ok; string msg;}

Status status = __traits(compilesReportError, {string b=10;});
assert(!status.ok);
assert(status.msg==`main.d(15) Error: cannot implicitly convert expression 10 of type int to string`);
```
dmd -unittest=+foo.bar,+std,-std.stdio args... to specify unittests in select pkg/mod
how about using the same syntax (and reusing the logic) as the newly introduced `-i=+foo.bar,+baz,-baz.bad`:

`dmd -unittest=+foo.bar,+baz,-baz.bad rest_of_arguments`

which would only enable unittests as specified? It's flexible and intuitive, and would solve a common woe with unittests (eg https://forum.dlang.org/post/mailman.3165.1517968619.9493.digitalmar...@puremagic.com) originally proposed here: https://forum.dlang.org/post/mailman.3166.1517969180.9493.digitalmar...@puremagic.com
Re: Bye bye, fast compilation times
how about using the same syntax (and reusing the logic) as the newly introduced `-i=+foo.bar,+baz,-baz.bad`: `dmd -unittest=+foo.bar,+baz,-baz.bad rest_of_arguments`, which would only enable unittests as specified? It's flexible and intuitive. On Tue, Feb 6, 2018 at 5:56 PM, Jonathan M Davis via Digitalmars-d wrote: > On Wednesday, February 07, 2018 01:47:19 jmh530 via Digitalmars-d wrote: >> On Wednesday, 7 February 2018 at 01:20:04 UTC, H. S. Teoh wrote: >> > So I'd like to propose that we do something similar to what we >> > did with template instantiations a couple of years ago: make it >> > so that unittests are only instantiated if the module they >> > occur in is being compiled, otherwise ignore them (even in the >> > face of -unittest). This way, adding unittests to Phobos won't >> > cause unintentional slowdowns / unittest bloat across *all* D >> > projects that import the affected Phobos modules. (Seen from >> > this angle, it's a pretty hefty cost.) >> >> Would it help to take the approach of mir, i.e. putting >> version(mir_test) before all the unittests? > > It would, but if we have to do that in front of all unittest blocks in a > library, I think that that's a strong sign that the current design has a > serious problem that should be fixed if possible. > > - Jonathan M Davis >
Re: cast overly permissive with extern(C++) classes; should cast through `void*`
ugh... dynamicCast is private... On Tue, Feb 6, 2018 at 5:13 PM, Seb via Digitalmars-d wrote: > On Tuesday, 6 February 2018 at 23:32:49 UTC, Timothee Cour wrote: >> >> but yes, API would look similar: >> >> auto Cast(T)(T a) if(...) {} >> auto Cast(string dsl)(T a) {} > > > Ah I see. > I think this is already partially done in Phobos (which shows the > usefulness): > > https://github.com/dlang/phobos/blob/master/std/experimental/typecons.d#L37
Re: cast overly permissive with extern(C++) classes; should cast through `void*`
but yes, the API would look similar:

```
auto Cast(T)(T a) if(...) {}
auto Cast(string dsl, T)(T a) {}
```

On Tue, Feb 6, 2018 at 3:31 PM, Timothee Cour wrote: > assumeUnique has a very specific use case; i was describing a more > general way to handle various types of cast (and be explicit about > whats allowed) as mentioned above > > On Tue, Feb 6, 2018 at 3:18 PM, Seb via Digitalmars-d > wrote: >> On Tuesday, 6 February 2018 at 23:15:07 UTC, Timothee Cour wrote: >>>> >>>> Like assumeUnique? >>> >>> https://dlang.org/library/std/exception/assume_unique.html >>> >>> what do you mean? u mean adding "unique" to the DSL list ? maybe! >> >> >> >> You asked: Actually how about introducing a library solution for explicit >> casting. >> That's why I thought about the existing assumeUnique: >> >> >> ``` >> immutable(T)[] assumeUnique(T)(ref T[] array) pure nothrow >> { >> auto result = cast(immutable(T)[]) array; >> array = null; >> return result; >> } >> ``` >> >> https://github.com/dlang/phobos/blob/v2.078.1/std/exception.d#L905
Re: cast overly permissive with extern(C++) classes; should cast through `void*`
assumeUnique has a very specific use case; i was describing a more general way to handle various types of cast (and be explicit about whats allowed) as mentioned above On Tue, Feb 6, 2018 at 3:18 PM, Seb via Digitalmars-d wrote: > On Tuesday, 6 February 2018 at 23:15:07 UTC, Timothee Cour wrote: >>> >>> Like assumeUnique? >> >> https://dlang.org/library/std/exception/assume_unique.html >> >> what do you mean? u mean adding "unique" to the DSL list ? maybe! > > > > You asked: Actually how about introducing a library solution for explicit > casting. > That's why I thought about the existing assumeUnique: > > > ``` > immutable(T)[] assumeUnique(T)(ref T[] array) pure nothrow > { > auto result = cast(immutable(T)[]) array; > array = null; > return result; > } > ``` > > https://github.com/dlang/phobos/blob/v2.078.1/std/exception.d#L905
Re: cast overly permissive with extern(C++) classes; should cast through `void*`
> Like assumeUnique?
> https://dlang.org/library/std/exception/assume_unique.html

what do you mean? you mean adding "unique" to the DSL list? maybe!

* for casts on extern(C++) classes, why not use the same logic as what C++ uses for dynamic_cast? (simplified, because we don't support multiple inheritance)

On Tue, Feb 6, 2018 at 2:54 PM, Seb via Digitalmars-d wrote: > On Tuesday, 6 February 2018 at 21:34:07 UTC, timotheecour wrote: >> >> On Tuesday, 6 February 2018 at 21:13:50 UTC, timotheecour wrote: >>> >>> [...] >> >> >> >> Actually how about introducing a library solution for explicit casting: >> * works with UFCS chains (unlike cast) >> * DSL is very intuitive, readable and extensible >> >> [...] > > > Like assumeUnique? > > https://dlang.org/library/std/exception/assume_unique.html
Re: Casts
well, even old threads are worth updating when there's new information; they show up in search results, so it's good to keep answers up to date (and provide links to newer info). On Tue, Feb 6, 2018 at 1:44 PM, jmh530 via Digitalmars-d wrote: > On Tuesday, 6 February 2018 at 21:41:14 UTC, Jonathan Marler wrote: >> >> >> Lol, I did the same thing once...I think I may have clicked a link on an >> old thread, and when I went back to the "page view" it must have gone back >> to the "page view" where the original thread was listed. I saw an >> interesting thread an responded and then someone let me know it was 7 years >> old...I had no idea!!! > > > Link didn't work for me either and didn't realize it until I checked the > date. Was able to use the internet wayback machine to get a snap from like > 2008.
Re: Bye bye, fast compilation times
another weird gotcha:

```
auto s = "foo".isEmail;
writeln(s.toString); // ok
writeln(s); // compile error
```

On Tue, Feb 6, 2018 at 12:30 PM, Steven Schveighoffer via Digitalmars-d wrote: > On 2/6/18 3:11 PM, Walter Bright wrote: >> >> On 2/5/2018 9:35 PM, Dmitry Olshansky wrote: >>> >>> That’s really bad idea - isEmail is template so the burden of freaking >>> slow ctRegex >>> is paid on per instantiation basis. Could be horrible with separate >>> compilation. >> >> >> std.string.isEmail() in D1 was a simple function. Maybe regex is just the >> wrong solution for this problem. >> > > The regex in question I think is to ensure an email address like > abc@192.168.0.5 has a valid IP address. The D1 function doesn't support that > requirement. > > I admit, I've never used it, so I don't know why it needs to be so complex. > But I assume some people depend on that functionality. > > -Steve
cast overly permissive with extern(C++) classes; should cast through `void*`
should we force casting through `void*` for extern(C++) classes? i.e. `cast(Derived) cast(void*) base_instance;` Currently, `cast(A) unrelated_cpp_instance` happily compiles but shouldn't; it's very error prone even though the cast syntax suggests type safety, and it's especially a problem if we make a class become extern(C++) in the future. [came up after discussing with jacob-carlborg]
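A minimal sketch of the issue and the proposed two-step cast. `A` and `B` are hypothetical, unrelated extern(C++) classes; the single-cast line compiling is the permissive behavior described above:

```d
// A and B are hypothetical, unrelated extern(C++) classes
extern(C++) class A { int x; }
extern(C++) class B { int y; }

void main()
{
    A a = new A;
    // compiles today even though A and B are unrelated: extern(C++)
    // classes carry no TypeInfo, so no runtime check is possible
    B b = cast(B) a;
    // the proposal: require routing the reinterpret through void*,
    // so the unsafe intent is explicit at the call site
    B b2 = cast(B) cast(void*) a;
}
```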
Re: Annoyance with new integer promotion deprecations
I had filed that last week: https://issues.dlang.org/show_bug.cgi?id=18346 Issue 18346 - implicit conversion from int to char in `"foo" ~ 255` should be illegal; we should deprecate it. On Mon, Feb 5, 2018 at 7:14 PM, Nick Sabalausky (Abscissa) via Digitalmars-d wrote: > On 02/05/2018 09:30 PM, Walter Bright wrote: >> >> On 2/5/2018 3:18 PM, Timon Gehr wrote: >>> >>> The overloading rules are fine, but byte should not implicitly convert to >>> char/dchar, and char should not implicitly convert to byte. >> >> >> Maybe not, but casting back and forth between them is ugly. > > > It *should* be ugly, it's conflating numerics with partial-characters. > > Which, depending on the situation, you should either A. not be doing at all, > or B. Be really freaking explicit about the fact that "yes, I know I'm > mixing numerics with partial-characters here and it's for this very good > reason XYZ." This isn't the age of ASCII. I can see how it could've been a > pain in ASCII-land, but D doesn't live there.
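The conversion complained about in issue 18346 can be reproduced in a few lines (behavior as described at the time of the report; newer compilers may deprecate or reject it):

```d
void main()
{
    // 255 fits in a char, so the int literal is silently converted
    // and appended as the single code unit 0xFF
    string s = "foo" ~ 255;
    assert(s.length == 4);
    // "foo" ~ 256 would be rejected, since 256 cannot fit in a char
}
```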
-cov LOC is inadequate for 1 liner branching; need a metric based on branching
just filed https://issues.dlang.org/show_bug.cgi?id=18377: `dmd -cov -run main.d` shows 100% coverage; this is misleading since a branch is not taken:

```
void main(){
    int a;
    if(false) a+=10;
}
```

how about adding a `-covmode=[loc|branch]` that would allow reporting either LOC coverage or branch coverage? branch coverage would report the number of branches taken at least once / total number of branches. It would not only address the above issue, but it is IMO a much better metric for coverage, less sensitive to 'overcounting' of large blocks in main code branches (the size of a code block in a branch is irrelevant as far as testing is concerned); eg:

```
int fun(int x){
    if(x<0) return fun2(); // accounts for 1 LOC and 1 branch
    // long block of non-branching code here...
    // accounts for 10 LOC and 1 branch
}
```

NOTE: branches would include anything that allows more than 1 code path (eg: switch, if)
Re: The daily D riddle
why is `a.init` even legal (instead of `typeof(a).init`)? Likewise the following compiles, but IMO should not: `class A{ void fun(this a){} }` (instead we should use `typeof(this)`). How about deprecating these lax syntaxes? They serve no purpose (we should use `typeof(...)`) and can cause harm in generic code. On Sat, Jan 27, 2018 at 10:39 PM, Ali Çehreli via Digitalmars-d wrote: > On 01/27/2018 10:25 PM, Shachar Shemesh wrote: >> >> What will the following code print? Do not use the compiler: >> >> import std.stdio; >> >> struct A { >> int a = 1; >> >> void initialize() { >> a = a.init; >> } >> } >> >> void main() { >> A a; >> a.initialize(); >> >> writeln(a.a); >> } >> >> I find this behavior unexpected. > > > I used the compiler to check my guess and I was wrong. The following makes > the difference: > > a = A.init.a; > > So we currently have: > > a.init (type's init value) > A.init.a (members' init value) > > If it were designed as you want, we would have the following: > > typeof(a).init (type's init value) > a.init (members init value) > > Well, too late I guess. :) > > Ali
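The distinction discussed above can be demonstrated standalone (a minimal sketch):

```d
struct A
{
    int a = 1;
}

void main()
{
    A x;
    x.a = 42;
    assert(x.a.init == 0);         // a.init means typeof(a).init, i.e. int.init
    assert(A.init.a == 1);         // the member's declared initializer
    assert(typeof(x).init.a == 1); // the unambiguous spelling
}
```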
Re: String Switch Lowering
* This has nothing to do with name mangling. Yes and no, these things are coupled. We can improve the situation by forcing the size of mangled and demangled symbols to be < threshold; eg `ldc -hash-threshold` would be one option. example: current mangled: _D8analysis3run__T9shouldRunVAyaa20_666c6f61745f6f70657261746f725f636865636bZQChFQCaKxSQDh6config20StaticAnalysisConfigZ__T9__lambda4TQEbZQpFNaNbNiNfQEqZQEu current demangled: pure nothrow @nogc @safe immutable(char)[] analysis.run.shouldRun!("float_operator_check").shouldRun(immutable(char)[], ref const(analysis.config.StaticAnalysisConfig)).__lambda4!(immutable(char)[]).__lambda4(immutable(char)[]) with a small threshold: mangled: _D8analysis3run__T9shouldRunℂ0abf2284dd3 demangled: pure nothrow @nogc @safe immutable(char)[] analysis.run.shouldRun.ℂ0abf2284dd3 The `ℂ` symbol indicates hashing was applied because the symbol size exceeded the threshold. The demangled version also would have that. A separate file (dmd -mangle_map=file) could be produced in the rare case a user wants to see the full 17KB mangled and demangled symbols mapped by ℂ0abf2284dd3 On Sat, Jan 27, 2018 at 3:12 PM, H. S. Teoh via Digitalmars-d wrote: > On Sat, Jan 27, 2018 at 09:22:07PM +, timotheecour via Digitalmars-d > wrote: > [...] 
>> ``` >> 28 dscanner0x00010d59f428 @safe void >> std.getopt.getoptImpl!(std.getopt.config, immutable(char)[], bool*, >> immutable(char)[], bool*, immutable(char)[], bool*, immutable(char)[], >> bool*, immutable(char)[], bool*, immutable(char)[], bool*, >> immutable(char)[], bool*, immutable(char)[], bool*, immutable(char)[], >> bool*, immutable(char)[], bool*, immutable(char)[], bool*, >> immutable(char)[], bool*, immutable(char)[], bool*, immutable(char)[], >> bool*, immutable(char)[], bool*, immutable(char)[], immutable(char)[]*, >> immutable(char)[], immutable(char)[]*, immutable(char)[], bool*, >> immutable(char)[], immutable(char)[][]*, immutable(char)[], bool*, >> immutable(char)[], bool*, immutable(char)[], bool*, immutable(char)[], >> bool*).getoptImpl(ref immutable(char)[][], ref std.getopt.configuration, ref >> std.getopt.GetoptResult, ref std.getopt.GetOptException, >> void[][immutable(char)[]], void[][immutable(char)[]], std.getopt.config, >> immutable(char)[], bool*, immutable(char)[], bool*, immutable(char)[], >> bool*, immutable(char)[], bool*, immutable(char)[], bool*, >> immutable(char)[], bool*, immutable(char)[], bool*, immutable(char)[], >> bool*, immutable(char)[], bool*, immutable(char)[], bool*, >> immutable(char)[], bool*, immutable(char)[], bool*, immutable(char)[], >> bool*, immutable(char)[], bool*, immutable(char)[], bool*, >> immutable(char)[], immutable(char)[]*, immutable(char)[], >> immutable(char)[]*, immutable(char)[], bool*, immutable(char)[], >> immutable(char)[][]*, immutable(char)[], bool*, immutable(char)[], bool*, >> immutable(char)[], bool*, immutable(char)[], bool*) + 460 >> ``` >> >> https://dlang.org/blog/2017/12/20/ds-newfangled-name-mangling/ doesn't >> seem to help in cases like that > > This has nothing to do with name mangling. The mangling itself may be > relatively small (and probably is, judging from the amount of repetition > in the signature above), but what you're looking at is the *demangled* > identifier. 
That's going to be big no matter what, unless we > fundamentally change the way getopt() is implemented. > > I proposed a compile-time introspected getopt() replacement before, only > to get laughed at by Andrei. So I guess that means, don't expect to see > that in Phobos anytime soon. But I might post the code on github > sometime for those who would benefit from it. Basically, instead of > taking a ridiculously long argument list, you create a struct whose > members (together with some UDAs) define what the options are, any > associated help text, etc., and just call it with the struct type as > argument. It does its thing, and returns the struct populated with the > values retrieved from the command-line. There are a few more features, > but that's the gist of it. > > > T > > -- > There is no gravity. The earth sucks.
Re: String Switch Lowering
but this should be handled at the compiler level, with no change in the standard library getopt, eg using a hashing scheme (cf `ldc -hash-threshold`) On Sat, Jan 27, 2018 at 2:38 PM, Kagamin via Digitalmars-d wrote: > IIRC several years ago somebody created a dub package with DbI getopt. I > think it wouldn't suffer from this issue.
functions allowed to overload on const int vs int vs immutable int? + spec is not accurate
this compiles, but the equivalent in C++ (const int vs int) would give a compile error (error: redefinition of 'fun'); what's the rationale for allowing these overloads?

```
import std.stdio;
void fun(int src){ writeln("int"); }
void fun(immutable int src){ writeln("immutable int"); }
void fun(const int src){ writeln("const int"); }
void main(){
    {int src=0; fun(src);} // matches fun(int)
    {immutable int src=0; fun(src);} // matches fun(immutable int)
    {const int src=0; fun(src);} // matches fun(const int)
}
```

The spec does mention `match with conversion to const` taking precedence over `exact match`; however this isn't precise: the following would be more precise instead (at least according to the snippet above): `match with conversion to/from const/immutable/mutable` https://dlang.org/spec/function.html#function-overloading Functions are overloaded based on how well the arguments to a function can match up with the parameters. The function with the best match is selected. The levels of matching are:
- no match
- match with implicit conversions
- match with conversion to const
- exact match
Re: gRPC in D good idea for GSOC 2018?
for grpc, we should add to dproto (which is pretty good protobuf library for D but lacks grpc) instead of starting from scratch, see https://github.com/msoucy/dproto/issues/113 [your advice/opinions on integrating with grpc?] On Mon, Jan 22, 2018 at 12:24 PM, Adrian Matoga via Digitalmars-d wrote: > On Monday, 15 January 2018 at 19:28:08 UTC, Ali Çehreli wrote: >> >> I know a project where D could benefit from gRPC in D, which is not among >> the supported languages: >> >> https://grpc.io/docs/ >> >> Do you think gRPC support is worth adding to GSOC 2018 ideas? >> >> https://wiki.dlang.org/GSOC_2018_Ideas >> >> Ali > > > I can share a fresh experience from mentoring a student in a similar (also > RPC) project last summer. We built native D-Bus bindings in D based on > libasync. The student had had no previous experience with D or RPC, and > within ~2.5 months of focused work she implemented the following: > > 1. (de)serialization of all D-Bus data types, including the use of > compile-time reflection to recursively marshall structs, arrays, and > variants. Except Variant, for which we decided to make our own > D-Bus-specific tagged union type, all other D-Bus types are mapped to > built-in D types. > 2. A class to connect to the bus via libasync sockets, read the incoming > messages and dispatch them to the registered handlers, and send messages to > the bus. > 3. Proxy (client) and server class templates that generate all the code > necessary to make the remote calls look almost like local ones (the return > value/out arguments are passed to a delegate that handles the output instead > of being returned synchronously). > > So, more or less an equivalent of vibe.d's REST interface generator, only > with fewer customization points. > > There were still some opportunities for refactorings and optimizations, so I > wouldn't consider it production ready. 
Also, some planned features weren't > implemented, such as a more convenient way for handling signals or allowing > transports other than unix sockets on libasync. On the other hand, what we > have is almost 100% covered with unit tests. This not only made adding > successive layers quite pleasant, as little (if any) debugging of previously > written stuff was ever necessary, but also helps to keep the stuff working > as we modify it. > > Based on my experience, I'd say that implementing gRPC may be of a right > size for a GSoC project, as long as you split it into smaller > components/layers, prioritize them, and focus on having at least the basic > stuff usable and tested, instead of expecting it to cover all usage cases > and be heavily optimized. >
reduce mangled name sizes via link-time symbol renaming
could a solution like the one proposed below be adapted to automatically reduce the size of long symbol names? It allows final object files to be smaller; eg see the problems this causes: * String Switch Lowering: http://forum.dlang.org/thread/p4d777$1vij$1...@digitalmars.com (caution: NSFW! contains a huge mangled symbol name!) * http://lists.llvm.org/pipermail/lldb-dev/2018-January/013180.html ("[lldb-dev] Huge mangled names are causing long delays when loading symbol table symbols")

```
# main.d:
#   void foo_test1(){ }
#   void main(){ foo_test1(); }

dmd -lib main.d -of=libmain.a
ld -r libmain.a -o libmain2.a -alias _D4main9foo_test1FZv _foobar -unexported_symbol _D4main9foo_test1FZv
# or: via `-alias_list filename`
# NOTE: dummy.d is only needed because dmd needs at least one object file or source file; a static library alone is somehow not enough (dmd bug?)
dmd -of=main2 libmain2.a dummy.d
nm main2 | grep _foobar # ok
./main2 # ok
```

NOTE: to automate this process, it could find all symbol names > threshold and apply a mapping from long mangled names to short aliases (eg: object_file_name + incremented_counter); the file with all the mappings can be supplied to a demangler (eg for lldb/gdb debugging etc)
Re: __TIMESTAMP_UNIXEPOCH__ instead of useless __TIMESTAMP__?
On Wed, Jan 24, 2018 at 5:50 PM, rikki cattermole via Digitalmars-d wrote: > On 24/01/2018 7:18 PM, Timothee Cour wrote: >> >> __TIMESTAMP__ is pretty useless: >> `string literal of the date and time of compilation "www mmm dd hh:mm:ss >> "` >> eg:Wed Jan 24 11:03:56 2018 >> which is a weird non-standard format not understood by std.datetime. >> __DATE__ and __TIME__ are also pretty useless. >> >> Could we have __TIMESTAMP_UNIXEPOCH__ (or perhaps >> __TIMESTAMP_SYSTIME__ to get a SysTime) ? > > > That can be a library solution from __TIMESTAMP__. No: __TIMESTAMP__ is missing time zone information, so we can't reconstruct the unix epoch or UTC time from it.
__TIMESTAMP_UNIXEPOCH__ instead of useless __TIMESTAMP__?
__TIMESTAMP__ is pretty useless: `string literal of the date and time of compilation "www mmm dd hh:mm:ss yyyy"` eg: Wed Jan 24 11:03:56 2018, which is a weird non-standard format not understood by std.datetime. __DATE__ and __TIME__ are also pretty useless. Could we have __TIMESTAMP_UNIXEPOCH__ (or perhaps __TIMESTAMP_SYSTIME__ to get a SysTime)? From that, users can convert to whatever format they want.
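For reference, a tiny program showing what today's tokens give you (exact values depend on when you compile):

```d
import std.stdio;

void main()
{
    // prints something like "Wed Jan 24 11:03:56 2018"; note there is no
    // time zone, so it cannot be mapped back to a unix epoch or UTC time
    writeln(__TIMESTAMP__);
    writeln(__DATE__); // e.g. "Jan 24 2018"
    writeln(__TIME__); // e.g. "11:03:56"
}
```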
Re: gRPC in D good idea for GSOC 2018?
> Do we even have protobuf package? https://github.com/msoucy/dproto it could receive some attention, there are pending issues for RPC I've been using msgpackrpc since no gRPC was available. But would be nice to have gRPC. NOTE: capnproto is a very interesting newer alternative to protobuf; https://github.com/capnproto/capnproto-dlang shows: Missing RPC part of Cap'n Proto. Helping out the capnproto project (esp. around RPC) could be another idea. We definitely need a good way to do RPC in D; otherwise it's hard to integrate D with other services. > I would consider them awful in a sense that there is no foundation to build > them on. At best it will be a self-serving artifact poorly fitting with > anything else. But it would enable using D in places that were not previously possible (integrating with services); we could imagine providing a (semi-)stable interface for grpc in D code and change the implementation to use better foundations later On Sun, Jan 21, 2018 at 9:54 PM, Dmitry Olshansky via Digitalmars-d wrote: > On Monday, 22 January 2018 at 04:40:53 UTC, Andrew Benton wrote: >> >> On Monday, 15 January 2018 at 19:28:08 UTC, Ali Çehreli wrote: >>> >>> I know a project where D could benefit from gRPC in D, which is not among >>> the supported languages: >>> >>> https://grpc.io/docs/ >>> >>> Do you think gRPC support is worth adding to GSOC 2018 ideas? >>> >>> https://wiki.dlang.org/GSOC_2018_Ideas >>> >>> Ali >> >> >> An http/2 and gRPC solutions is probably necessary with tools like >> linkerd, envoy, and istio if D wants to be competitive in service mesh and >> distributed applications. >> >> http/2 and/or gRPC are both excellent ideas for GSoC 2018. > > > I would consider them awful in a sense that there is no foundation to build > them on. At best it will be a self-serving artifact poorly fitting with > anything else. > > There is not even a standard way on handling IO as of yet. 
> Basically do we want fiber-aware IO or blocking IO or explicit async with > future/promise? > > Do we even have protobuf package? > >
Re: RFC: generic safety, specialized implementation?
this is very related to ICF (identical code folding), which some linkers do. in summary: gold and lld have options to do that, and it can be unsafe to do if code relies on each function having distinct address; > gold's --icf=safe option only enables ICF for functions that can be proven > not to have their address taken, so code that relies on distinct addresses > will still work. links: https://stackoverflow.com/questions/38662972/does-the-linker-usually-optimize-away-duplicated-code-from-different-c-templat https://stackoverflow.com/questions/15168924/gcc-clang-merging-functions-with-identical-instructions-comdat-folding https://releases.llvm.org/3.9.0/tools/lld/docs/ReleaseNotes.html On Fri, Jan 19, 2018 at 11:29 AM, H. S. Teoh via Digitalmars-d wrote: > On Fri, Jan 19, 2018 at 07:18:22PM +, Luís Marques via Digitalmars-d > wrote: > [...] >> void combulate(T)(T* item) >> { >> // ... lots of code here >> item.next = ...; >> } > [...] > [...] >> This catches the bug, but will have the disadvantage of generating >> code for the various types to which combulate is specialized, even >> though the body of the function template doesn't rely on anything that >> varies between the specializations. > > Yeah, I think this is one area where the way the compiler implements > templates could be improved. Quite often, in a complex template > (whether it's a template function or a template aggregate like a struct > or class) only a subset of the code actually depends on the template > arguments. Nevertheless, the compiler duplicates the entirety of the > code in the generated template. This leads to a lot of template bloat. > > I'm not sure how possible it is in the current compiler implementation, > but it would be nice if the compiler were a bit smarter about > instantiating templates. 
If an instantiated function (could be any > subset of the code, but granularity at the function level is probably > easiest to implement) would not result in code that differs from other > instantiations in the generated IR, only emit the code if it hasn't been > emitted yet; otherwise just alias that particular instantiation to the > previous instantiation. > > Perhaps an enhancement request could be filed for this. > > > [...] >> That is problematic in the context where I'm using this (embedded >> systems). So instead I've started using a mixed approach, with generic >> code that checks for the appropriate type, but delegates the actual >> work to a non-templated function (a fully specialized function >> template in my actual case), except in the cases where the actual code >> is small enough or the specialization significantly improves the >> performance. Something like this: > [...] >> So far this seems to be working well for me. Do you have experience >> writing this kind of code? Do you have any advice that might be >> relevant to this situation? > [...] > > Yeah, basically, this is just doing what I described above by hand. > I've done similar refactorings before, to reduce template bloat. It > certainly seems to work well. However, it would be nice if the compiler > automated such rote work for us. > > > T > > -- > Help a man when he is in trouble and he will remember you when he is in > trouble again.
Re: __ARGS__ : allow access to (stringified) arguments, as C's `#arg` macro
> Not sure I understand this feature. Is it something like: > > auto foo = 3; > auto bar = 4; > log(foo, bar); > > Would print? > > main.d:3 foo=3 > main.d:3 bar=4 main.d:3 foo=3, bar=4 (or whatever formatting is applied given the supplied stringified arguments; that's irrelevant as it can be customized inside user code) > If that's the case then this seems like yet another hack because we don't > have AST macros. I predicted you'd mention AST macros. While I agree AST macros would solve this and other problems, the likelihood of them appearing in D in the near future is slim. My proposal is pragmatic and can be implemented with a short PR in the near future, if people agree on this change. If/when AST macros come to D, we'd still be able to use them for that purpose. Currently there is *zero* good workaround: * either calling code is ugly (https://github.com/dlang/phobos/pull/4318) * or compile time is slowed down a lot (as I showed in my original post, via import(__FILE__)[__LINE__] + redoing the work the compiler already did) This proposal is simple to implement, useful, and exists in other languages (C++ and many others)
Re: __traits(documentation, X)
>> But in any case, the idea that comments affect the file you are compiling >> *right now*, and not some other tool-generated file makes me very nervous. >> Comments are supposed to not affect the code. Consider that with this >> feature, the documentation now becomes part of the API. It's already the case today that documentation can affect the code, because `import(__FILE__)` is legal. Both string import and `__traits(documentation)` are tools and can be used for solving real problems or abused. Let's not invoke (impossible) ideals at the expense of pragmatism; D is supposed to be pragmatic, not dogmatic.
__ARGS__ : allow access to (stringified) arguments, as C's `#arg` macro
I wrote something like that to mimic C's `#arg` preprocessor (stringify argument) for debugging functions, eg:

```
// simplified here:
void log(string file=__FILE__, int line=__LINE__, T)(T a){
    enum arg_stringified = import(file)[line]; // more complex in practice
    writeln(arg_stringified, ":", a);
}
void main(){
    log(1+3); // prints: `1+3:4`
}
```

However: this slows down compilation a lot (in larger programs) and needs potentially complex logic to deal with multiple arguments and UFCS, as we need to redo the job of the parser to get access to the individual stringified elements. We can avoid slowing down compilation by passing file and line at runtime and using readText, but that has the defect of not working if the source changed after the code was compiled. So I'd still like a solution that mimics C's `#arg` preprocessor https://gcc.gnu.org/onlinedocs/gcc-4.0.0/cpp/Stringification.html; it's sad that D is inferior to C in that respect. What I would like instead is __ARGS__, thus allowing:

```
void log(T...)(T a, string file=__FILE__, int line=__LINE__, string[] arg_names=__ARGS__){
    writeln(file, ":", line, " ", zip(arg_names, a)); // or better formatting
}
```
Re: Tuple DIP
On Sun, Jan 14, 2018 at 10:17 AM, Timon Gehr via Digitalmars-d wrote: > On 14.01.2018 19:14, Timothee Cour wrote: >> >> actually I just learned that indeed sizeof(typeof(tuple()))=1, but why >> is that? (at least for std.typecons.tuple) >> maybe worth mentioning that in the DIP (with rationale) > > > It's inherited from C, where all struct instances have size at least 1. > (Such that each of them has a distinct address.) This should definitely be mentioned in the DIP to open it up for discussion; it breaks assumptions like `Tuple!(T0, ..., Tn).sizeof == T0.sizeof + ... + Tn.sizeof`; and even if that were the case for std.typecons.Tuple, having tuples as a builtin could behave differently.
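The size claims above can be checked directly against std.typecons (sizes assume a typical 64-bit target):

```d
import std.typecons : tuple, Tuple;

void main()
{
    // an empty struct instance still occupies one byte, so that every
    // instance has a distinct address (behavior inherited from C/C++)
    static assert(typeof(tuple()).sizeof == 1);
    // hence Tuple's size need not equal the sum of its members' sizes:
    // int(4) + padding(4) + double(8) == 16
    static assert(Tuple!(int, double).sizeof == 16);
}
```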
Re: Tuple DIP
actually I just learned that indeed sizeof(typeof(tuple()))=1, but why is that? (at least for std.typecons.tuple) maybe worth mentioning that in the DIP (with rationale) On Sun, Jan 14, 2018 at 8:18 AM, Timon Gehr via Digitalmars-d wrote: > On 14.01.2018 15:55, Q. Schroll wrote: >> >> On Friday, 12 January 2018 at 22:44:48 UTC, Timon Gehr wrote: >>> >>> [...] >>> This DIP aims to make code like the following valid D: >>> >>> --- >>> auto (a, b) = (1, 2); >>> (int a, int b) = (1, 2); >>> --- >>> [...] >> >> >> How is (1, 2) different from [1, 2] (static array)? > > > The first is a tuple, the second is a static array. This distinction exists > already, it is not proposed in this DIP. > > I.e., you could just as well ask "How is tuple(1, 2) different from [1, 2] > (static array)?". A (probably non-exhaustive) answer is that dynamic arrays > have a 'ptr' property; slicing a static array will produce a dynamic array; > tuples alias this to an 'expand' property that gives the components as an > AliasSeq; an empty tuple will take up 1 byte of space in a struct/class, but > an empty static array will not; empty static arrays have an element type, > 'void' is allowed as the element type of an empty static array; array > literals are _dynamic_ arrays by default, enforcing homogeneous element > types, while tuple literals give you heterogeneous _tuples_, ... > > None of this has much to do with the DIP though. > >> It makes no sense to me to have both and probably a bunch of conversion >> rules/functions. >> ... > > > The DIP proposes no new conversion rules, nor does it _introduce_ tuples. > You'll need to complain about the status quo elsewhere; blaming the DIP for > it makes no sense. > >> Why don't you consider extending (type-homogeneous) static arrays to >> (finite type enumerated) tuples? 
> > > Because tuples and arrays have significant differences as outlined above and > tuple literal syntax is essentially useless if it needs to be accompanied by > explicit type casts or annotations on every use. It's better to not add > tuple syntax at all than to overload square brackets in this ad-hoc manner. > Calling 'tuple(1, 2.0)' is less of a hassle than writing cast([int, > double])[1, 2.0]. This is just not good language design. > >> It solves >> - 1-tuples > > > There is already a solution. > >> - comma operator vs. tuple literal > > > The comma operator is gone. > >> instantly. > > > I think it introduces more problems than it solves. > >> You'd have T[n] as an alias for the tuple type consisting of n objects of >> type T. >> ... > > > So whether or not a tuple is instead a static array (according to the > differences above) depends on whether or not the types happen to be > homogeneous? > > I do understand very well the superficial aesthetic appeal, but this is > unfortunately just not a workable approach. > >> I've written something about that here: >> https://forum.dlang.org/post/wwgwwepihklttnqgh...@forum.dlang.org > > > (The DIP links to that thread.) > >> (sorry for my bad English in that post) >> ... > > > The English is fine. > >> The main reason I'd vote against the DIP: Parenthesis should only be used >> for operator precedence and function calls. > > > You do realize that this translates to "just because"? > (That, and you forgot about template instantiation, type constructor/typeof > application, if/for/while/switch/scope/... statements, type casts, basic > type constructor/new calls, ... (list wildly non-exhaustive).)
how to instrument dmd compiler to dump all references to a given symbol?
how to instrument the dmd compiler to dump all references to a given symbol? eg: for `A.a` it should output the locations marked with HERE. Any help/starting points would be appreciated!

```
struct A{
    int a;
    void fun(){
        a++; // HERE
        alias b=a;
        b++; // HERE
    }
}
void fun(){
    int a; // NOT HERE
    A b;
    b.a++; // HERE
}
```
Re: Tuple DIP
some people have suggested using `{a, b}` instead of `(a,b)` ; this would not work because of ambiguity, eg: `auto fun(){ return {}; }` already has a meaning, so the empty tuple would not work. so `()` is indeed better. On Fri, Jan 12, 2018 at 2:44 PM, Timon Gehr via Digitalmars-d wrote: > As promised [1], I have started setting up a DIP to improve tuple ergonomics > in D: > > https://github.com/tgehr/DIPs/blob/tuple-syntax/DIPs/DIP1xxx-tg.md > > > This DIP aims to make code like the following valid D: > > --- > auto (a, b) = (1, 2); > (int a, int b) = (1, 2); > --- > > --- > foreach((sum, diff); [(1, 2), (4, 3)].map!((a, b) => (a + b, a - b))) > { > writeln(sum, " ", diff); > } > /+ prints: > 3 -1 > 7 1 > +/ > --- > > Before going ahead with it, I'd like some preliminary community input: > > - I'm not yet completely satisfied with the DIP. > (See section "Limitations".) > Please let me know suggestions or further concerns you might have. > > > - There are good example use cases missing. While I'm confident I could > invent a few of them given a little time, I thought maybe I can > expedite the process and make the point more convincingly by asking > for use cases you encountered in your own code. The DIP already > contains an example due to bearophile. > > > [1] https://forum.dlang.org/post/or625h$2hns$1...@digitalmars.com
Re: Tuple DIP
the DIP says this replaces std.typecons.TypeTuple; however, no mention is made of named arguments, so I don't see how it could be a replacement (nor how that would allow for a migration path) without a word on this. eg: what would be the equivalent of this? `writeln(tuple!("x", "y", "z")(2, 3, 4).y);` On Fri, Jan 12, 2018 at 2:44 PM, Timon Gehr via Digitalmars-d wrote: > As promised [1], I have started setting up a DIP to improve tuple ergonomics > in D: > > https://github.com/tgehr/DIPs/blob/tuple-syntax/DIPs/DIP1xxx-tg.md > > > This DIP aims to make code like the following valid D: > > --- > auto (a, b) = (1, 2); > (int a, int b) = (1, 2); > --- > > --- > foreach((sum, diff); [(1, 2), (4, 3)].map!((a, b) => (a + b, a - b))) > { > writeln(sum, " ", diff); > } > /+ prints: > 3 -1 > 7 1 > +/ > --- > > Before going ahead with it, I'd like some preliminary community input: > > - I'm not yet completely satisfied with the DIP. > (See section "Limitations".) > Please let me know suggestions or further concerns you might have. > > > - There are good example use cases missing. While I'm confident I could > invent a few of them given a little time, I thought maybe I can > expedite the process and make the point more convincingly by asking > for use cases you encountered in your own code. The DIP already > contains an example due to bearophile. > > > [1] https://forum.dlang.org/post/or625h$2hns$1...@digitalmars.com
Re: Tuple DIP
it would also solve a long-standing issue of passing runtime optional arguments along with variadic templates, eg:

```
// current: bad, causes template bloat (1 template per call site)
void log(string file=__FILE__, int line=__LINE__, T...)(T a);
// usage:
log(1, "foo");

// with this DIP
void log(T...)(T a, string file=__FILE__, int line=__LINE__);
log((1, "foo"));
```

still not ideal as the syntax is not as nice, but at least it removes template bloat On Fri, Jan 12, 2018 at 2:44 PM, Timon Gehr via Digitalmars-d wrote: > As promised [1], I have started setting up a DIP to improve tuple ergonomics > in D: > > https://github.com/tgehr/DIPs/blob/tuple-syntax/DIPs/DIP1xxx-tg.md > > > This DIP aims to make code like the following valid D: > > --- > auto (a, b) = (1, 2); > (int a, int b) = (1, 2); > --- > > --- > foreach((sum, diff); [(1, 2), (4, 3)].map!((a, b) => (a + b, a - b))) > { > writeln(sum, " ", diff); > } > /+ prints: > 3 -1 > 7 1 > +/ > --- > > Before going ahead with it, I'd like some preliminary community input: > > - I'm not yet completely satisfied with the DIP. > (See section "Limitations".) > Please let me know suggestions or further concerns you might have. > > > - There are good example use cases missing. While I'm confident I could > invent a few of them given a little time, I thought maybe I can > expedite the process and make the point more convincingly by asking > for use cases you encountered in your own code. The DIP already > contains an example due to bearophile. > > > [1] https://forum.dlang.org/post/or625h$2hns$1...@digitalmars.com
Re: Tuple DIP
https://github.com/tgehr/DIPs/blob/tuple-syntax/DIPs/DIP1xxx-tg.md#proposal-6-placeholder-name-_ > Symbols with the name _ should not be inserted into the symbol table. why not use `?` instead of `_` ? no breaking change and should be unambiguous with (expr ? expr : expr) syntax On Fri, Jan 12, 2018 at 2:44 PM, Timon Gehr via Digitalmars-d wrote: > As promised [1], I have started setting up a DIP to improve tuple ergonomics > in D: > > https://github.com/tgehr/DIPs/blob/tuple-syntax/DIPs/DIP1xxx-tg.md > > > This DIP aims to make code like the following valid D: > > --- > auto (a, b) = (1, 2); > (int a, int b) = (1, 2); > --- > > --- > foreach((sum, diff); [(1, 2), (4, 3)].map!((a, b) => (a + b, a - b))) > { > writeln(sum, " ", diff); > } > /+ prints: > 3 -1 > 7 1 > +/ > --- > > Before going ahead with it, I'd like some preliminary community input: > > - I'm not yet completely satisfied with the DIP. > (See section "Limitations".) > Please let me know suggestions or further concerns you might have. > > > - There are good example use cases missing. While I'm confident I could > invent a few of them given a little time, I thought maybe I can > expedite the process and make the point more convincingly by asking > for use cases you encountered in your own code. The DIP already > contains an example due to bearophile. > > > [1] https://forum.dlang.org/post/or625h$2hns$1...@digitalmars.com
Re: Tuple DIP
https://github.com/tgehr/DIPs/blob/tuple-syntax/DIPs/DIP1xxx-tg.md#proposal-4-unpacking-assignments ``` (a, b) = t; // shouldn't it be: auto (a, b) = t; ``` ? On Sat, Jan 13, 2018 at 9:52 AM, Mengu via Digitalmars-d wrote: > On Friday, 12 January 2018 at 22:44:48 UTC, Timon Gehr wrote: >> >> As promised [1], I have started setting up a DIP to improve tuple >> ergonomics in D: >> >> [...] > > > how do we vote for / support this DIP?
Re: can't use ldc calypso on OSX; help needed
could anyone with knowledge of druntime shared library loading please help? (Martin Nowak, klickverbot, Syniurge, etc) I've made some progress but am still failing whenever I try to use C++ std libs, eg: `#include `, see * https://github.com/Syniurge/Calypso/issues/64 ( assert(handle !in _handleToDSO) in setDSOForHandle) * https://github.com/Syniurge/Calypso/issues/63 (error: The module 'ℂcpp.std.type_info' is already defined in libcalypso-ldc-shared.dylib) again, this would make C++ interop much easier and more powerful than the alternatives On Wed, Dec 13, 2017 at 9:12 PM, Timothee Cour wrote: > Has anyone used https://github.com/Syniurge/Calypso on OSX? I'm > running into a basic issue : > https://github.com/Syniurge/Calypso/issues/60 which makes any binary > crash immediately > > Making Calypso work would make integration with C++ libraries much > easier, so it's rather important for dlang.
can't use ldc calypso on OSX; help needed
Has anyone used https://github.com/Syniurge/Calypso on OSX? I'm running into a basic issue : https://github.com/Syniurge/Calypso/issues/60 which makes any binary crash immediately Making Calypso work would make integration with C++ libraries much easier, so it's rather important for dlang.
is there any plan to support shared libraries in OSX?
Supporting shared libraries seems like a pretty important issue, IMO more important than many things being worked on. I can't think of other languages that don't support them; it renders many use cases impossible, preventing more widespread adoption. Is it on the roadmap? It's been a very long-standing issue. Recent issues I reported: https://issues.dlang.org/show_bug.cgi?id=18046 Issue 18046 - dmd -unittest doesn't work when linking against a shared library https://issues.dlang.org/show_bug.cgi?id=18055 Issue 18055 - exception handling cause EXC_BAD_ACCESS when linking against shared libraries using vibe There are other issues I've reported before regarding shared libraries as well. ldc has better support, but then we lose dmd's compile-time speed.
Re: -unittest doesn't work when linking against shared libraries
>> They are on LDC; would be interesting to see whether the problem occurs >> there as well (I'm having issues with my Mac right now, so can't check >> myself until later). just updated the bug report: same issue with ldc! >> But yes, you can't really expect any sort of runtime infrastructure to work >> with shared libraries on DMD/macOS right now. A lot of things do work with shared libraries on OSX; it would be great if this worked too. The fact that shared libraries aren't 100% officially supported doesn't mean we should ignore this issue. Certain use cases are impossible without shared libraries. On Fri, Dec 8, 2017 at 3:55 PM, Jonathan M Davis via Digitalmars-d wrote: > On Friday, December 08, 2017 10:45:29 Steven Schveighoffer via Digitalmars-d > wrote: >> On 12/7/17 9:15 PM, Jonathan M Davis wrote: >> > On Thursday, December 07, 2017 14:34:01 Timothee Cour via Digitalmars-d >> > >> > wrote: >> >> I have a simple test case to reproduce in >> >> https://issues.dlang.org/show_bug.cgi?id=18046 >> >> this seems like a serious bug; what would be a workaround when dealing >> >> with shared libraries? >> > >> > If you're trying to unit test a shared library, I'd suggest just turning >> > it into a static library for the tests. Alternatively, you can write >> > the unit tests in an application that links against the shared library, >> > but that means separating the tests from the code, which isn't ideal. >> >> I think you misunderstand. If there is a shared library being linked >> against, then the tests in your application don't run (see the bug >> report). Definitely a serious bug. I would be interested how they work >> on Linux with shared libraries, maybe it's a mac thing. > > Ah. I did misunderstand then. Yeah, that would be a big problem, and I don't > know how you'd work around that except not using shared libraries for the > unit test build, which may or may not be possible. > > - Jonathan M Davis >
using -unittest leads to undefined errors when linking against non-unittest code
this is obviously a serious error and is a blocker for running unittests. also filed: https://issues.dlang.org/show_bug.cgi?id=18049 managed to reduce it to a short example with zero dependencies:

```
dmd -lib -oflibfun.a fun.d
dmd -main -unittest main.d libfun.a
```

```
Undefined symbols for architecture x86_64:
  "_D3fun__T1ATtZQf8opEqualsMxFNaNbNiNfSQBj__TQBiTtZQBoZb", referenced from:
      _D3fun__T1ATtZQf11__xopEqualsFKxSQBf__TQBeTtZQBkKxQsZb in main.o
ld: symbol(s) not found for architecture x86_64
```

fun.d:
```
module fun;
struct A(T){
    bool opEquals(A!T) const {
        auto a=typeid(A!T);
        return true;
    }
    unittest {
        alias b = A!(ushort);
    }
}
enum ignore = A!int();
```

main.d:
```
module main;
import fun;
```
-unittest doesn't work when linking against shared libraries
I have a simple test case to reproduce in https://issues.dlang.org/show_bug.cgi?id=18046 this seems like a serious bug; what would be a workaround when dealing with shared libraries?
Re: Post about comparing C, C++ and D performance with a real world project
is there a link to the source code (C++, C, D) or to the compile/runtime commands used? It's hard to reach any conclusion without those. On Thu, Dec 7, 2017 at 1:55 AM, Antonio Corbi via Digitalmars-d wrote: > Hello all, > > Jussi Pakkanen (one of the meson build system creators) has written a post > comparing C, C++ and D. Worth a read. > > http://nibblestew.blogspot.com.es/2017/12/comparing-c-c-and-d-performance-with.html > > Antonio.
Re: function for inverse relative path?
relativePath works with un-normalized paths, and I'd want the same for inverseRelativePath, eg: it should work with `/a//b/./c/bar.d` and `c//bar.d` => `/a//b`. Unfortunately buildNormalizedPath(rel) will prepend getcwd to `rel`, so it's a tad more complex than just calling buildNormalizedPath on both arguments; which is why it would be nice to have this in std.path On Wed, Dec 6, 2017 at 8:55 PM, Jonathan M Davis via Digitalmars-d wrote: > On Wednesday, December 06, 2017 17:36:04 Timothee Cour via Digitalmars-d > wrote: >> what would be a robust way to do this `inverseRelativePath`, and >> should that be in std.path? >> >> ``` >> auto a="/a/b/c.d"; >> auto b="b/c.d"; >> assert(inverseRelativePath(a, b) == "/a"); >> assertThrown(inverseRelativePath(a, "c2.d")); >> ``` > > I've never heard of inverse relative paths, but it looks like all you're > doing is looking for a substring match at the end and returning the parts at > the front that don't match. If you're doing that, you could simply do > something like > > enforce(lhs.length >= rhs.length, "some error message"); > if(lhs[rhs.length .. $] == rhs) > return lhs[0 .. rhs.length]; > throw new Exception("some error message"); > > though if you want /a instead of /a/ in your example, some extra code would > have to be added for properly handling trailing slashes, and depending, you > might want to normalize paths first (though typically, that sort of thing is > left up to the caller). It might also need to be enforced that the left-hand > argument is an absolute path. > > - Jonathan M Davis >
Re: function for inverse relative path?
how about:

```
import std.path : baseName, dirName;
import std.exception : enforce;
import std.conv : text;
import std.range : empty;

string inverseRelativePath(string root, string rel){
    while(true){
        if(rel.empty || rel==".") {
            return root;
        }
        auto a1=root.baseName;
        auto a2=rel.baseName;
        enforce(a1==a2, text(root, " ", rel));
        root=root.dirName;
        rel=rel.dirName;
    }
}

unittest{
    import std.exception;
    auto a="/a/b/c.d";
    auto b="b/c.d";
    assert(inverseRelativePath(a, b) == "/a");
    assertThrown(inverseRelativePath(a, "c2.d"));
}
```

On Wed, Dec 6, 2017 at 5:36 PM, Timothee Cour wrote: > what would be a robust way to do this `inverseRelativePath`, and > should that be in std.path? > > ``` > auto a="/a/b/c.d"; > auto b="b/c.d"; > assert(inverseRelativePath(a, b) == "/a"); > assertThrown(inverseRelativePath(a, "c2.d")); > ```
function for inverse relative path?
what would be a robust way to do this `inverseRelativePath`, and should that be in std.path? ``` auto a="/a/b/c.d"; auto b="b/c.d"; assert(inverseRelativePath(a, b) == "/a"); assertThrown(inverseRelativePath(a, "c2.d")); ```
Re: using vibe listenHTTP in a library on OSX causes: core.sync.exception.SyncError@(0): Unable to lock mutex.
on OSX 10.13.1 if that matters; IIRC it was working on previous OSX version. On Thu, Nov 30, 2017 at 4:46 PM, Timothee Cour wrote: > I get: core.sync.exception.SyncError@(0): Unable to lock mutex. > when calling listenHTTP via a library. It works when compiling > everything in a single application without using intermediate library. > > details: > > using: dmd:2.077 > > dub build > > dmd -ofmain -L-Ldir -L-ltest1 -Isource import/main.d > > ./main > Listening for requests on http://[::1]:8080/ > Listening for requests on http://127.0.0.1:8080/ > Please open http://127.0.0.1:8080/ in your browser. > core.sync.exception.SyncError@(0): Unable to lock mutex. > > > source/app.d: > ``` > void fun(){ > import vibe.vibe; > auto settings = new HTTPServerSettings; > settings.port = 8080; > settings.bindAddresses = ["::1", "127.0.0.1"]; > static void hello(HTTPServerRequest req, HTTPServerResponse res){ > res.writeBody("Hello, World!"); > } > listenHTTP(settings, &hello); > logInfo("Please open http://127.0.0.1:8080/ in your browser."); > runApplication(); > } > ``` > > dub.json: > ``` > { > "name": "test1", > "targetType": "staticLibrary", // same with dynamicLibrary > "targetName": "test1", > > "dependencies": { > "vibe-d": "==0.7.31", // same with 0.8.1 > }, > "description": "...", > "copyright": "...", > "authors": ["..."], > "license": "proprietary" > } > ``` > > main.d: > ``` > import app; > void main(){ > fun; > } > ```
using vibe listenHTTP in a library on OSX causes: core.sync.exception.SyncError@(0): Unable to lock mutex.
I get: core.sync.exception.SyncError@(0): Unable to lock mutex. when calling listenHTTP via a library. It works when compiling everything in a single application without using intermediate library. details: using: dmd:2.077 dub build dmd -ofmain -L-Ldir -L-ltest1 -Isource import/main.d ./main Listening for requests on http://[::1]:8080/ Listening for requests on http://127.0.0.1:8080/ Please open http://127.0.0.1:8080/ in your browser. core.sync.exception.SyncError@(0): Unable to lock mutex. source/app.d: ``` void fun(){ import vibe.vibe; auto settings = new HTTPServerSettings; settings.port = 8080; settings.bindAddresses = ["::1", "127.0.0.1"]; static void hello(HTTPServerRequest req, HTTPServerResponse res){ res.writeBody("Hello, World!"); } listenHTTP(settings, &hello); logInfo("Please open http://127.0.0.1:8080/ in your browser."); runApplication(); } ``` dub.json: ``` { "name": "test1", "targetType": "staticLibrary", // same with dynamicLibrary "targetName": "test1", "dependencies": { "vibe-d": "==0.7.31", // same with 0.8.1 }, "description": "...", "copyright": "...", "authors": ["..."], "license": "proprietary" } ``` main.d: ``` import app; void main(){ fun; } ```
Re: HTOD
On Wed, Aug 23, 2017 at 10:38 PM, lobo via Digitalmars-d wrote: > On Thursday, 24 August 2017 at 01:51:25 UTC, Timothee Cour wrote: >>> >>> [...] >> >> >> nim: >> it supports both targetting C++ (as well as C or javascript) and also >> calling C++ via foreign function interface, eg here are some links: >> https://github.com/nim-lang/Nim/wiki/Playing-with-CPP--VTABLE-from-Nim >> >> https://stackoverflow.com/questions/29526958/wrapping-nested-templated-types-in-nim >> https://forum.nim-lang.org/t/1056 >> >> for D, there's a project to support full C++ natively using clang library >> is calypso, unfortunalty I haven't been able to use it, either from OSX or >> ubuntu: it's blocked by https://github.com/Syniurge/Calypso/issues/41, >> hoping someone can help here! >> >> >> >> On Wed, Aug 23, 2017 at 3:57 PM, lobo via Digitalmars-d >> wrote: >>> >>> [...] > > > Thanks, I'll revisit Nim. As a team we're testing new languages as a larger > plan to switch from C++. Nim we struck off 6 months ago because we found it > not quite production ready. > > bye, > lobo Would love to hear more about your reasoning as I'm also occasionally re-visiting it, do you have any writeup?
Re: HTOD
> Do you know another language or tool that can call C++ natively? nim: it supports both targeting C++ (as well as C or javascript) and calling C++ via its foreign function interface, eg here are some links: https://github.com/nim-lang/Nim/wiki/Playing-with-CPP--VTABLE-from-Nim https://stackoverflow.com/questions/29526958/wrapping-nested-templated-types-in-nim https://forum.nim-lang.org/t/1056 for D, the project that supports full C++ natively using the clang library is Calypso; unfortunately I haven't been able to use it, from either OSX or ubuntu: it's blocked by https://github.com/Syniurge/Calypso/issues/41, hoping someone can help here! On Wed, Aug 23, 2017 at 3:57 PM, lobo via Digitalmars-d wrote: > On Wednesday, 23 August 2017 at 13:25:20 UTC, 12345swordy wrote: >> >> On Tuesday, 22 August 2017 at 19:55:53 UTC, Jacob Carlborg wrote: >>> >>> On 2017-08-22 19:47, 12345swordy wrote: >>> Use Clang frontend? >>> >>> >>> DStep [1] is doing that. It handles both GCC and Microsoft extensions. >>> >>> [1] https://github.com/jacob-carlborg/dstep >> >> >> "Doesn't translate C++ at all" >> >> That's very disappointing. IMO, it should at least aim for the c++ 11 >> feature via using clang. > > > Do you know another language or tool that can call C++ natively? I'm looking > for native C++ interop either built in or via tooling. > > bye, > lobo > >
chunkBy with inputRange buggy + proposal for mapCache to fix it
chunkBy can't backtrack on an inputrange (but ok with a forward range, eg by uncommenting a .array below). This can be fixed using the mapCache defined below. Is there another way without introducing mapCache ? main.d: ``` /+ D20170705T174411 $ echo 'abc' | dmd -version=with_mapCache -run main.d works $ echo 'abc' | dmd -version=with_map -run main.d RT error: Attempting to popFront an empty map. +/ import std.algorithm:chunkBy,map,cache; import std.range:walkLength; import std.array:array; import std.stdio; template mapCache(alias fun){ auto mapCache(T)(T a){ struct B{ T a; this(T a){ this.a=a; } void popFront(){ a.popFront; } auto front(){ return fun(a.front); } bool empty(){ return a.empty; } } return B(a); } } version(with_map) alias mymap=map; else version(with_mapCache) alias mymap=mapCache; void test(){ auto lines=stdin .byLineCopy //.array // when uncommenting this, both works .chunkBy!(b=>b[0]) .mymap!(a=>a[1].array) ; auto lines2=lines.array; // prints empty unless we uncomment above writeln(lines2); } void main(){ test; } ```
how to disable all default N-argument constructors for a struct?
How would I disable the following?

```
auto a1=A(1);
auto a2=A(1, "b");

struct A{
    int a;
    string b;
    // @disable default constructors with N (N>=1) arguments
}
```

I'd like to force the user to set fields explicitly, so as to make it safer to add or reorder fields.
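For what it's worth, one workaround I'm aware of (not sure it's the intended idiom): declaring any constructor, even a @disable'd one, suppresses the compiler-generated field-wise struct literals, so A(1) and A(1, "b") stop compiling while default construction and explicit field assignment keep working. A minimal sketch:

```d
struct A{
    int a;
    string b;
    // declaring a constructor (even a disabled one) removes the
    // built-in field-wise literal constructors
    @disable this(int, string);
}

// neither 1-argument nor 2-argument construction compiles anymore
static assert(!__traits(compiles, A(1)));
static assert(!__traits(compiles, A(1, "b")));

void main(){
    A x;        // default construction still allowed
    x.a = 1;    // fields must be set explicitly
    x.b = "b";
}
```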
can't used dynamic shared libraries compiled with -defaultlib=libphobos2.so
How would I use a dynamic shared libraries compiled with -defaultlib=libphobos2.so? Is that supported? I'm running into https://issues.dlang.org/show_bug.cgi?id=17591
Nullable with auto-allocation on access
Is there a construct similar to Nullable that would auto-allocate upon access (set/get) if isNull is true? Use case: https://github.com/msoucy/dproto/issues/117 [getters and setters should work without calling `init` on Nullable fields, as in C++ #117] copied inline for easy reference: ``` in C++ we can write: message MyMessage{ optional Foo foo=1; } message Foo{ optional Bar bar=1; } message Bar{ optional string baz=1; } MyMessage a; a.mutable_foo()->mutable_bar()->set_baz("hello"); in dproto we need to call init on the intermediate fields foo, bar auto initialize_nullable(T:Nullable!U, U)(ref T a){ a=U.init; } MyMessage a; a.foo.initialize_nullable; a.foo.bar.initialize_nullable; a.foo.bar.baz="hello"; Would be nice to not have to call init and allow this: MyMessage a; a.foo.bar.baz="hello"; I believe this could be implemented via a modification of Nullable that would call init on 1st access to a setter. ```
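A sketch of what I have in mind (the `AutoInit` name and the wrapper itself are hypothetical, just illustrating the idea, not dproto's API): a Nullable-like type whose getter lazily assigns T.init on first access, so chained setters work without explicit init calls:

```d
import std.typecons : Nullable;

// Hypothetical wrapper: like Nullable, but the getter lazily
// initializes the payload on first access instead of asserting.
struct AutoInit(T){
    private Nullable!T payload;

    ref T get(){
        if(payload.isNull) payload = T.init;
        return payload.get;
    }
    alias get this; // forwards member access, auto-initializing on the way

    bool isNull() const { return payload.isNull; }
}

// mirrors the MyMessage/Foo/Bar example from the issue
struct Bar{ string baz; }
struct Foo{ AutoInit!Bar bar; }
struct MyMessage{ AutoInit!Foo foo; }

unittest{
    MyMessage a;
    a.foo.bar.baz = "hello"; // no explicit init of foo or bar needed
    assert(a.foo.bar.baz == "hello");
}
```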
proposal: reading multiple files with dmd -stdin and allow specifying (fake) file names
Proposal: support reading multiple files (with fake file names) from stdin. This is a natural extension of https://issues.dlang.org/show_bug.cgi?id=9287 (Issue 9287 - DMD should read from stdin when an input file is "-")

Example use cases:
* simplify writing bug reports and trying out other people's bug reports
* simplify writing unittests that depend on multiple files
* IDE integration where more than 1 file is desired

Proposed syntax: use a delimiter to separate different modules, with a (fake) file name.

```
cat module_list.txt | dmd -stdin
```

with cat module_list.txt:

```
pragma(module, "bar/foo1.d");
module foo1;
void test1(){}

pragma(module, "bar/foo2.d");
module foo2; // A module name is required
void test2(){}
// etc
```

Now when sending a bug report involving multiple files, the user doesn't need to manually re-create the file hierarchy; all they need is to copy-paste into the terminal after dmd -stdin, followed by ^D (or just cat module_list.txt | dmd -stdin).

The other feature here is the introduction of fake file names, which should behave exactly like real file names except for how they are initially read. This'll greatly simplify writing compiler unittests, or writing examples for dub / vibe programs that typically rely on a file hierarchy. This'll also allow documentation unittests involving multiple files. There are probably other use cases based on reading entire packages from a url, etc.

Implementation: similar to https://github.com/dlang/dmd/pull/6880 :
* read from stdin until EOF, then parse into a map string[string] (module_contents[fakefilename]), then insert simple interception logic that searches inside this map before searching the filesystem.
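To illustrate the implementation idea, here's a rough sketch of the splitting step (the `splitModules` helper is hypothetical, not actual dmd code): read the concatenated stream and split on the pragma(module, ...) markers into a fake-filename -> contents map:

```d
import std.regex : regex, matchFirst;
import std.string : lineSplitter;

// Hypothetical helper: split a concatenated multi-module stream on
// `pragma(module, "...");` marker lines into fakefilename -> source.
string[string] splitModules(string input){
    string[string] contents;
    string current; // current fake file name
    string buf;
    auto re = regex(`^pragma\(module,\s*"([^"]+)"\);`);
    foreach(line; input.lineSplitter){
        auto m = line.matchFirst(re);
        if(!m.empty){
            if(current.length) contents[current] = buf;
            current = m[1]; // captured fake file name
            buf = "";
        } else if(current.length){
            buf ~= line ~ "\n";
        }
    }
    if(current.length) contents[current] = buf;
    return contents;
}

unittest{
    import std.algorithm : canFind;
    auto src = "pragma(module, \"bar/foo1.d\");\nmodule foo1;\nvoid test1(){}\n"
             ~ "pragma(module, \"bar/foo2.d\");\nmodule foo2;\nvoid test2(){}\n";
    auto m = splitModules(src);
    assert(m.length == 2);
    assert(m["bar/foo1.d"].canFind("module foo1;"));
    assert(m["bar/foo2.d"].canFind("module foo2;"));
}
```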
Re: anyone using msgpackrpc-d ? it's currently broken and doesn't seem maintained
> Msgpack rpc with vibe.d works. We used it. you mean with msgpackrpc-d? If so, it works, until you run into https://github.com/msgpack-rpc/msgpack-rpc-d/issues/16 (when server sends >= 4090 bytes, client hangs forever). > we switched towards http + asdf not sure what asdf is? On Wed, Jun 14, 2017 at 4:42 PM, David Nadlinger via Digitalmars-d wrote: > On Wednesday, 14 June 2017 at 19:55:49 UTC, Yawniek wrote: >> >> Msgpack rpc with vibe.d works. We used it. >> Its extremely fast, youll never get that speed with thrift. > > > I don't think Thrift is fundamentally much different in performance than > MessagePack, see e.g. https://github.com/thekvs/cpp-serializers. Do you have > data to suggest otherwise? (I of course originally wrote Thrift/D, but that > was long enough ago that I like to think I'm not particularly biased either > way.) > > As for the performance of the RPC server, I'd think that just hooking vibe.d > sockets into Thrift should give you similar performance to > msgpack-rpc/vibe.d. > > — David
dmd: why not use fully qualified names for types in error messages?
eg: Error: no property 'IF_gray' for type 'ImageFormat' => Error: no property 'IF_gray' for type 'foo.bar.ImageFormat' and also, why not show where the symbol is defined? would PR's for that be accepted? is that hard to implement?
Re: anyone using msgpackrpc-d ? it's currently broken and doesn't seem maintained
Thanks for the link; last answer was from 2014, and doesn't look like the problem was close to being solved. Any help with how to address this issue would be really appreciated! On Mon, Jun 12, 2017 at 6:35 PM, Domain via Digitalmars-d wrote: > On Monday, 12 June 2017 at 18:12:38 UTC, Timothee Cour wrote: >> >> any help on this would be most welcome: >> https://github.com/msgpack-rpc/msgpack-rpc-d/issues/16 >> >> Unfortunately I find the RPC support in D lacking. Having a good RPC >> integration for D is key for production use of D where one wants to >> integrate with other existing services (that could be written in other >> languages for eg) > > > http://forum.rejectedsoftware.com/groups/rejectedsoftware.vibed/thread/21312/
anyone using msgpackrpc-d ? it's currently broken and doesn't seem maintained
any help on this would be most welcome: https://github.com/msgpack-rpc/msgpack-rpc-d/issues/16 Unfortunately I find the RPC support in D lacking. Having a good RPC integration for D is key for production use of D where one wants to integrate with other existing services (that could be written in other languages for eg)