Re: Walter on Twitter
On Tuesday, 18 April 2023 at 03:02:16 UTC, Walter Bright wrote: On 4/15/2023 6:49 AM, Monkyyy wrote: By all means fund and promote *live coding* or teaching videos if you want to do outreach, but shitposting on twitter won't do anything of value Mike has also suggested I do some live coding videos. That would be awesome! I'm sure many people would find it very interesting to see how you work - both your approach to solving problems and day-to-day techniques like your own Micro Emacs D editor.
Re: Need Advice: Union or Variant?
On Thursday, 17 November 2022 at 20:54:46 UTC, jwatson-CO-edu wrote: I have an implementation of the "[Little Scheme](https://mitpress.mit.edu/9780262560993/the-little-schemer/)" educational programming language written in D, [here](https://github.com/jwatson-CO-edu/SPARROW). It has many problems, but the one I want to solve first is the size of the "atoms" (units of data). `Atom` is a struct that has fields for every possible type of data that the language supports. This means that a bool `Atom` unnecessarily takes up space in memory with fields for number, string, structure, etc. Here is the [definition](https://github.com/jwatson-CO-edu/SPARROW/blob/main/lil_schemer.d#L55):

```d
enum F_Type{
    CONS, // Cons pair
    STRN, // String/Symbol
    NMBR, // Number
    EROR, // Error object
    BOOL, // Boolean value
    FUNC, // Function
}

struct Atom{
    F_Type  kind; // What kind of atom this is
    Atom*   car;  // - Left  `Atom` pointer
    Atom*   cdr;  // - Right `Atom` pointer
    double  num;  // - Number value
    string  str;  // - String value, D-string underlies
    bool    bul;  // - Boolean value
    F_Error err = F_Error.NOVALUE; // Error code
}
```

Question: **Where do I begin my consolidation of space within `Atom`? Do I use unions or variants?**

In general, I recommend [`std.sumtype`](https://dlang.org/phobos/std_sumtype), as it is one of the best D libraries for this purpose. It is implemented as a struct containing two fields: the `kind` and a `union` of all the possible types. That said, one difficulty you are likely to face is refactoring your code to use the [`match`](https://dlang.org/phobos/std_sumtype#.match) and [`tryMatch`](https://dlang.org/phobos/std_sumtype#.tryMatch) functions, as `std.sumtype.SumType` does not expose the underlying kind field.
Other notable alternatives are: * [`mir-core`](https://code.dlang.org/packages/mir-core)'s `mir.algebraic`: http://mir-core.libmir.org/mir_algebraic.html * [`taggedalgebraic`](https://code.dlang.org/packages/taggedalgebraic): https://vibed.org/api/taggedalgebraic.taggedalgebraic/
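For illustration, here is a minimal sketch of how `std.sumtype` would replace the hand-written tagged struct (the member types here are simplified assumptions, not the actual SPARROW definitions):

```d
import std.sumtype;

// Hypothetical, simplified atom type: SumType manages the tag + union pair for us.
alias Atom = SumType!(double, string, bool);

string describe(Atom a)
{
    // `match` dispatches on the type currently held by the sum type.
    return a.match!(
        (double n) => "number",
        (string s) => "string",
        (bool b)   => "bool"
    );
}

void main()
{
    assert(describe(Atom(3.14)) == "number");
    assert(describe(Atom("car")) == "string");
    assert(describe(Atom(true)) == "bool");
}
```

Since only one member is active at a time, `Atom.sizeof` shrinks to roughly the largest member plus the tag, instead of the sum of all members.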
Re: DConf '22 Day One Videos
On Monday, 26 September 2022 at 12:48:54 UTC, Mike Parker wrote: I've finished editing and uploading all of the videos for Day One of DConf. You can find them here: https://www.youtube.com/playlist?list=PLIldXzSkPUXVDzfnBlXcqZF6GB_ejjkEn I hope to be able to pick up the pace a bit after this week. Awesome! Thank you and everyone else for the hard work in making DConf possible!
Re: Binary Literals Not Going Anywhere
On Monday, 26 September 2022 at 04:40:02 UTC, Mike Parker wrote: He confirmed that they will not be deprecated. If you're using them today, you can keep using them tomorrow. Great!
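For reference, a quick illustration of the feature in question (binary literals with the `0b` prefix, including underscore digit separators, which D allows in all integer literals):

```d
void main()
{
    int flags = 0b1010_0110; // binary literal with a digit separator
    assert(flags == 0xA6);   // same value in hex
    assert(0b1111 == 15);
}
```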
Re: importC | Using D with Raylib directly | No bindings | [video]
On Monday, 8 August 2022 at 05:39:29 UTC, Ki Rill wrote: On Sunday, 7 August 2022 at 13:53:12 UTC, Steven Schveighoffer wrote: Also, IIRC, in the latest master of DMD, there is an attempt to run the system preprocessor automatically. But I'm not sure of the state of it, or if it's in the latest release (which we are having trouble publishing still). I've read about this somewhere. It would be nice to have the preprocessor run automatically and, maybe in the future, add those #defines for the user. I think this was recently documented here: https://dlang.org/spec/importc.html#preprocessor Give it a read, try it out and let us know how it works out!
Re: T... args!
On Thursday, 9 December 2021 at 00:36:29 UTC, Salih Dincer wrote: On Wednesday, 8 December 2021 at 23:47:07 UTC, Adam Ruppe wrote: On Wednesday, 8 December 2021 at 23:43:48 UTC, Salih Dincer wrote: I think you meant to say void foo(string[] args...) {} Not exactly...

```d
alias str = immutable(char)[];

void foo(str...)(str args)
{
    foreach(ref a; args)
    {
        a.writeln('\t', typeof(a).stringof);
    }
    str s; // "Amazing! ---v";
    s.writeln(": ", typeof(s).stringof);
}
```

Unlike [value template parameters][0] (which consist of an existing type + identifier), all other template parameter forms introduce a brand new identifier in the template scope that is completely unrelated to whatever other types you may have outside in your program (including the ones implicitly imported from `object.d` like `string`). The `str...` in your `foo` function introduces a [template sequence parameter][1] which shadows the `str` `alias` you have above. The `str s;` line declares a variable of the `str` type sequence, so it's essentially a tuple (*). See:

```d
import std.stdio : writeln;

alias str = immutable(char)[];

void foo(str...)(str args)
{
    foreach(ref a; args)
    {
        a.writeln('\t', typeof(a).stringof);
    }
    str s; // "Amazing! ---v";
    s.writeln(": ", typeof(s).stringof);
}

void main()
{
    foo(1, true, 3.5);
}
```

```
1	int
true	bool
3.5	double
0falsenan: (int, bool, double)
```

(*) Technically, you can attempt to explicitly instantiate `foo` with non-type template arguments, but it will fail to compile, since:

* The `args` function parameter demands `str` to be a type (or type sequence)
* The `s` function local variable demands `str` to be a type (or type sequence)

You can either remove `args` and `s`, or filter the sequence to keep only the types:

```d
import std.meta : Filter;

enum bool isType(alias x) = is(x);
alias TypesOnly(args...) = Filter!(isType, args);

void foo(str...)(TypesOnly!str args)
{
    static foreach(s; str)
        pragma (msg, s);
}

void main()
{
    static immutable int a = 42;
    foo!(int, double, string)(3, 4.5, "asd");
    pragma (msg, ``);
    foo!(a, "asd", bool, foo, int[])(true, []);
}
```

```
int
double
string

42
asd
bool
foo(str...)(TypesOnly!str args)
int[]
```

[0]: https://dlang.org/spec/template.html#template_value_parameter
[1]: https://dlang.org/spec/template.html#variadic-templates
Re: Any workaround for "closures are not yet supported in CTFE"?
On Wednesday, 8 December 2021 at 17:05:49 UTC, Timon Gehr wrote: On 12/8/21 9:07 AM, Petar Kirov [ZombineDev] wrote: [...] Nice, so the error message is lying. Closure support deserves way more love in the compiler. I'm quite surprised that that hack worked, given that various very similar rearrangements that I tried before didn't. This is a bit more complete:

```d
import std.stdio, std.traits, core.lifetime;

auto partiallyApply(alias fun, C...)(C context)
{
    return new class(move(context))
    {
        C context;
        this(C context)
        {
            foreach(i, ref c; this.context)
                c = move(context[i]);
        }
        auto opCall(ParameterTypeTuple!fun[context.length .. $] args)
        {
            return fun(context, forward!args);
        }
    }.opCall;
}
// [snip]
```

Thanks, I was struggling to find a good name for this building block. `partiallyApply` is a natural fit. Also thanks for the move / forwarding icing.
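A hypothetical usage sketch of the `partiallyApply` building block from the post above (the `add` function and the calls are my own illustration, not from the thread):

```d
import std.traits, core.lifetime;

// Same shape as the building block discussed above.
auto partiallyApply(alias fun, C...)(C context)
{
    return new class(move(context))
    {
        C context;
        this(C context)
        {
            foreach(i, ref c; this.context)
                c = move(context[i]);
        }
        auto opCall(ParameterTypeTuple!fun[context.length .. $] args)
        {
            return fun(context, forward!args);
        }
    }.opCall;
}

int add(int a, int b) { return a + b; }

void main()
{
    // Fix the first argument of `add` to 2; the result is callable
    // with the remaining parameters.
    auto add2 = partiallyApply!add(2);
    assert(add2(3) == 5);
    assert(add2(40) == 42);
}
```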
Re: Any workaround for "closures are not yet supported in CTFE"?
On Wednesday, 8 December 2021 at 12:17:42 UTC, Stanislav Blinov wrote: On Wednesday, 8 December 2021 at 08:07:59 UTC, Petar Kirov [ZombineDev] wrote:

```d
interface ICallable { void opCall() const; }
alias Action = void delegate();
struct A { Action[] dg; }
```

At this point why not just call a spade a spade and store an array of ICallables directly? :) I mean, why store fat pointers to fat pointers? Initially that's exactly what I tried, and it worked if the result was stored as `static const` / `static immutable`, but it didn't when using `enum`:

```
onlineapp.d(39): Error: variable `onlineapp.main.a` : Unable to initialize enum with class or pointer to struct. Use static const variable instead.
```

```d
interface ICallable
{
    void opCall() const;
}

auto makeDelegate(alias fun, Args...)(auto ref Args args)
{
    return new class(args) ICallable
    {
        Args m_args;
        this(Args p_args) { m_args = p_args; }
        void opCall() const { fun(m_args); }
    };
}

alias Action = void delegate();

ICallable createDelegate(string s)
{
    import std.stdio;
    return makeDelegate!((string str) => writeln(str))(s);
}

struct A
{
    ICallable[] dg;
}

A create()
{
    A a;
    a.dg ~= createDelegate("hello");
    a.dg ~= createDelegate("buy");
    return a;
}

void main()
{
    enum a = create();
    foreach(dg; a.dg)
        dg();
}
```

I didn't have time to fully investigate the issue and report this compiler limitation.
Re: Any workaround for "closures are not yet supported in CTFE"?
On Wednesday, 8 December 2021 at 07:55:55 UTC, Timon Gehr wrote: On 08.12.21 03:05, Andrey Zherikov wrote: On Tuesday, 7 December 2021 at 18:50:04 UTC, Ali Çehreli wrote: I don't know whether the workaround works with your program but that delegate is the equivalent of the following struct (the struct should be faster because there is no dynamic context allocation). Note the type of 'dg' is changed accordingly: The problem with the struct-based solution is that I will likely be stuck with only one implementation of the delegate (i.e. the opCall implementation). Or I'll have to implement dispatching inside opCall based on some "enum" by myself, which seems weird to me. Do I miss anything? This seems to work, maybe it is closer to what you are looking for.

```d
import std.stdio, std.traits, core.lifetime;

struct CtDelegate(R, T...)
{
    void* ctx;
    R function(T, void*) fp;

    R delegate(T) get()
    {
        R delegate(T) dg;
        dg.ptr = ctx;
        dg.funcptr = cast(typeof(dg.funcptr)) fp;
        return dg;
    }
    alias get this;

    this(void* ctx, R function(T, void*) fp)
    {
        this.ctx = ctx;
        this.fp = fp;
    }

    R opCall(T args) { return fp(args, ctx); }
}

auto makeCtDelegate(alias f, C)(C ctx)
{
    static struct Ctx { C ctx; }
    return CtDelegate!(ReturnType!(typeof(f)), ParameterTypeTuple!f[0 .. $-1])(
        new Ctx(forward!ctx),
        (ParameterTypeTuple!f[0 .. $-1] args, void* ctx) {
            auto r = cast(Ctx*) ctx;
            return f(r.ctx, forward!args);
        });
}

struct A { CtDelegate!void[] dg; }

auto createDelegate(string s)
{
    return makeCtDelegate!((string s) { s.writeln; })(s);
}

A create()
{
    A a;
    a.dg ~= createDelegate("hello");
    a.dg ~= createDelegate("buy");
    return a;
}

void main()
{
    static a = create();
    foreach(dg; a.dg) dg();
}
```

Incidentally, yesterday I played with a very similar solution.
Here's my version: https://run.dlang.io/gist/PetarKirov/f347e59552dd87c4c02d0ce87d0e9cdc?compiler=dmd

```d
interface ICallable
{
    void opCall() const;
}

auto makeDelegate(alias fun, Args...)(auto ref Args args)
{
    return new class(args) ICallable
    {
        Args m_args;
        this(Args p_args) { m_args = p_args; }
        void opCall() const { fun(m_args); }
    };
}

alias Action = void delegate();

Action createDelegate(string s)
{
    import std.stdio;
    return makeDelegate!((string str) => writeln(str))(s).opCall;
}

struct A
{
    Action[] dg;
}

A create()
{
    A a;
    a.dg ~= createDelegate("hello");
    a.dg ~= createDelegate("buy");
    return a;
}

void main()
{
    enum a = create();
    foreach(dg; a.dg)
        dg();
}
```
Re: Skia library for D, porting from SkiaSharp API.
On Monday, 6 December 2021 at 09:08:20 UTC, zoujiaqing wrote: SkiaD is a cross-platform 2D graphics API for D based on Mono's SkiaSharp. It provides a comprehensive 2D API that can be used across mobile, server and desktop models to render images. https://github.com/gearui/skiad Thanks for sharing! About two years ago, I started creating [D bindings][1] for the C API of Skia, but I didn't have time to finish that project. Actually, if I remember correctly, all of the C API is covered, but I haven't tested it. My main motivation was to play with creating a Flutter-like GUI library, but I had to put this task on hold. Initially, one of the challenges was figuring out a good model for building and distributing Skia for users of the Dub package. Nowadays, if I were to resume work on this, I would most likely use a [Nix][2] + Dub combo, since Nix solves the problem of building third-party libraries and including them in another project in a very clean way (among many other cool properties). [1]: https://github.com/PetarKirov/skia-d [2]: https://nixos.org/
Re: GDC has just landed v2.098.0-beta.1 into GCC
On Tuesday, 30 November 2021 at 19:37:34 UTC, Iain Buclaw wrote: Hi, The latest version of the D language has [now landed](https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=5fee5ec362f7a243f459e6378fd49dfc89dc9fb5) in GCC. [..] Amazing achievement! Congrats Iain!
Re: Was this supposed to be allowed?
On Wednesday, 15 September 2021 at 13:52:40 UTC, z wrote: ```D float[2] somevalue = somefloat3value[] + cast(Unqual!float[2]) [somesharedfloatarray1[i],somesharedfloatarray2[ii]]; ``` Older LDC/DMD releases never complained but now that i upgraded DMD, DMD-compiled builds suffer from runtime assert error `core.internal.array.operations.arrayOp!(float[], float[], float[], "+", "=").arrayOp at .\src\druntime\import\core\internal\array\operations.d(45) : Mismatched array lengths for vector operation ` Explicitly specifying `somefloat3value[0..2]` now works, and it seems that this assert check is an addition to a recent DMD version's `druntime`. Does it mean that this was a recent change in the language+runtime or just a retroactive enforcement of language rules that didn't use to be enforced? Big thanks. The history is roughly as follows:

* Between dmd 2.065 and 2.076 (inclusive), this used to fail at runtime with the message "Array lengths don't match for vector operation: 2 != 3"
* dmd 2.077 included [druntime PR 1891][1], which was a ground-up re-implementation of the way array operations are implemented and in general a very welcome improvement. Unfortunately, that PR didn't include checks to ensure that all arrays have equal length (or perhaps it had insufficient checks, I didn't dig into the details).
* 2020-08-04: The issue was reported: https://issues.dlang.org/show_bug.cgi?id=21110
* 2021-08-09: A PR that fixes the issue was merged: https://github.com/dlang/druntime/pull/3267
* 2021-08-09: The fix was released in 2.097.2

In summary, the validation was always supposed to be there, but between 2.077.0 and 2.097.1 it wasn't. [1]: https://github.com/dlang/druntime/pull/1891
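To illustrate the rule being enforced (my own minimal example, not from the thread): both sides of an array vector operation must have equal lengths, and slicing makes the lengths explicit:

```d
void main()
{
    float[3] src = [1, 2, 3];
    float[2] dst;

    // dst[] = src[] + src[]; // lengths 2 vs 3: fails the runtime length check

    // Slicing both operands to the destination's length is always valid:
    dst[] = src[0 .. 2] + src[1 .. 3]; // element-wise: [1+2, 2+3]
    assert(dst == [3.0f, 5.0f]);
}
```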
Re: github copilot and dlang
On Monday, 5 July 2021 at 15:56:38 UTC, Antonio wrote: Has someone tried github copilot (https://copilot.github.com/) with dlang? Access to the preview could be requested and, I think, main dlang team members could bypass the waitlist easily. I suspect that the "low" volume of dlang code (used to train OpenAI) compared to other languages could impact in the support (if there is any). Anyway, it could be really interesting to see how Copilot faces templates, traits, ... I was wondering the same, but I haven't gotten the chance to try it - still on wait list since last week. On the topic of GH waitlists, I'm still waiting for access to their Codespaces feature since December of last year. I'd love to build a codespace template for D, along the lines of the setup-dlang GH action.
Re: On the D Blog--Symphony of Destruction: Structs, Classes, and the GC
On Thursday, 18 March 2021 at 09:21:27 UTC, Per Nordlöw wrote: On Thursday, 4 March 2021 at 13:54:48 UTC, Mike Parker wrote: The blog: https://dlang.org/blog/2021/03/04/symphony-of-destruction-structs-classes-and-the-gc-part-one/ Btw, what is the motive behind D's GC not being able to correctly handle GC allocations in class destructors. Is it by design or because of limitations in D's current GC implementation? Just implementation deficiency. I think it is fixable with some refactoring of the GC pipeline. One approach would be, (similar to other language implementations - see below), that GC-allocated objects with destructors should be placed on a queue and their destructors be called when the GC has finished the collection. Afterwards, the GC can release their memory during the next collection. And how does this relate to exception-throwing destructors in other managed languages such as C# and Go; are they forbidden or allowed and safe thanks to a more resilient GC? TL;DR * Go doesn't have exceptions or destructors. You can attach a finalizer function to an object via [0] which will be called before the object will be collected. After the associated finalizer is called, the object is marked as reachable again and the finalizer function is unset. Since all finalizers are called in a separate goroutine, it is not an issue to allocate memory from them, as technically this happens separately from the actual garbage collection. * There is something like destructors (aka finalizers) in C#, but they can't be used to implement the RAII design pattern. They are even less deterministic than destructors of GC-allocated classes in D, as they're only called automatically by the runtime and by an arbitrary thread. Their runtime is designed in such a way that memory allocation in destructors is not a problem at all; however, the default policy is that thrown exceptions terminate the process, though that could be configured differently.
--- Instead of destructors, the recommended idiom in Go is to wrap resources in wrapper structs and implement a Close() method for those types, which the user of the code must not forget to call manually and sometimes check for errors. They have `defer`, which is similar to D's `scope (exit)`. Similar to C#, finalizers in Go are not reliable and should probably only be used as a safety net to detect whether an object was not closed manually. If a finalizer takes a long time to complete a clean-up task, it is recommended that it spawns a separate goroutine.

--- C# has 2 concepts: finalizers and the IDisposable interface. C# finalizers [1][2] are defined using the C++ destructor syntax `~T()` (rather than D's `~this()`), which is lowered to a method that overrides the Object.Finalize() base method like so:

```csharp
class Resource
{
    ~Resource() { /* custom code */ }
} // user code

// gets lowered to:
class Resource
{
    protected override void Finalize()
    {
        try { /* custom code */ }
        finally { base.Finalize(); }
    }
}
```

Which means that finalization happens automatically from the most-derived class to the least derived one. This lowering also implies that the implementation is tolerant to exceptions. It is a compile-time error to manually define a `Finalize` method. Finalizers can only be defined by classes (reference types) and not structs (value types). Finalizers are only called automatically (there's no analog to D's `destroy` or C++'s `delete`) and the only way to force that is using `System.GC.Collect()`, which is almost always a bad idea. Finalizers used to be called at the end of the application when targeting .NET Framework, but the docs say that this is no longer the case with the newer .NET Core, though this may have been addressed after the docs were written. The implementation may call finalizers from any thread, so your code must be prepared to handle that.
Given that finalizers are unsuitable for deterministic resource management, it is strongly recommended that class authors implement the IDisposable [3] interface. Users of classes that implement IDisposable can either manually call IDisposable.Dispose() or they can use the `using` statement [4], which is lowered to something like this:

```csharp
using var r1 = new Resource1();
using var r2 = new Resource2();
/* some code */

// vvv

{
    Resource1 r1 = new Resource1();
    try
    {
        {
            Resource2 r2 = expression;
            try
            {
                /* some code */
            }
            finally
            {
                if (r2 != null) ((IDisposable)r2).Dispose();
            }
        }
    }
    finally
    {
        if (r1 != null) ((IDisposable)r1).Dispose();
    }
}
```

IDisposable.Dispose() can be called multiple times (though this is discouraged), so your implementation of this interface must be able to handle this. The finalizer should call the Dispose() function as a safety net.

[0]:
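For comparison, here is a small sketch (my own, not from the thread) of D's deterministic counterparts to Go's `defer` and C#'s `using`: struct destructors and `scope (exit)`, both of which run at a statically known point:

```d
struct Resource // hypothetical RAII wrapper
{
    bool* closed;
    ~this() { if (closed) *closed = true; } // runs deterministically at scope exit
}

void main()
{
    bool closed;
    {
        auto r = Resource(&closed);
        // Like Go's defer / C#'s using; guards run in reverse declaration order,
        // so this fires before r's destructor:
        scope (exit) assert(!closed);
    }
    assert(closed); // the destructor has already run when the scope ended
}
```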
Re: Visual D 1.1.0 released
On Wednesday, 10 March 2021 at 07:29:44 UTC, Rainer Schuetze wrote: On 06/03/2021 12:55, Imperatorn wrote: On Saturday, 6 March 2021 at 06:59:28 UTC, Rainer Schuetze wrote: On 05/03/2021 12:26, Imperatorn wrote: On Friday, 5 March 2021 at 10:57:05 UTC, Kagamin wrote: On Thursday, 4 March 2021 at 13:42:47 UTC, Imperatorn wrote: https://filebin.net/19gupoeedfdjx5tx One GIF is the behaviour in C# I would like to have in D as well with static if, and the other is displaying typeid on hover. The second is a debug session. Visual Studio doesn't show type information in debug sessions for C# either, only variable name and value. True, but could it? Visual D already does that with the help of the semantic highlighting: if an identifier is classified as a type or compile time value, it suppresses the debugger data tool tip and presents the usual one. Oh, I see. What about dub integration? How much effort would it be to have something similar to what code-d has in vsc? Guesstimation? I'm not much of a dub user. Last time I checked, using it as a package manager was fine, but not as a build tool. Dependency checks were incomplete and rather slow. The visuald project generation is pretty dated and doesn't support multiple configurations, which kind of breaks the usual VS workflow. I think for better integration dub project generation needs to be improved (and extended to vcxproj files), or Visual D has to do it itself from "dub describe" (if that's possible). The latter would also allow seamless updates of the project in the background. Then, integration of package management can be considered. As far as I remember (circa 2015), Mono-D [0] was the IDE with the best Dub support - you could just open dub.json files as if they were project files (sln/csproj). This was by far the most seamless experience back when I was using IDEs more heavily. Visual D could also use Dub as a library, similar to [1][2].
Also it would be nice to integrate code.dlang.org, just like NuGet is integrated for .NET in VS. [0]: https://wiki.dlang.org/Mono-D [1]: https://github.com/Pure-D/workspace-d [2]: https://github.com/atilaneves/reggae/tree/master/src/reggae/dub/interop
Re: Checking for manifest constants
On Friday, 5 March 2021 at 08:23:09 UTC, Bogdan wrote: I was using a trick with dmd to check for manifest constants which worked until dmd v2.094. Yesterday I tried it on the latest compiler and it failed with:

```
source/introspection/manifestConstant.d(37,28): Error: need this for name of type string
source/introspection/type.d(156,13): Error: value of this is not known at compile time
```

any ideas how to fix it? or, is it a bug with dmd?

```d
/// Check if a member is a manifest constant
bool isManifestConstant(T, string name)() {
    mixin(`return is(typeof(T.init.` ~ name ~ `)) && !is(typeof(&T.init.` ~ name ~ `));`);
}

/// ditto
bool isManifestConstant(alias T)() {
    return is(typeof(T)) && !is(typeof(&T));
}

enum globalConfig = 32;
int globalValue = 22;

unittest {
    struct Test {
        enum config = 3;
        int value = 2;
    }

    static assert(isManifestConstant!(Test.config));
    static assert(isManifestConstant!(Test, "config"));
    static assert(isManifestConstant!(globalConfig));

    static assert(!isManifestConstant!(Test.value));
    static assert(!isManifestConstant!(Test, "value"));
    static assert(!isManifestConstant!(globalValue));
}

void main() {}
```

I suggest this:

```d
enum globalConfig = 32;
int globalValue = 22;
immutable globaImmutablelValue = 22;

enum isManifestConstant(alias symbol) =
    __traits(compiles, { enum e = symbol; }) &&
    !__traits(compiles, { const ptr = &symbol; });

unittest {
    struct Test {
        enum config = 3;
        int value = 2;
    }

    static assert(isManifestConstant!(Test.config));
    static assert(isManifestConstant!(mixin("Test.config")));
    static assert(isManifestConstant!(globalConfig));
    static assert(isManifestConstant!(mixin("globalConfig")));

    static assert(!isManifestConstant!(Test.value));
    static assert(!isManifestConstant!(mixin("Test.value")));
    static assert(!isManifestConstant!(globalValue));
    static assert(!isManifestConstant!(mixin("globalValue")));
    static assert(!isManifestConstant!(globaImmutablelValue));
    static assert(!isManifestConstant!(mixin("globaImmutablelValue")));
}
```
Re: DIP 1034--Add a Bottom Type (reboot)--Formal Assessment Concluded
On Tuesday, 16 February 2021 at 07:07:09 UTC, Mike Parker wrote: When I emailed Walter and Atila to officially launch the Formal Assessment of DIP 1034, "Add a Bottom Type (reboot)", I expected it would be three or four weeks before I received their final decision. So I was surprised when Walter replied two days later with the following response: "Accepted with pleasure and enthusiasm. This is what DIPs should be like. I intuitively felt that a bottom type was right for D, but failed to express it in DIP1017. Dennis has done it right." Atila was on vacation at the time, but as soon as he got back he responded: "Seconded." Congratulations to Dennis Korpel for a job well done, and thanks to everyone who provided feedback on this DIP from the Draft Review through to the Final Review. Congratulations, Dennis! Having used the `never` type in other languages (e.g. TypeScript), I'm very much looking forward to having it in D!
Re: GC.addRange in pure function
On Friday, 12 February 2021 at 12:17:13 UTC, Per Nordlöw wrote: On Tuesday, 9 February 2021 at 03:05:10 UTC, frame wrote: On Sunday, 7 February 2021 at 14:13:18 UTC, vitamin wrote: Why using 'new' is allowed in pure functions but calling GC.addRange or GC.removeRange isn't allowed? Would making `new T[]` inject a call to `GC.addRange` based on `T` (and maybe also T's attributes) be a step forward? `GC.addRange` is only used for memory allocated outside of the GC that can hold references to GC-allocated objects. Since `new T[]` uses the GC, all the type information (typeinfo) is already there (*), so `GC.addRange` is unnecessary and even wrong, because when the GC collects the memory it won't call `GC.removeRange` on it. Implementation-wise, metadata about GC-allocated memory is held in the GC's internal data structures, whereas the GC roots and ranges are stored in separate malloc/free-managed containers. (*) Currently `new T[]` is lowered to an `extern (C)` runtime hook and the compiler passes `typeid(T)` to it. After this the call chain is:

```
_d_newarray{T,iT,mTX,miTX} -> _d_newarrayU -> __arrayAlloc -> GC.qalloc ->
ConservativeGC.mallocNoSync -> Gcx.alloc -> {small,big}Alloc -> setBits
```
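As a sketch of the intended use of `GC.addRange` (my own minimal example): it is for non-GC memory that may store pointers into the GC heap:

```d
import core.memory : GC;
import core.stdc.stdlib : free, malloc;

void main()
{
    // malloc'd block that will hold pointers to GC-allocated objects:
    enum n = 4;
    auto p = cast(void**) malloc(n * (void*).sizeof);
    GC.addRange(p, n * (void*).sizeof); // let the GC scan this block for roots
    scope (exit)
    {
        GC.removeRange(p); // must be removed before freeing
        free(p);
    }

    p[0] = cast(void*) new int(42); // GC pointer kept alive via the registered range
    assert(*cast(int*) p[0] == 42);
}
```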
Re: GC.addRange in pure function
On Friday, 12 February 2021 at 19:48:01 UTC, vitamin wrote: On Wednesday, 10 February 2021 at 16:25:44 UTC, Petar Kirov [ZombineDev] wrote: On Wednesday, 10 February 2021 at 13:44:53 UTC, vit wrote: [...] TL;DR Yes, you can, but it depends on what "without problem" means for you :P [...] Thanks! Yes, I am implementing a container (a ref-counted pointer). When the allocator is Mallocator (pure allocate and deallocate) and the type inside the rc pointer has a pure constructor and destructor, then the only impure calls were GC.addRange and GC.removeRange. Now they are marked as pure. Great, that's the exact idea!
Re: GC.addRange in pure function
On Wednesday, 10 February 2021 at 16:25:44 UTC, Petar Kirov [ZombineDev] wrote: [..] A few practical examples: Here it is deemed that the only observable side-effect of `malloc` and friends is the setting of `errno` in case of failure, so these wrappers ensure that this is not observed. Surely there are low-level ways to observe it (and also the act of allocating / deallocating memory on the C heap), but this is the definition of purity that the standard library has decided is reasonable: https://github.com/dlang/druntime/blob/master/src/core/memory.d#L1082-L1150 These two function calls in Array.~this() can be marked as `pure`, as the Array type as a whole implements the RAII design pattern and offers at least basic exception-safety guarantees: https://github.com/dlang/phobos/blob/81a968dee68728f7ea245b6983eb7236fb3b2981/std/container/array.d#L296-L298 (The whole function is not marked pure, as the purity depends on the purity of the destructor of the template type parameter `T`.)
Re: GC.addRange in pure function
On Wednesday, 10 February 2021 at 13:44:53 UTC, vit wrote: On Wednesday, 10 February 2021 at 12:17:43 UTC, rm wrote: On 09/02/2021 5:05, frame wrote: On Sunday, 7 February 2021 at 14:13:18 UTC, vitamin wrote: Why using 'new' is allowed in pure functions but calling GC.addRange or GC.removeRange isn't allowed? Does 'new' violate the 'pure' paradigm? Pure functions can only call pure functions and GC.addRange or GC.removeRange is only 'nothrow @nogc'. new allocates memory via the GC and the GC knows to scan this location. Seems like implicit GC.addRange. Yes, this is my problem: if `new` can create an object in a pure function, then GC.addRange and GC.removeRange may be pure too. Can I call GC.addRange and GC.removeRange from a pure function without problem? (using assumePure(...)()). TL;DR Yes, you can, but it depends on what "without problem" means for you :P

# The Dark Arts of practical D code

According to D's general approach to purity, malloc/free/GC.* are indeed impure as they read and write global **mutable** state, but they are still allowed in pure functions **if encapsulated properly**. The encapsulation is done by @trusted wrappers which must be carefully audited by humans - the compiler can't help you with that. The general rule that you must follow for such *callable-from-pure* code (technically it is labeled as `pure`, e.g.:

```d
pragma(mangle, "malloc")
pure @system @nogc nothrow
void* fakePureMalloc(size_t);
```

but I prefer to make the conceptual distinction) is that the effect of calling the @trusted wrapper must not drastically leak / be observed. What "drastically" means depends on what you want `pure` to mean in your application. Which side-effects do you want to protect against by using `pure`? It is really a high-level concern that you as a developer must decide on when writing/using @trusted pure code in your program. For example, generally everyone will agree that network calls are impure. But what about logging?
It's impure by definition, since it mutates a global log stream. But is this effect worth caring about? In some specific situations it may be OK to ignore. This is why in D you can call `writeln` in `pure` functions, as long as it's inside a `debug` block. But given that you as a developer can decide whether to pass the `-debug` option to the compiler, essentially you're in control of what `pure` means for your codebase, at least to some extent. 100% mathematical purity is impossible even in the most strict functional programming language implementations, since our programs run on actual hardware and not on an idealized mathematical machine. For example, even the act of reading immutable data can be globally observed, e.g. by measuring memory access times - see Spectre [1] and all other microarchitectural side-channel [2] vulnerabilities. [1]: https://en.wikipedia.org/wiki/Spectre_(security_vulnerability) [2]: https://en.wikipedia.org/wiki/Side-channel_attack That said, function purity is not useless at all, quite the contrary. It is about making your programs more deterministic and easier to reason about. We all want fewer bugs in our code and less time spent chasing hard-to-reproduce crashes, right? `pure` is really about limiting, containing / compartmentalizing and controlling the (non-deterministic) global effects in your program. Ideally you should strive to structure your programs as a pure core, driven by an imperative, impure shell. E.g. if you're working on an accounting application, the core is the part that implements the main domain / business logic and should be 100% deterministic and pure. The imperative shell is the part that reads spreadsheet files, exports to PDF, etc. (actually just the actual file I/O needs to be impure - the decoding / encoding of data structures can be perfectly pure). Now, back to practice and the question of memory management.
Of course allocating memory is a globally observable effect, and even locally one can compare pointers, as Paul Backus mentioned, as D is a systems language. However, as a practical concession, D's concept of pure-ity is about ensuring high-level invariants, and so such low-level concerns can be ignored, as long as the codebase doesn't observe them. What does it mean to observe them? Here's an example:

---
void main()
{
    import std.stdio : writeln;
    observingLowLevelSideEffects.writeln; // `false`, but could be `true`
    notObservingSideEffects.writeln;      // always `true`
}

// BAD:
bool observingLowLevelSideEffects() pure
{
    immutable a = [2];
    immutable b = [2];
    return a.ptr == b.ptr;
}

// OK
bool notObservingSideEffects() pure
{
    immutable a = [2];
    immutable b = [2];
    return a == b;
}
---

`observingLowLevelSideEffects` is bad, as according to the language rules, the compiler is free to make `a` and `b` point to the same immutable array, the result of the function
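The `assumePure` helper mentioned earlier in the thread can be sketched like this (adapted from the well-known `SetFunctionAttributes` idiom in `std.traits`; treat it as a sketch, since every use is a manual, unchecked promise to the compiler):

```d
import std.traits : FunctionAttribute, SetFunctionAttributes,
    functionAttributes, functionLinkage, isDelegate, isFunctionPointer;

// Cast a function pointer / delegate to an otherwise-identical type marked `pure`.
auto assumePure(T)(T t)
if (isFunctionPointer!T || isDelegate!T)
{
    enum attrs = functionAttributes!T | FunctionAttribute.pure_;
    return cast(SetFunctionAttributes!(T, functionLinkage!T, attrs)) t;
}

int impureCounter;
void bump() { ++impureCounter; } // impure: mutates a global

void caller() pure
{
    // The cast is a promise that the effect is not observed by pure code:
    auto pureBump = assumePure(&bump);
    pureBump();
}

void main()
{
    caller();
    assert(impureCounter == 1);
}
```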
Re: Dimensions in compile time
On Monday, 8 February 2021 at 13:09:53 UTC, Rumbu wrote: On Monday, 8 February 2021 at 12:19:26 UTC, Basile B. wrote: On Monday, 8 February 2021 at 11:42:45 UTC, Vindex wrote:

size_t ndim(A)(A arr) { return std.algorithm.count(typeid(A).to!string, '['); }

Is there a way to find out the number of dimensions in an array at compile time?

yeah.

---
template dimensionCount(T)
{
    static if (isArray!T)
    {
        static if (isMultiDimensionalArray!T)
        {
            alias DT = typeof(T.init[0]);
            enum dimensionCount = dimensionCount!DT + 1;
        }
        else
            enum dimensionCount = 1;
    }
    else
        enum dimensionCount = 0;
}

///
unittest
{
    static assert(dimensionCount!char == 0);
    static assert(dimensionCount!(string[]) == 1);
    static assert(dimensionCount!(int[]) == 1);
    static assert(dimensionCount!(int[][]) == 2);
    static assert(dimensionCount!(int[][][]) == 3);
}
---

that can be rewritten using some phobos traits too I think, but this piece of code is very old now, more like learner template.

dimensionCount!string should be 2. My take without std.traits:

---
template rank(T : U[], U) { enum rank = 1 + rank!U; }
template rank(T : U[n], U, size_t n) { enum rank = 1 + rank!U; }
template rank(T) { enum rank = 0; }
---

Here's the version I actually wanted to write:

---
enum rank(T) = is(T : U[], U) ? 1 + rank!U : 0;
---

But it's not possible, because of two language limitations:

1. The ternary operator doesn't allow its branches to be specialized the way `static if` does, even if the condition is a compile-time constant.
2. `is()` expressions can only introduce an identifier when used inside a `static if`.

Otherwise, I'd consider this the "idiomatic" / "typical" D solution, since unlike C++, D code rarely (*) overloads and specializes templates.

(*) Modern Phobos(-like) code.
---
template rank(T)
{
    static if (is(T : U[], U))
        enum rank = 1 + rank!U;
    else
        enum rank = 0;
}

unittest
{
    static assert(rank!(char) == 0);
    static assert(rank!(char[]) == 1);
    static assert(rank!(string) == 1);
    static assert(rank!(string[]) == 2);
    static assert(rank!(string[][]) == 3);
    static assert(rank!(string[][][]) == 4);
}
---

Otherwise, the shortest and cleanest solution IMO is this one:

---
enum rank(T : U[], U) = 1 + rank!U;
enum rank(T) = 0;

unittest
{
    static assert(rank!(char) == 0);
    static assert(rank!(char[]) == 1);
    static assert(rank!(string) == 1);
    static assert(rank!(string[]) == 2);
    static assert(rank!(string[][]) == 3);
    static assert(rank!(string[][][]) == 4);

    static assert(rank!(char[1]) == 1);
    static assert(rank!(char[1][2]) == 2);
    static assert(rank!(char[1][2][3]) == 3);
    static assert(rank!(char[1][2][3][4]) == 4);
}
---

- Use the eponymous template syntax shorthand.
- Static arrays are implicitly convertible to dynamic arrays, so we can merge the two implementations.
Re: Article: Why I use the D programming language for scripting
On Monday, 1 February 2021 at 18:11:43 UTC, sighoya wrote: On Monday, 1 February 2021 at 13:37:38 UTC, Petar Kirov [ZombineDev] wrote: Any dlang slack member can invite new members by their email (I think even temporary email addresses are fine). On which email shall I send you an invite? Is there any PM mechanism on board, no? You mean does Slack have private messages? Yes it does.
Re: Article: Why I use the D programming language for scripting
On Monday, 1 February 2021 at 12:41:19 UTC, Paul Backus wrote: On Monday, 1 February 2021 at 12:11:46 UTC, Petar Kirov [ZombineDev] wrote: On Monday, 1 February 2021 at 11:10:28 UTC, Paul Backus wrote: Unfortunately, you can't pass more than one command-line argument on a #! line. It is possible, using `/usr/bin/env -S command arg1 arg2`, as of coreutils 8.30. I have been using it at work and it's working perfectly. This functionality was already supported by FreeBSD [1] for ~15 years, but the coreutils developers implemented it just ~3 years ago [2]. This is great, thanks! I was missing this feature often; I'm glad I found it recently ;) I just checked, and it's available in Debian stable, so most distros should have it by now.

Yes, I think it's safe to use on Linux nowadays. In the worst case, a user may need to upgrade their coreutils. I just mention this because a teammate was still on Ubuntu 16.04 or 18.04, so he had to upgrade [1].

[1]: https://packages.ubuntu.com/search?keywords=coreutils&searchon=names&suite=all&section=all
Re: Article: Why I use the D programming language for scripting
On Monday, 1 February 2021 at 12:49:28 UTC, drug wrote: On 2/1/21 3:28 PM, Petar Kirov [ZombineDev] wrote: I just created #article-proofreading - everyone is welcome to join! How can I join? I used slack once for a short period some time ago. Any dlang slack member can invite new members by their email (I think even temporary email addresses are fine). On which email shall I send you an invite?
Re: Article: Why I use the D programming language for scripting
On Monday, 1 February 2021 at 12:26:02 UTC, drug wrote: On 2/1/21 3:14 PM, Petar Kirov [ZombineDev] wrote: [..] Perhaps we can create a channel on the dlang Slack for proofreading articles and blog posts, so that more people can have a chance to review an article before publishing. That's a really good idea! I just created #article-proofreading - everyone is welcome to join!
Re: Article: Why I use the D programming language for scripting
On Monday, 1 February 2021 at 11:10:28 UTC, Paul Backus wrote: On Monday, 1 February 2021 at 09:36:15 UTC, Jacob Carlborg wrote: On Sunday, 31 January 2021 at 20:36:43 UTC, aberba wrote: It's finally out! https://opensource.com/article/21/1/d-scripting FYI, the code will compile faster if you use `dmd -run` instead of `rdmd`. If you have multiple files that need to be compiled you can use `dmd -i -run`. -- /Jacob Carlborg Unfortunately, you can't pass more than one command-line argument on a #! line.

It is possible, using `/usr/bin/env -S command arg1 arg2`, as of coreutils 8.30. I have been using it at work and it's working perfectly. This functionality was already supported by FreeBSD [1] for ~15 years, but the coreutils developers implemented it just ~3 years ago [2]. The main disadvantage is that it's obviously not very portable - e.g. all users/developers need to use a modern Linux distro - but for some teams this requirement is already there for other reasons, so it's not a problem.

Example: https://gist.github.com/PetarKirov/72168d8dc909c670444ca649ec28f80f

This was extracted from a larger project, so it may not be useful on its own, but hopefully it's enough to showcase the usage. Also, if you can use rund [3], it's likely a much cleaner option.

[1]: https://www.freebsd.org/cgi/man.cgi?env
[2]: https://lists.gnu.org/r/coreutils/2017-05/msg00020.html
[3]: https://github.com/dragon-lang/rund
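For illustration, a complete script along these lines might look like this (a minimal sketch, assuming coreutils >= 8.30 so that `env -S` is available; the D lexer skips a `#!` first line, so the file compiles as-is):

```d
#!/usr/bin/env -S dmd -i -run
import std.stdio : writeln;

void main()
{
    writeln("hello from a D script");
}
```

After `chmod +x script.d`, running `./script.d` lets `env -S` split the line into `dmd -i -run script.d`.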
Re: Article: Why I use the D programming language for scripting
On Monday, 1 February 2021 at 11:52:18 UTC, aberba wrote: On Monday, 1 February 2021 at 11:29:02 UTC, Bastiaan Veelo wrote: On Sunday, 31 January 2021 at 20:47:13 UTC, Steven Schveighoffer wrote: On 1/31/21 3:36 PM, aberba wrote: It's finally out! https://opensource.com/article/21/1/d-scripting Hm... right off I see the shebang is not the first line in the example. It has to be. Please fix, Aberba, right now the examples don't work because of this... -- Bastiaan. Yes, noted. I don't have direct access to edit it myself. I have to wait till the editors make the changes (depending on their TZ) I should really get someone here to proofread it next time Sorry about that. Perhaps we can create a channel on the dlang Slack for proofreading articles and blog posts, so that more people can have a chance to review an article before publishing.
Re: DIP 1036--String Interpolation Tuple Literals--Community Round 2 Begins
On Wednesday, 27 January 2021 at 10:37:22 UTC, Mike Parker wrote: The second round of Community Review for DIP 1036, "String Interpolation Tuple Literals", is now under way. Please discuss the DIP (its merits, its implementation, peripheral topics, etc.) in the Discussion Thread and save all review feedback (critiques on the content of the DIP: what to change, how to improve it, etc.) for the Feedback Thread. Discussion Thread: https://forum.dlang.org/post/ucqyqkvaznbxkasvd...@forum.dlang.org Feedback Thread: https://forum.dlang.org/post/qglydztoqxhhcurvb...@forum.dlang.org Corrected links: Discussion Thread: https://forum.dlang.org/post/uhueqnulcsskznsyu...@forum.dlang.org Feedback Thread: https://forum.dlang.org/post/bvrejaayzpgbykacx...@forum.dlang.org
Re: std.expreimantal.allocator deallocate
On Sunday, 24 January 2021 at 14:56:25 UTC, Paul Backus wrote: On Sunday, 24 January 2021 at 11:00:17 UTC, vitamin wrote: Is it OK when I call deallocate with a smaller slice, or do I need to track the exact length? It depends on the specific allocator, but in general, it is only guaranteed to work correctly if the slice you pass to deallocate is exactly the same as the one you got from allocate.

To add to that, if an allocator defines `resolveInternalPointer` [0][1], you may be able to recover the original slice that was allocated (and then pass that to `deallocate`). However, not all allocators define `resolveInternalPointer`, and even those that do are not required to maintain complete book-keeping, as doing so could have bad performance implications (i.e. calling, say, `a.resolveInternalPointer(a.allocate(10)[3 .. 6].ptr, result)` can return `Ternary.unknown`).

[0]: https://dlang.org/phobos/std_experimental_allocator.html#.IAllocator.resolveInternalPointer
[1]: https://dlang.org/phobos/std_experimental_allocator_building_blocks.html
Re: std.expreimantal.allocator deallocate
On Sunday, 24 January 2021 at 16:16:12 UTC, vitamin wrote: On Sunday, 24 January 2021 at 14:56:25 UTC, Paul Backus wrote: On Sunday, 24 January 2021 at 11:00:17 UTC, vitamin wrote: Is it OK when I call deallocate with a smaller slice, or do I need to track the exact length? It depends on the specific allocator, but in general, it is only guaranteed to work correctly if the slice you pass to deallocate is exactly the same as the one you got from allocate. Thanks. Is this guaranteed:

void[] data = Allocator.allocate(data_size);
assert(data.length == data_size);

or can data.length >= data_size?

Yes, it is guaranteed [0]. Even though some allocator implementations will allocate a larger block internally to back your requested allocation size, `allocate` [1] must return either exactly the number of bytes you requested, or a `null` slice.

If an allocator has a non-trivial `goodAllocSize(s)` [2] function (i.e. one that is not the identity function `s => s`) and you allocate, say, N bytes while `allocator.goodAllocSize(N)` returns M, M > N, then most likely calling `expand` [3] will succeed - meaning it will give you, for free, the excess memory that it holds internally. I say "most likely", because this is the intention of the allocator building blocks spec, even though it's not explicitly specified. In theory, `expand` could fail in such a situation, either because of an allocator implementation deficiency (which would technically not be a bug), or because `allocate` was called concurrently by another thread and the allocator decided to give the excess space to someone else.

[0]: https://dlang.org/phobos/std_experimental_allocator_building_blocks.html
[1]: https://dlang.org/phobos/std_experimental_allocator.html#.IAllocator.allocate
[2]: https://dlang.org/phobos/std_experimental_allocator.html#.IAllocator.goodAllocSize
[3]: https://dlang.org/phobos/std_experimental_allocator.html#.IAllocator.expand
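A quick check of the `allocate` length guarantee discussed above - a minimal sketch using `Mallocator`:

```d
import std.experimental.allocator.mallocator : Mallocator;

void main()
{
    enum requested = 10;
    void[] data = Mallocator.instance.allocate(requested);

    // `allocate` returns exactly `requested` bytes, or `null` on failure -
    // never a longer slice, even if a larger block is reserved internally.
    assert(data is null || data.length == requested);

    Mallocator.instance.deallocate(data);
}
```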
Re: Value of type enum members
On Tuesday, 19 January 2021 at 20:27:30 UTC, Andrey Zherikov wrote: Could someone please explain why there is a difference in values between compile-time and run-time?

struct T
{
    int i;
    this(int a) { i = a; }
}

enum TENUM : T
{
    foo = T(2),
    bar = T(3),
}

void main()
{
    pragma(msg, TENUM.foo); // T(2)
    pragma(msg, TENUM.bar); // T(3)
    writeln(TENUM.foo);     // foo
    writeln(TENUM.bar);     // bar
}

TL;DR: `pragma(msg, x)` prints the value of `x`, usually cast to the enumeration (base) type, while `std.conv.to!string(x)` prints the name of the enum member corresponding to the value of `x`.

Both `pragma(msg, ...)` and `std.conv.to!string(..)` (which is what `writeln(..)` uses under the hood) make somewhat arbitrary decisions about formatting enum members, neither of which is the "right" one, as there's no rule saying which is better. In general, `std.conv.to!string(..)` tries to use a format that is meant to be friendly to the end-users of your program, while `pragma(msg, ...)` is a compile-time debugging tool and it tries to stay close to the compiler's understanding of your program. For example:

void main()
{
    import std.stdio;

    enum E1 { a = 1, b, c }
    enum E2 { x = "4" }
    enum E3 : string { y = "5" }

    // 1.0 2L cast(E1)3 4 5
    pragma(msg, 1.0, " ", long(2), " ", E1.c, " ", E2.x, " ", E3.y);

    // 1 2 c x y
    writeln(1.0, " ", long(2), " ", E1.c, " ", E2.x, " ", E3.y);
}

End-users generally don't care about the specific representations of numbers in your program, while on the other hand that's a crucial detail for the compiler, and you can see this bias in the output.
Re: Discussion Thread: DIP 1039--Static Arrays with Inferred Length--Community Review Round 1
On Tuesday, 12 January 2021 at 23:19:45 UTC, Q. Schroll wrote: On Tuesday, 12 January 2021 at 20:04:00 UTC, Paul Backus wrote: On Tuesday, 12 January 2021 at 19:49:10 UTC, jmh530 wrote: I'd rather put the import at the top of the file, or in a version(unittest) block than that. The problem with those approaches is that if you have an example unittest, then when a user tries to run it they have to put the import in themselves. Seems like the obvious solution is to put the import inside the unittest. I'd say that example unit tests shouldn't have anything available except the current module. That a unittest is just a function is wrong in many ways. By default, it shouldn't have access to imports outside of it and it shouldn't have access to non-public (private, package) symbols. Agreed. Hence why we had to work around those language limitations in Phobos with this: https://github.com/dlang/tools/blob/master/tests_extractor.d
Re: Discussion Thread: DIP 1039--Static Arrays with Inferred Length--Community Review Round 1
On Tuesday, 12 January 2021 at 18:19:14 UTC, jmh530 wrote: On Tuesday, 12 January 2021 at 17:27:50 UTC, Q. Schroll wrote: On Monday, 11 January 2021 at 21:17:20 UTC, jmh530 wrote: On Monday, 11 January 2021 at 14:42:57 UTC, Nick Treleaven wrote: [snip] Just a suffix like `[1,2]$` or `[1]s`. Then just use `auto var =` with it as normal. Gotcha. I think I would use that more than the current DIP (though I prefer [1]s to [1]$). You can do it today if you don't mind putting the marker in front: https://run.dlang.io/is/E6ne4k (It's operator abuse. What would you expect?) Interesting approach! However, it doesn't really resolve my underlying issue, which was that I would still need to import that s struct.

To play the devil's advocate, it shouldn't be hard to change the compiler config file to auto-import any module of your choice (the config file would simply append it to the compiler command line). That said, the unnecessary template instances generated for each distinct element type and array length are a bigger reason for me to prefer this DIP proposal over the library approach.
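For comparison, the library approach is available today as `std.array.staticArray` - and, as noted above, each distinct element type and length instantiates a separate template, which is exactly the overhead the DIP would avoid:

```d
import std.array : staticArray;

void main()
{
    auto a = [1, 2, 3].staticArray; // length inferred at compile time
    static assert(is(typeof(a) == int[3]));

    auto b = [1.5, 2.5].staticArray; // a separate instantiation: double[2]
    static assert(is(typeof(b) == double[2]));
}
```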
Re: Printing shortest decimal form of floating point number with Mir
On Monday, 4 January 2021 at 12:35:12 UTC, John Colvin wrote: On Monday, 4 January 2021 at 09:21:02 UTC, Ola Fosheim Grøstad wrote: On Monday, 4 January 2021 at 09:18:50 UTC, Ola Fosheim Grøstad wrote: On Monday, 4 January 2021 at 05:55:37 UTC, Ola Fosheim Grostad wrote: On Monday, 4 January 2021 at 04:37:22 UTC, 9il wrote: [...] But it is a bug even if there was no C++... An alias should work by simple substitution, if it does not, then it is no alias... Here is an even simpler example that does not work: struct Foo(T){} void foo(T)(T!int x) {} alias FooInt = Foo!int; void main() { foo(FooInt()); } Oh, now wait, it does: struct Foo(T){} void foo(alias T)(T!int x) {} alias FooInt = Foo!int; void main() { foo(FooInt()); } My mistake. What's the simplest example that doesn't work and is that simple example just indirection through an alias or is it actually indirection through a template that *when instantiated* turns out to be just an alias? I have a suspicion that what you're asking for here is the type-inference to have x-ray vision in to uninstantiated templates that works for a few simple cases. Am I wrong? To be clear, a really useful special case can be really useful and worthwhile, but I'm not convinced this is the principled "type system bug" you are saying it is. I don't have time to post an example, but x-ray vision is far from what is asked for, just following basic rules established in type system theory decades ago. In practice I've had many instances where TypeScript would correctly perform generic type unification while dmd gives up at the first bump in the road.
Re: Truly algebraic Variant and Nullable
On Sunday, 15 November 2020 at 04:54:19 UTC, 9il wrote: Truly algebraic Variant and Nullable with an order-independent list of types. Nullable is defined as

```
alias Nullable(T...) = Variant!(typeof(null), T);
```

Variant and Nullable with zero types are allowed. `void` type is supported. Visitors are allowed to return different types. Cyclic referencing between different variant types is supported. More features and API: http://mir-core.libmir.org/mir_algebraic.html Cheers, Ilya The work has been sponsored by Kaleidic Associates and Symmetry Investments.

I have been using SumType [1] for a while in some of my projects and I'm quite happy with it. The author has been very responsive to feedback, and the quality bar of his work is definitely higher than that of many other D libraries (e.g. support for @safe/pure/@nogc/nothrow, immutable, betterC and DIP1000, etc.). That said, I'm also a fan of your work with Mir! mir.algorithm (which I'm most familiar with) is a textbook example of high-quality generic algorithm design. How does your work compare to sumtype? Would mir.algebraic offer any benefits which would make it worth switching over?

IMO, algebraic types (sum and tuple types) should be a core language feature, supported by druntime, with corresponding syntax sugar:

```
// Same as Tuple!(T, "open", T, "high", T, "low", T, "close"):
alias OhlcTuple(T) = (T open, T high, T low, T close);

// Same as:
// Union!(long, double, bool, typeof(null), string,
//        This[], This[string]);
alias Json =
    | long
    | double
    | bool
    | typeof(null)
    | string
    | Json[]
    | Json[string];

// Syntax sugar for nullable/optional types -
// T? == Nullable!T == Union!(typeof(null), T):
alias ResponseParser = OhlcTuple!double? delegate(Json json);
```

If we can work together to consolidate on a single API, I think it would be better for the language ecosystem.

[1]: https://code.dlang.org/packages/sumtype
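For readers unfamiliar with the library being discussed, here's a minimal sketch of the sumtype API (the same API was later included in Phobos as `std.sumtype`); the alias name and `describe` helper are made up for the example:

```d
import std.sumtype : SumType, match;

alias Value = SumType!(long, double, string);

string describe(Value v)
{
    // Exhaustive, type-safe pattern matching -
    // forgetting to handle a member type is a compile error.
    return v.match!(
        (long l)   => "integer",
        (double d) => "floating point",
        (string s) => "text",
    );
}

void main()
{
    assert(describe(Value(42L))  == "integer");
    assert(describe(Value(3.14)) == "floating point");
    assert(describe(Value("hi")) == "text");
}
```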
Re: Preparing for Google Summer of Code 2021
On Monday, 23 November 2020 at 10:24:28 UTC, Andre Pany wrote: On Sunday, 15 November 2020 at 10:46:01 UTC, Mike Parker wrote: [...] I created two issues in the repository (https://github.com/dlang/projects) but I do not know, how to set the gsoc2020 label. I assume others may have edit authorizations on the repository and therefore are able to set the labels. For now, I prefixed the issue titles with gsoc2020: https://github.com/dlang/projects/issues/75 https://github.com/dlang/projects/issues/76 Kind regards André Thanks, I've just added the gsoc2020 label for these issues. I will ping someone to give you permissions for the repo ;)
Re: Git-repo-root relative path
On Monday, 16 November 2020 at 10:21:27 UTC, Per Nordlöw wrote: I need a function that gets the relative path of a file in a Git-repo and preferably also its status. Either via an external call to `git` or optionally via `libgit` (if available). Which DUB packages do you prefer?

I'm not sure I understand the question. I have written two programs, hopefully one of them does what you want :D For such small tasks, the easiest is to just use the shell.

1st answer: Initially I thought that you wanted to convert the current working directory (I don't know why - apparently I didn't read the question well :D) to a path relative to the git repo root. Here's my solution to that problem:

```d
import std.exception : enforce;
import std.format : format;
import std.file : getcwd;
import std.path : asRelativePath;
import std.process : executeShell;
import std.stdio : writeln;
import std.string : stripRight;

void main()
{
    auto cwd = getcwd();

    const gitRootPathResult = executeShell("git rev-parse --show-toplevel");
    enforce(
        gitRootPathResult.status == 0,
        "`git` is not installed, or '%s' is not a git repo".format(cwd)
    );

    // Trim trailing whitespace from the shell invocation
    const gitRoot = gitRootPathResult.output.stripRight;
    debug writeln("Git root path: ", gitRoot);

    gitRoot
        .asRelativePath(getcwd())
        .writeln;
}
```

Example usage:

```
$ cd ~/code/repos/dlang/dlang/dmd/src/dmd/backend/
$ dmd -run ~/code/cwd_to_git_relative_path.d
../../..

# Sanity check:
$ dmd -debug -run ~/code/cwd_to_git_relative_path.d
Git root path: /home/zlx/code/repos/dlang/dlang/dmd
../../..

$ cd '../../..' && pwd
/home/zlx/code/repos/dlang/dlang/dmd
```

2nd answer: Reading it a second time, I'm not sure what you meant by "gets the relative path of a file in a Git-repo". Did you mean a function that receives a path to a file (absolute, or relative to the current working directory) and converts it to a path relative to a git repo?
If so, here's my solution, which also determines the status of the file: https://gist.github.com/PetarKirov/b4c8b64e7fc9bb7391901bcb541ddf3a
Re: Release D 2.094.0
On Thursday, 1 October 2020 at 16:47:37 UTC, Meta wrote: On Thursday, 1 October 2020 at 16:19:48 UTC, Steven Schveighoffer wrote: On 10/1/20 10:36 AM, Meta wrote: On Thursday, 1 October 2020 at 09:49:36 UTC, Mathias LANG wrote: Author here. The most complete way to know would be to read the changelog: https://dlang.org/changelog/2.094.0.html#preview-in The TL;DR is that, in addition to `const scope`, `in` now automatically behaves as `ref` when "it makes sense", such as for large value types or in the presence of destructors / postblit (more details in the changelog!), and will accept rvalues, unlike other ref parameters. Why was this added when we already have `auto ref`? Yes, it makes the function a template, but if `in` can automatically choose whether the variable is ref or not, then auto ref could easily do the same. There is a difference. `in` is choosing it based on the type, not whether it's an rvalue or lvalue. auto ref doesn't care whether it's an int or a 1k-sized struct: if it's an lvalue, it's ref, and if it's an rvalue, it's non-ref. This seems ridiculous to me. We now have ANOTHER way of asking the compiler to choose for us whether to pass by ref or by value, completely mutually exclusive of auto ref. Where was the DIP (apologies if I just didn't see it)? Did Walter approve this? How do we explain the difference between in and auto ref with (as Andrei would say) a straight face?

`auto ref` is a mistake and shouldn't have existed. Thanks to Mathias, `in` parameters are finally working the way most sane people expect them to work. I can't quite explain `auto ref` with a straight face, while to explain `in` I just need to say "unless you're mutating or aliasing the parameter, always mark it as `in`".

Not only that, but every auto-ref parameter is another template parameter varying on the usage. So calling on an lvalue and an rvalue will generate 2 separate, mostly-identical functions. With -preview=in, only one function is generated per type. That's a QOI problem IMO.
No, it's not. According to the spec, `auto ref` parameters can only be used in templates (making them useless for virtual functions and delegates), and the compiler is required to generate different function instances depending on whether the argument is an lvalue or an rvalue, which completely misses the point. There's code out there that expects two instances to be generated and distinguishes which one it's in using `static if (__traits(isRef, param))`. You can't change this behavior without breaking that code, so it's not a QoI problem. On the other hand, now the `in` parameter storage class finally has the opposite meaning of `out`. It makes code more elegant to write, and easier to explain and teach.
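The two-instantiations behavior described above is easy to demonstrate - a minimal sketch (the `isRefParam` helper name is made up for the example):

```d
// `auto ref` produces a distinct template instantiation for lvalue and
// rvalue arguments, which code can observe via __traits(isRef, ...).
bool isRefParam(T)(auto ref T x)
{
    return __traits(isRef, x);
}

void main()
{
    int lv = 42;
    assert(isRefParam(lv) == true);   // lvalue -> by-ref instantiation
    assert(isRefParam(42) == false);  // rvalue -> by-value instantiation
}
```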
Re: Sociomantic Tsunami now under new community maintainership
On Wednesday, 30 September 2020 at 08:31:25 UTC, Iain Buclaw wrote: [..] Great news, thank you Iain and everyone else who was responsible! I think an overview of those D projects would make for a great DConf talk!
Re: Symmetry Investments and the D Language Foundation are Hiring
On Tuesday, 1 September 2020 at 16:45:55 UTC, drug wrote: On 9/1/20 7:34 PM, Petar Kirov [ZombineDev] wrote: On Tuesday, 1 September 2020 at 12:59:00 UTC, Mathias LANG wrote: On Tuesday, 1 September 2020 at 09:09:36 UTC, Jacob Carlborg wrote: [...] Agreed. A server approach would probably scale much better, if the intent is to speed up the developer's CTR cycle. But in any case, thanks to Symmetry for doing this! This is huge. We need both fs-watcher daemon support and "offline" incremental build support, based on SHA-256 (note that git is moving from SHA-1 to SHA-256 [1]). I'd say SHA-256 is cheap enough these days [2] that I don't see a reason not to use it even for "online" fs-watcher daemon compilation. [1]: https://git-scm.com/docs/hash-function-transition/ [2]: https://bench.cr.yp.to/impl-hash/sha256.html

We can easily use the following option:

```
dub build --hash=sha1
dub build --hash=sha256
```

and let the user make the final choice

I understand your idea to make this configurable, but it would introduce more complexity than necessary: $HOME/.dub/packages would then contain build artifacts based on both SHA-1 and SHA-256, and dub would need to support mixed-mode file integrity checking.
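As an aside, content hashing with Phobos is straightforward - a minimal sketch of the kind of fingerprinting such a build cache could build on (hashing a string literal here instead of file contents, for brevity):

```d
import std.digest.sha : sha256Of;
import std.digest : toHexString;

void main()
{
    // "abc" is the standard FIPS 180 test vector for SHA-256
    auto digest = sha256Of("abc");
    assert(digest.toHexString ==
        "BA7816BF8F01CFEA414140DE5DAE2223B00361A396177A9CB410FF61F20015AD");
}
```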
Re: Symmetry Investments and the D Language Foundation are Hiring
On Tuesday, 1 September 2020 at 12:59:00 UTC, Mathias LANG wrote: On Tuesday, 1 September 2020 at 09:09:36 UTC, Jacob Carlborg wrote: On Sunday, 30 August 2020 at 14:13:36 UTC, Mike Parker wrote: Looking for a full-time or part-time gig? Not only is Symmetry Investments hiring D programmers, they are also generously funding two positions for ecosystem work under the D Language Foundation. And they've put up a bounty for a new DUB feature. Read all about it here: https://dlang.org/blog/2020/08/30/symmetry-investments-and-the-d-language-foundation-are-hiring/ As an alternative to SHA-1 hashing, there's the option to have a daemon running in the background listening for filesystem events. BTW, is timestamps vs SHA-1 hashing really the most pressing issue with Dub? -- /Jacob Carlborg Agreed. A server approach would probably scale much better, if the intent is to speed up the developer's CTR cycle. But in any case, thanks to Symmetry for doing this!

This is huge. We need both fs-watcher daemon support and "offline" incremental build support, based on SHA-256 (note that git is moving from SHA-1 to SHA-256 [1]). I'd say SHA-256 is cheap enough these days [2] that I don't see a reason not to use it even for "online" fs-watcher daemon compilation.

[1]: https://git-scm.com/docs/hash-function-transition/
[2]: https://bench.cr.yp.to/impl-hash/sha256.html
Re: Introduction to programming with compile time sequences in D
On Tuesday, 1 September 2020 at 16:00:14 UTC, data pulverizer wrote: On Saturday, 29 August 2020 at 04:41:36 UTC, Petar Kirov [ZombineDev] wrote: On Friday, 28 August 2020 at 11:05:09 UTC, data pulverizer wrote: On Tuesday, 25 August 2020 at 15:58:46 UTC, Petar Kirov [ZombineDev] wrote: [...] Just to keep you updated, I've begun to write a fresh section on templates for dlang-tour quite separate from the blog article - something more appropriate for the website, though I'll take some elements of what I've already done. Once it's finished I'll do a pull request. Thanks Sounds great, thank you! I've finished writing the fresh section and checked it through a few times. It is pretty detailed and long, and before releasing it and doing a pull request I'll check it through once or twice more over the next day or two. The document content structure is:

```
# Templates and Compile Time Programming in D

## Contents

* Introduction
* Templates in D
* Function templates
* Longhand and shorthand declarations
* Inference of parameters
* Access patterns for template internals
* Alias template parameter
* Variadic template functions
* The is() directive
* More on template constraints and partial specialization
* Struct, class, and interface templates
* Struct template longhand and shorthand declarations
* Variadic template objects
* Template constraints and specialization
* Enumeration templates
* Alias templates
* static if
* Traits
* Metaprogramming in D
* Introduction
* AliasSeq!(T) compile time sequences
* Append, prepend and concatenating compile time lists
* Replacing items in compile time sequences
* Replacing multiple items with an individual type
* String mixins
* static foreach
* Replacing multiple items by a tuple of items
* Contained type sequences
* Template mixins, CTFE, and import
* Template mixins
* CTFE
* import("file.d")
```

Looking great so far!
I have written lots of code examples in the text, but there are also lots of runnable files that can go in code demo "play" sections. The only thing is, when I look at the current coding area, it is quite large and separate from the text. Sometime in the future, I guess it would be great to have it inline with the text and smaller, like the regular fenced code, so that multiple files can be included in separate scrollable "play" areas. Thanks

Yeah, I think we should add the following feature: whenever there's a snippet of code (a fenced code block in markdown), a button should appear under it, which, when clicked, would replace the content of the text editor with the code snippet. There are two challenges with this:

1. Many of the code snippets that appear throughout the articles are not meant to be runnable, so we would need a way to provide the necessary scaffolding (e.g. wrap them in `void main() { /+ ... +/ }`).

2. It may be surprising and inconvenient for users to have the code they have potentially modified disappear. This could be solved by adding proper support for multiple open files to the editor (along the lines of commercial solutions like codesandbox, github workspaces, etc.). Clicking [Edit] on a code block would then simply open another file.
Re: Introduction to programming with compile time sequences in D
On Friday, 28 August 2020 at 11:05:09 UTC, data pulverizer wrote: On Tuesday, 25 August 2020 at 15:58:46 UTC, Petar Kirov [ZombineDev] wrote: [...] Just to keep you updated, I've begun to write a fresh section on templates for dlang-tour quite separate from the blog article - something more appropriate for the website, though I'll take some elements of what I've already done. Once it's finished I'll do a pull request. Thanks Sounds great, thank you!
Re: Introduction to programming with compile time sequences in D
On Tuesday, 25 August 2020 at 16:10:21 UTC, data pulverizer wrote: On Tuesday, 25 August 2020 at 16:01:25 UTC, data pulverizer wrote: On Tuesday, 25 August 2020 at 14:02:33 UTC, Petar Kirov [ZombineDev] wrote: ... You can find a full example of this here: https://run.dlang.io/gist/run-dlang/80e120e989a6b0f72fd7244b17021e2f

There is an issue with `AliasTuple` though: you can't directly print its collections with pragma:

```d
alias coll = AliasSeq!(s1, s2, s3, s7);
pragma(msg, coll);
```

I get the following error:

```d
onlineapp.d(29): Error: cannot interpret AliasTuple!1 at compile time
onlineapp.d(29): Error: cannot interpret AliasTuple!(1, 2) at compile time
...
tuple((__error), (__error), (__error), (__error))
```

p.s. I did include a `Tuple`-like implementation in the later sections of my article, but it was based on a template struct.

Yeah, I agree that using structs offers better ergonomics. Such a design also enables cool things like UFCS and lambda functions. You can find an example of this here: https://gist.github.com/PetarKirov/a808c94857de84858accfb094c19bf77#file-rxd-meta2-d-L65-L123
Re: Introduction to programming with compile time sequences in D
On Tuesday, 25 August 2020 at 16:04:36 UTC, data pulverizer wrote: On Tuesday, 25 August 2020 at 15:58:46 UTC, Petar Kirov [ZombineDev] wrote: On Tuesday, 25 August 2020 at 15:30:17 UTC, data pulverizer wrote: I think your article is quite valuable, as it covers many aspects of template programming in D while being quite approachable as well. May I suggest contributing it in some form to https://tour.dlang.org? Contributing is as easy as opening a pull request to this repo: https://github.com/dlang-tour/english. Just check the format of some of the other *.md and *.yml files there and you'll figure it out. We already have a section on templates there, but I think it's way too brief and doesn't do justice to D's extensive template features. Perhaps it could be organized as a fully separate section with different articles, corresponding to each paragraph in your article. I'd be happy to work on that! Great! The other maintainers, contributors, and I would be happy to assist you. If you want to just start a discussion on how to proceed, perhaps you can open an issue where we can discuss things in more detail. By the way, are you on https://dlang.slack.com? If not, you should definitely join! I am not able to check all of my GitHub notifications these days, so you can ping me and/or other people there on Slack if we don't respond in time on GitHub. We have a #tour channel that we can use for discussions.
Re: Introduction to programming with compile time sequences in D
On Tuesday, 25 August 2020 at 16:01:25 UTC, data pulverizer wrote: On Tuesday, 25 August 2020 at 14:02:33 UTC, Petar Kirov [ZombineDev] wrote: ... You can find a full example of this here: https://run.dlang.io/gist/run-dlang/80e120e989a6b0f72fd7244b17021e2f There is an issue with `AliasTuple` though, you can't directly print its collections with pragma:

```d
alias coll = AliasSeq!(s1, s2, s3, s7);
pragma(msg, coll);
```

I get the following error:

```
onlineapp.d(29): Error: cannot interpret AliasTuple!1 at compile time
onlineapp.d(29): Error: cannot interpret AliasTuple!(1, 2) at compile time
...
tuple((__error), (__error), (__error), (__error))
```

Yeah, I wrote a quick implementation just for this example. For sure there are better ways to implement it.
Re: Introduction to programming with compile time sequences in D
On Tuesday, 25 August 2020 at 15:30:17 UTC, data pulverizer wrote: On Tuesday, 25 August 2020 at 14:02:33 UTC, Petar Kirov [ZombineDev] wrote: Nice article! I haven't had the chance to read it fully, so far [snip] I thought of writing at the beginning that it was long and that readers could dip in and out of the article as they wished, but decided that people could decide that for themselves and that placing a length warning might be counter-productive. Thanks I think your article is quite valuable, as it covers many aspects of template programming in D while being quite approachable as well. May I suggest contributing it in some form to https://tour.dlang.org? Contributing is as easy as opening a pull request to this repo: https://github.com/dlang-tour/english. Just check the format of some of the other *.md and *.yml files there and you'll figure it out. We already have a section on templates there, but I think it's way too brief and doesn't do justice to D's extensive template features. Perhaps it could be organized as a fully separate section with different articles, corresponding to each paragraph in your article.
Re: Introduction to programming with compile time sequences in D
On Tuesday, 25 August 2020 at 02:11:42 UTC, data pulverizer wrote: I have a draft new blog article, "Introduction to programming with compile time sequences in D"; it's on GitHub and I would appreciate feedback before it goes live: https://gist.github.com/dataPulverizer/67193772c52e7bd0a16414cb01ae4250 Comments welcome. Many thanks Nice article! I haven't had the chance to read it fully, so for now I have just one quick suggestion regarding removing items from sequences [0]. I think it would be much simpler (and likely more efficient) to avoid both recursion and static foreach and simply use slicing + concatenation. Here's an example:

```d
template removeFromSeqAt(size_t idx, seq...)
{
    static if (seq.length > 0 && idx < seq.length)
        alias removeFromSeqAt = AliasSeq!(seq[0 .. idx], seq[idx + 1 .. $]);
    else
        static assert(0);
}
```

You can find a full example of this here: https://run.dlang.io/gist/run-dlang/80e120e989a6b0f72fd7244b17021e2f [0]: https://gist.github.com/dataPulverizer/67193772c52e7bd0a16414cb01ae4250#removing-items-from-a-compile-time-sequence
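For convenience, here is the same slicing + concatenation technique as a self-contained program, with a usage check added (the test values are illustrative):

```d
import std.meta : AliasSeq;

// Remove the element at index idx by slicing around it and concatenating
// the two halves - no recursion or static foreach needed.
template removeFromSeqAt(size_t idx, seq...)
{
    static if (seq.length > 0 && idx < seq.length)
        alias removeFromSeqAt = AliasSeq!(seq[0 .. idx], seq[idx + 1 .. $]);
    else
        static assert(0, "index out of bounds");
}

void main()
{
    alias nums = AliasSeq!(10, 20, 30, 40);
    // Removing index 2 drops the 30.
    static assert([removeFromSeqAt!(2, nums)] == [10, 20, 40]);
}
```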
Re: Reading IDX Files in D, an introduction to compile time programming
On Friday, 21 August 2020 at 20:33:51 UTC, H. S. Teoh wrote: On Fri, Aug 21, 2020 at 01:18:30PM -0700, Ali Çehreli via Digitalmars-d-announce wrote: [...] In my case I found a limitation: I cannot "iterate a directory" and import all file contents in there (the limitation is related to a C library function not having source code so it cannot be evaluated). The actual limitation is that string imports do not allow reading directory contents (the C function could be replaced if that were allowed). Generally, I don't expect directory traversal to ever be allowed at compile-time, since it opens the door to a huge can o' security worms. :-P I feel like limiting CTFE just gives a false sense of security and destroys many interesting use cases. If a part of my build system does directory traversal to build the list of files to import, what difference does it make whether this happens as a separate step or as part of a single build step? The argument that somehow

```
dmd -run gen_code.d | dmd -
```

is more secure than just

```
dmd file.d  # file.d is allowed to access the FS at CT
```

makes no sense to me. See Jai for example. You can run absolutely *any* code at compile time. 5 years ago Jai's creator made a demo of running an OpenGL game at CT [1]. In the same demo he also used CTFE to validate calls to printf. He made the case that while many compilers go the route of hard-coding checks for printf-style functions in the compiler, he thinks that users should be able to implement arbitrary checks in their code. And 5 years later, instead of D expanding the frontiers of what's possible via CTFE, printf checking was hard-coded in the compiler [2]. [1]: https://www.youtube.com/watch?v=UTqZNujQOlA [2]: https://github.com/dlang/dmd/pull/10812/files Needless to say, unlimited CTFE has been a huge success for Jai.
What I wish is that we can learn from this and stop bringing the kind of arguments against D's CTFE that C people would bring ("My Makefile calls a Python script to generate C code and it's doing just fine, so I don't think one should be allowed to run code at compile time, as it will make the code just harder to follow"). As another example, types in Zig are first-class citizens [3] and can be manipulated with CTFE just like any other value. "Type functions" in D should just be regular D functions taking types as parameters and returning types. [3]: https://ziglang.org/documentation/master/#Introducing-the-Compile-Time-Concept
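Incidentally, for this particular example D's standard library already offers a user-space check: `std.format.format` can take the format string as a template argument and validates it against the argument types during compilation. A minimal sketch:

```d
// format!"..." checks the format string against the argument types at
// compile time - a library-level check rather than a compiler-hard-coded one.
import std.format : format;

void main()
{
    auto s = format!"%s has %d items"("list", 3);
    assert(s == "list has 3 items");
    // auto bad = format!"%d"("oops"); // rejected at compile time
}
```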
Re: Why is time_t defined as a 32-bit type on Windows?
On Friday, 7 August 2020 at 05:37:32 UTC, Andrej Mitrovic wrote: On Wednesday, 5 August 2020 at 16:13:19 UTC, Andrej Mitrovic wrote:

```
C:\dev> rdmd -m64 --eval="import core.stdc.time; writeln(time_t.sizeof);"
4
```

According to MSDN this should not be the case: https://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/time-time32-time64?view=vs-2019 time is a wrapper for _time64 and **time_t is, by default, equivalent to __time64_t**. But in Druntime it's defined as a 32-bit type: https://github.com/dlang/druntime/blob/349d63750d55d078426d4f433cba512625f8a3a3/src/core/sys/windows/stdc/time.d#L42 I filed it as an issue to get more eyes / feedback: https://issues.dlang.org/show_bug.cgi?id=21134 As far as I can gather, this was changed with MSVC 2005 [0], so perhaps the relevant change wasn't applied to the druntime Windows bindings. Also keep in mind that we revamped a large portion of the Windows bindings in 2015 [1], whose code was based on MinGW, IIRC. In versions of Visual C++ and Microsoft C/C++ before Visual Studio 2005, time_t was a long int (32 bits) and hence could not be used for dates past 3:14:07 January 19, 2038, UTC. time_t is now equivalent to __time64_t by default, but defining _USE_32BIT_TIME_T changes time_t to __time32_t and forces many time functions to call versions that take the 32-bit time_t. For more information, see Standard Types and comments in the documentation for the individual time functions. (^ Source [0]) [0]: https://docs.microsoft.com/en-us/cpp/c-runtime-library/time-management?view=vs-2019 [1]: https://github.com/dlang/druntime/pull/1402 Edit: I see you're discussing core.stdc.time, which actually wasn't part of the changes in [1]. In any case, druntime should offer all of time_t, __time32_t, and __time64_t, and have time_t time() default to 64-bit.
I do wonder what exactly is exported from the UCRT as time(), as from the docs it looks like it should be just a macro, but if anyone has used time() on Windows (from D) and didn't get linker errors or memory corruption, then I suppose they're still defaulting it to 32-bit to avoid ABI breakages.
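A rough sketch of what exposing both widths could look like (the alias names mirror the MSVC CRT; these are not the actual druntime declarations):

```d
// Hypothetical sketch, not druntime code: offer both widths and make
// time_t default to the 64-bit type, as MSVC has done since 2005.
alias __time32_t = int;    // 32 bits - overflows on 2038-01-19
alias __time64_t = long;   // 64 bits
alias time_t = __time64_t; // 64-bit by default

void main()
{
    static assert(time_t.sizeof == 8);     // what the MSDN docs describe
    static assert(__time32_t.sizeof == 4); // what druntime currently uses
}
```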
Re: core.thread vs std.concurrency - which of them to use?
On Thursday, 6 August 2020 at 01:13:28 UTC, Victor L Porton wrote: When to use core.thread and when std.concurrency for multithreading in applications? Is one of them a preferred way? Druntime's core.thread sets the foundation for D's multi-threading (or at least the non-betterC foundation). On top of it, Phobos's std.concurrency and std.parallelism provide higher-level abstractions. Which ones you should use depends on your application:

* If you want to build a web application, an event loop is the way to go -> look at libraries like vibe-d / vibe-core / eventcore (ordered from high-level to low-level).
* If you want to speed up a computation, then you're likely looking for data parallelism -> look into std.parallelism, and only if you need more control should you consider core.thread.
* If you need concurrency, either logical (representing different "processes" like web requests, AI agents in a simulation, or simply remembering different states of a graph iteration) or physical (using multiple cores to do things concurrently, but not necessarily based on data parallelism), look into std.concurrency.
* If you want to build a library (e.g. an event loop, task systems/futures/promises/reactive extensions, actor model, CSP, etc.), then you need to understand how things work under the hood, so I'd say that reading core.thread's source code would be valuable.

Cheers, Petar
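For the data-parallelism case, a minimal `std.parallelism` sketch:

```d
// amap evaluates the lambda on the default task pool, splitting the
// input across worker threads and collecting results into a new array.
import std.parallelism : taskPool;
import std.range : iota;
import std.array : array;

void main()
{
    auto squares = taskPool.amap!(x => x * x)(iota(10).array);
    assert(squares == [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]);
}
```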
Re: Strong typing and physical units
On Tuesday, 28 July 2020 at 04:40:33 UTC, Cecil Ward wrote: [snip] By the way, I found 2 implementations of units of measurement in D: https://code.dlang.org/packages/units-d https://code.dlang.org/packages/quantities
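The core idea behind such libraries - encoding the unit in the type so that incompatible quantities can't be mixed - can be sketched in a few lines (this is an illustrative toy, not either library's actual API):

```d
// Toy unit wrapper: the unit string is a phantom template parameter,
// so Metres and Seconds are distinct, incompatible types.
struct Quantity(string unit)
{
    double value;
    Quantity opBinary(string op : "+")(Quantity rhs) const
    {
        return Quantity(value + rhs.value);
    }
}

alias Metres  = Quantity!"m";
alias Seconds = Quantity!"s";

void main()
{
    auto d = Metres(3.0) + Metres(4.0);
    assert(d.value == 7.0);
    // Mixing units is a compile-time error:
    static assert(!__traits(compiles, Metres(1.0) + Seconds(1.0)));
}
```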
Re: Visual D 1.0.0 released
On Thursday, 9 July 2020 at 12:14:51 UTC, Jacob Carlborg wrote: On Thursday, 9 July 2020 at 08:40:24 UTC, Petar Kirov [ZombineDev] wrote: What I really wish is that we had a single shared codebase for dlang editor support, one that could be shared among editor extension writers, instead of having many community members working on competing solutions. That would be really nice. Doesn't Visual Studio (not VSCode) support LSP these days? -- /Jacob Carlborg Given that Microsoft were the ones who designed LSP in the first place, I'd be surprised if they don't. Rainer for sure knows more about it than me, but a quick Google search yields this as one of the top results: https://docs.microsoft.com/en-us/visualstudio/extensibility/language-server-protocol?view=vs-2019 Note that this article is for VS and not VS Code.
Re: Visual D 1.0.0 released
On Thursday, 9 July 2020 at 12:06:52 UTC, aberba wrote: On Thursday, 9 July 2020 at 08:40:24 UTC, Petar Kirov [ZombineDev] wrote: On Thursday, 9 July 2020 at 00:03:02 UTC, Manu wrote: Not really. VisualD is objectively the most functional and competent IDE/Debugger solution, BY FAR. It's not an opinion, it's a measurable fact. Windows really sucks as a dev environment. Probably Manu and I are arguing from OPPOSITE sides. Linux as a dev env in itself contributes 60-70% of the better-ness over a Windows env for development. It makes sense that he holds such an opinion since he's on Windows... having to rely on Visual Studio for everything. Visual Studio as an IDE is pretty solid though... just not for everyone. Yep. Nevertheless, VS Code is pretty good for development. It's not an IDE, BTW, and even then it's quite interesting that people think of it as such. D integration is not perfect, but it's what most of us use. I know a lot of people in the community use it. I might as well say it's the most used code editor on earth. VSCode is not an IDE out of the box, but with its extensions it's able to become a much better IDE than many other actual IDEs for many use cases. Nevertheless, VisualD is high quality (not comparing here)... it makes sense considering the amount of work and years put into it. Yes, I agree that it's amazing.
Re: Visual D 1.0.0 released
On Thursday, 9 July 2020 at 10:22:50 UTC, Manu wrote: FWIW, I actually agree with everything you said about Linux as a dev environment vs Windows. But that wasn't the question... as an IDE and debugger integration, there is absolutely no comparison to VisualD, not by miles. While I agree about debugging in VS vs VS Code, I'd say that for my use cases VS Code is both a better editor and a better *IDE*. VS may come more fully-featured than VS Code out of the box, but with its extension ecosystem VS Code is a better IDE for my use cases, and I suspect for many other people. Of course, your mileage may vary. It would be really cool if parts of VisualD were made more suitable for VSCode, but I can't see that being easy or practical. One is the Concorde integration, which is pretty deep, and GDB is just not even remotely as good, and the vscode debug UX is embarrassing by contrast. I don't care about the VS debug engine since it's Windows-only. Some of the UX may be nice to replicate, but I think this falls outside the scope of a dlang editor extension, if said editor already has general native code debugging functionality. Also, some people even disagree that VS is better than GDB in general: https://www.quora.com/Why-is-the-Visual-Studio-C%2B%2B-debugger-much-better-than-GDB-LLDB-or-any-other-debugger-for-C%2B%2B?ch=10=b4f38907=3E2D0 Even if I agree that VS provides a better debugging experience than VS Code, GDB is a more powerful tool overall, so I don't miss Concorde on Linux. Then the general autocomplete engine, which is fairly dependent on the detail expressed in the project files. This is false. Most compilers don't work with project files. Same for LSPs. All you need is the list of all importable files and the current active build configuration (what compiler flags are set). It is the job of the editor/IDE extension to figure out the build system or parse through project files.
The autocomplete engine / LSP implementation doesn't need to know about that stuff. Nobody writes VS project files, you generate them, just the same as makefiles... nobody writes makefiles. The problem is that there are many things (like MSBuild tasks in general) that the VS solution/project properties window doesn't allow you to edit effectively, or at all. Yes, the UI may be sufficient for many/most developers, but that hasn't been the case at all for me. E.g. if you make changes through the UI, like switching build configurations between x86/x64 and Debug/Release, VS ends up duplicating large parts of the configuration, while if you edit the *proj files by hand you can avoid the duplication and make the files easier to read overall. The other deal-breaker for me is that when the files are in version control I have to read the XML anyway in order to track changes. Using the UI to track changes to project files is just a non-starter. So, having had to edit both VS *.*proj files and Makefiles manually, I'd say that Makefiles are orders of magnitude more approachable and easier for me. MSBuild is just a giant PITA in my experience. Though I agree that I don't find Makefiles enjoyable either :D, but at least I can more easily track changes to them in VCS.
Re: Visual D 1.0.0 released
On Thursday, 9 July 2020 at 08:40:24 UTC, Petar Kirov [ZombineDev] wrote: Code-D is great work, but it's still catching up, and it may never do so because VSCode just has an embarrassingly bad debugger :( Professionally, I've used Visual Studio for the first 3-4 years of my career. Back then the company I worked for was an MSFT partner, so we all had the Professional or Ultimate edition that had all the bells and whistles. I agree that VS has probably the best debugger, though I'd actually say that the debugging experience is much better with C# than C++. Debugging C++ (with /Od and with or without /Zo) feels wanky compared to C#, which has always been rock-solid. s/wanky/kind of janky/
Re: Visual D 1.0.0 released
On Thursday, 9 July 2020 at 00:03:02 UTC, Manu wrote: Not really. VisualD is objectively the most functional and competent IDE/Debugger solution, BY FAR. It's not an opinion, it's a measurable fact. Obviously, if you are into vim/emacs/whatever, then you don't actually really care much about IDE support and debugging, and in that case, this question is not relevant to you. I agree that Code-D + VSCode is probably the second best solution, but there's really no comparison; the debugger is a kind of funny/sad joke, the D debug experience is poorly integrated, and the intellisense/autocomplete is nowhere near the same standard. There's no competition. Code-D is great work, but it's still catching up, and it may never do so because VSCode just has an embarrassingly bad debugger :( Professionally, I've used Visual Studio for the first 3-4 years of my career. Back then the company I worked for was an MSFT partner, so we all had the Professional or Ultimate edition that had all the bells and whistles. I agree that VS has probably the best debugger, though I'd actually say that the debugging experience is much better with C# than C++. Debugging C++ (with /Od and with or without /Zo) feels wanky compared to C#, which has always been rock-solid. However, I've since moved to Linux and I couldn't be happier. I haven't had to fire up Windows for the past 1-2 years. On my work machine, I have neither a dual boot, nor even a Windows VM, just Linux. Windows really sucks as a dev environment. And I'm telling this as someone who would for years be one of the first among my colleagues and friends to install the latest Windows, VS, MSVC, .NET FX / .NET Core preview builds, Chocolatey, vcpkg, WSL, Windows Terminal, Cygwin, Msys, Msys2 and so on. The only salvation I see is WSL2, but still, it's overall a pretty bad dev UX. No matter how much effort is put into a GUI IDE, nothing beats Unix as an IDE, especially modern distros such as NixOS (my daily driver).
Yes, it takes much more effort for beginners than VS, but it's all worth it. Coming back to VS Code, for what I do in my daily job it's really destroying the "real" VS:

* It's cross-platform, so I can take my dev environment to whichever OS I work on.
* You don't need to create a "project file" to effectively work on a project.
* On Windows, an admin user is not necessary to install & update. This makes the update process unnoticeable, whereas VS, before its new modular installer, was unbearably slow (an hour at minimum).
* Start time is much better. Additionally, in many cases you don't need to restart when you install/uninstall an extension - this makes it much easier to test extensions for 1-2 minutes and then throw them away.
* The extensions integrate much better - in many cases it takes < 10 secs to install something, while with VS it takes at least 1 min in my experience, sometimes even several minutes, depending on the size of the extension.
* VS Code integrates much better with the system - on Windows you just right-click to open a folder or file and it's opened in less than 1-3 secs. In the terminal you just type `code ` and it's done. I know this already works with full VS and I have used it, but its much slower startup time defeats this workflow.
* For beginners (who don't know vim), VS Code is actually not a bad choice as the default git editor (it's just `git config --global core.editor "code --wait"`) (e.g. for interactive rebase, writing commit messages, git add -p edit, and so on).
* Given that I spend at least 30-70% of my time in the terminal, VS Code's integrated terminal is much better than whatever VS has had when I tried it over the years. I'd like the perf to be better with vim and git diff, but it's very workable.
* vscodevim still leaves much to be desired, but it's miles ahead of the alternative extensions for the full VS.
* The editor as a whole is much *easier* to customize and I feel that in the past 1-2 years it has started to be *more* customizable compared to VS.
* Extensions like Remote Development for containers and SSH are life savers. I couldn't live without them (if I have to use a GUI editor / IDE).
* The overall language support is much better. VS does a couple of languages really well, but VSCode has a much richer extensions gallery and supports many more languages.
* Of course, I'm biased, since I haven't had to use a debugger in the past several months, but these days I'd always pick an editor with a much better extensibility story, because many of the things I need daily I haven't found alternatives for in VS.

Rainer, the work you have done with VisualD is astounding! I have always been extremely impressed by the progress you have been making over the years! (Of course, not a high priority by any means, but) it would be great to have VisualD's engine for VS Code! I know that a large part of VisualD is very tightly coupled with VS, but I think that anything that could be
Re: Decimal string to floating point conversion with correct half-to-even rounding
On Tuesday, 7 July 2020 at 12:14:16 UTC, Guillaume Piolat wrote: Phobos is the stdlib of the language. Mir is not. I'm not sure why you point this out. No one is arguing that it is. On the other hand, it does many things better already. Likewise, you've made the std.experimental.allocator on DUB depend on mir-core... a stdlib shouldn't depend on non-stdlib code, there isn't anything to debate on this point. stdx-allocator [1] is not "stdlib" and is not meant to be part of Phobos (the opposite, actually). It's a fork of the code in Phobos, made with the obvious intention of doing things differently than Phobos. If you want to use Phobos, then... use Phobos :D Actually, mir-core is not a dependency of stdx-allocator in general, just in V3. You can still use the 2.77.z branch, which doesn't have any dependencies [2]. Many projects do use it. [1]: https://github.com/dlang-community/stdx-allocator [2]: https://github.com/dlang-community/stdx-allocator/blob/2.77.z/dub.sdl
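For reference, pinning a project to that dependency-free 2.77.z line in `dub.sdl` would look roughly like this (the exact version constraint is illustrative):

```sdl
dependency "stdx-allocator" version="~>2.77.0"
```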
Re: tardy v0.0.1 - Runtime polymorphism without inheritance
On Tuesday, 16 June 2020 at 09:15:10 UTC, Atila Neves wrote: On Tuesday, 16 June 2020 at 03:56:52 UTC, Petar Kirov [ZombineDev] wrote: On Saturday, 13 June 2020 at 15:11:49 UTC, Atila Neves wrote: https://code.dlang.org/packages/tardy https://github.com/atilaneves/tardy Looks interesting, nice work! How does it compare to: https://dlang.org/phobos/std_experimental_typecons#.wrap ? For starters, that uses a class and inheritance internally and therefore has all the drawbacks of that approach, as laid out in tardy's README.md. Then there's the lack of allocator support. In the longer term, is the goal of the project to implement a TypeScript / Go interfaces-like structural type system in user space? Yes. Other than allowing multiple interfaces, I think it's already implemented. Cool! Also, how would it compare to Rust traits? Rust's traits are usually used like D's template constraints and Haskell's type classes. The only way they're relevant here is trait objects: Yes, I meant trait objects actually. https://doc.rust-lang.org/reference/types/trait-object.html The main difference is that tardy is supposed to give the user choices over how the dispatch is actually implemented. Allocators alone are huge. Interesting! I guess the main difference would be that Rust enforces a nominal type system-like approach, where 2 differently named traits that otherwise define the same interface are not considered interchangeable. Yes, that's also a difference.
Re: tardy v0.0.1 - Runtime polymorphism without inheritance
On Saturday, 13 June 2020 at 15:11:49 UTC, Atila Neves wrote: https://code.dlang.org/packages/tardy https://github.com/atilaneves/tardy Looks interesting, nice work! How does it compare to: https://dlang.org/phobos/std_experimental_typecons#.wrap ? In the longer term, is the goal of the project to implement a TypeScript / Go interfaces-like structural type system in user space? Also, how would it compare to Rust traits? I guess the main difference would be that Rust enforces a nominal type system-like approach, where 2 differently named traits that otherwise define the same interface are not considered interchangeable.
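For contrast, the structural (Go/TypeScript-style) approach can already be approximated at compile time in D with template constraints; a toy sketch (all names are made up):

```d
// Compile-time structural ("duck") typing: any type with a callable
// draw() member satisfies the constraint - no common base type needed.
enum isDrawable(T) = is(typeof((T t) { t.draw(); }));

struct Circle { string draw() { return "circle"; } }
struct Square { string draw() { return "square"; } }

string render(T)(T shape) if (isDrawable!T)
{
    return shape.draw();
}

void main()
{
    assert(render(Circle()) == "circle");
    assert(render(Square()) == "square");
    static assert(!isDrawable!int); // ints have no draw()
}
```

Tardy goes further by making this work at runtime through type-erased dispatch, which is what the thread above is about.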
Re: LDC 1.22.0-beta2
On Monday, 1 June 2020 at 17:29:44 UTC, kinke wrote: Glad to announce the second beta with the following main additions: - Based on DMD/druntime/Phobos stable from a couple of days ago. - `pragma(inline, true)` fix when emitting multiple object files in a single cmdline. This may have a significant impact on performance (incl. druntime/Phobos) when not using LTO. - Complete FreeBSD x86_64 support, incl. CI. - iOS/arm64 CI, running the debug druntime & Phobos unittests on an iPhone 6S. - core.math.ldexp 6-14x faster (Linux/Windows) on my i5-3550. Full release log and downloads: https://github.com/ldc-developers/ldc/releases/tag/v1.22.0-beta2 Please help test to ensure a smooth final release, and thanks to all contributors! Awesome progress, thank you @kinke and everyone else involved! Which FreeBSD version(s) are supported/have you tested with?
Re: DIP1028 - Rationale for accepting as is
On Monday, 25 May 2020 at 23:39:33 UTC, Andrei Alexandrescu wrote: [..] Thank you, Andrei, you've put this quite eloquently. With 200+ replies, unfortunately, this whole discussion looks like an excessively inefficient use of the community's time. One way to resolve this stalemate (which I've proposed in another post) is to split off the point of contention from DIP1028. In other words, amend the newly proposed compiler switch `-preview=safedefault` to not change the meaning of non-extern(D) function declarations, and introduce a new compiler switch `-implicitly-{safe,trusted}-extern-fn-decls` (*) that adds the automatic "greenwashing" functionality that Walter desires so much. (*) I think that `-implicitly-safe-extern-fn-decls` would be a lie, but `-implicitly-trusted-extern-fn-decls` I can tolerate.
Re: DIP1028 - Rationale for accepting as is
On Monday, 25 May 2020 at 13:43:07 UTC, Paul Backus wrote: On Monday, 25 May 2020 at 13:22:36 UTC, Petar Kirov [ZombineDev] wrote: On Monday, 25 May 2020 at 13:14:51 UTC, Petar Kirov [ZombineDev] wrote: It may be true (of course modulo meta-programming) that it doesn't make a difference for the calling code, but I personally want have the guarantees that a function that I'm doesn't make a difference for the calling code, but personally I want [to] have the guarantees that a function that I'm calling is truly @safe (it doesn't contain or call any @trusted code, transitively, nor does it call any @safe code which accesses global variables initialized by @system static/module constructors). This is very far from a rigorous definition of "strong @safe-ty", but I hope it's just enough for the casual reader to understand my intention. I'm sure this is reasonable for your use case, but I hope you can recognize that this definition of safety is far too narrow to be suitable for a general-purpose programming language (which D purports to be). Most people would like their @safe code to be able to do I/O, for example, despite the fact that it necessarily involves calling @system code under the hood. I don't want to change the definition of @safe in D, but would rather like it if D supported @strongSafe, which interested people like me could opt into. I know that worded like this it may sound like too narrow a feature to add to the language (or at least one without a favorable complexity/use-case ratio). So instead, I'd like to have transitive UDAs [1], a feature that has been requested by many, for various use cases ;) [1]: Basically I want to be able to implement function attributes like @nogc or nothrow in user space, but that's a long way from now, as first we need to be able to introspect function bodies.
Re: DIP1028 - Rationale for accepting as is
On Monday, 25 May 2020 at 13:14:51 UTC, Petar Kirov [ZombineDev] wrote: It may be true (of course modulo meta-programming) that it doesn't make a difference for the calling code, but I personally want have the guarantees that a function that I'm doesn't make a difference for the calling code, but personally I want [to] have the guarantees that a function that I'm calling is truly @safe (it doesn't contain or call any @trusted code, transitively, nor does it call any @safe code which accesses global variables initialized by @system static/module constructors). This is very far from a rigorous definition of "strong @safe-ty", but I hope it's just enough for the casual reader to understand my intention. In my line of work (blockchain smart contracts) some of the ways this is typically achieved include:

* having a very minimal smart contract code size
* having either no third-party dependencies, or using one or two which are open-source and, more importantly, verified by multiple teams and having a very high reputation
* extensive code auditing by third-party teams.

Depending on the circumstances, we may end up paying more for the auditing of the code than for the actual development. That said, there is no "strong"-@safe today and even if there That said, there is no "strong-@safe" [in D] today and even if there was, it would account for a tiny subset of all attack vectors that I have to care about (basically all possible logical bugs allowed in type-safe and memory-safe code), but I'm not sure how erasing the difference between @safe and @trusted on the interface level would help.
Re: DIP1028 - Rationale for accepting as is
On Monday, 25 May 2020 at 12:41:01 UTC, Paul Backus wrote: On Monday, 25 May 2020 at 12:30:11 UTC, Zoadian wrote: On Monday, 25 May 2020 at 10:41:43 UTC, rikki cattermole wrote: It is meant to mean that at some point it has been mechanically checked by the compiler, either during the current compilation or a prior one. Which means it has to be valid on function declarations without bodies so that i.e. .di file generation works correctly, a .di file being just a generated D file, nothing special syntax- or semantics-wise. .di files _could_ just use @trusted instead of @safe. But for extern(D) we could at least add it to the name mangling. It's still not 100% safe, but at least you'd have to work hard to get it wrong. It's been proposed before that @safe and @trusted should have the same mangling, since there's no difference between them from the calling code's perspective. It may be true (of course modulo meta-programming) that it doesn't make a difference for the calling code, but I personally want have the guarantees that a function that I'm calling is truly @safe (it doesn't contain or call any @trusted code, transitively, nor does it call any @safe code which accesses global variables initialized by @system static/module constructors). In my line of work (blockchain smart contracts) some of the ways this is typically achieved include:

* having a very minimal smart contract code size
* having either no third-party dependencies, or using one or two which are open-source and, more importantly, verified by multiple teams and having a very high reputation
* extensive code auditing by third-party teams.

Depending on the circumstances, we may end up paying more for the auditing of the code than for the actual development.
That said, there is no "strong"-@safe today and even if there was, it would account for a tiny subset of all attack vectors that I have to care about (basically all possible logical bugs allowed in type-safe and memory-safe code), but I'm not sure how erasing the difference between @safe and @trusted on the interface level would help.
Re: DIP1028 - Rationale for accepting as is
On Monday, 25 May 2020 at 11:40:46 UTC, Johannes T wrote: On Monday, 25 May 2020 at 10:19:22 UTC, Johannes Loher wrote: [..] But with the DIP in its current form, we make @safe lose its meaning and power, which is much worse in my opinion. [..] The alternative, not making extern @safe, would result in more untrustworthy @trusted code we have to worry about. It's a vicious circle. Wrong. The quantity of untrustworthy code remains the same, but with DIP1028 (at least in its current form) the compiler sweeps the previously @system code under the rug and makes it harder for those who care about safety to trust @safe. @safe must mean only one thing: compiler-verified, or otherwise needing less manual review. @system and @trusted mean that code review should be prioritized. Marking non-extern(D) declarations @safe - whether explicitly by the programmer, or implicitly by the compiler - should be disallowed, as it *is* greenwashing. I try to relax my view on extern annotations. They are @system. We *should* go ahead and diligently mark them @trusted. From experience, it doesn't normally happen. It didn't happen, because it didn't need to. Naturally, most things go through the path of least resistance. Most developers are coming from other languages where they have never had the requirement to write @safe code [external pressure]. Also, previously @safe wasn't the default for D function definitions, so there was less [internal pressure] to do so. With @safe being the default for function definitions, it's more difficult to leave code as @system (of course, modulo @trusted). I don't like @safe extern, but it seems like the lesser evil. No, @safe extern is the worst possible option! It basically makes @safe meaningless. Walter got a lot of flak. I tried to retrace his thoughts and see the merits. On several occasions (e.g. on Reddit) I have defended Walter from unfair accusations; however, in this case he's rightfully criticized.
He seems to think that he's making an unpopular decision for the greater good, but that's not the case. @safe-by-default for D function definitions could be considered an unpopular decision for the greater good. Implicitly @safe non-extern(D) function declarations are greenwashing, where the responsibility for the action is removed from the developer by a compiler switch. That basically negates all the benefits of @safe-by-default function definitions.
Re: DIP1028 - Rationale for accepting as is
On Friday, 22 May 2020 at 17:12:47 UTC, Atila Neves wrote: [..] Yes, there's a cost, which is carefully vetting extern(C) and extern(C++) declarations. The decision came down to finding this an acceptable trade-off. How would you feel about a DIP whose only effect was to make all non-extern(D) code implicitly @trusted? And how about if all such code suddenly became @safe without any vetting by developers, and without even a compiler switch to revert to the old behavior? Until DIP1028, putting @trusted: at the start of a module has always been considered bad practice, and it is rightfully forbidden in libraries with a higher quality bar, such as Phobos. But at least doing so is honest: you as an author are admitting that you don't have time to take care of @safe-ty for now, but that it is likely ok to assume the rest of your project is safe (well, at least the @safe functions), *modulo* the @trusted parts. That way, you can later come back, search for @trusted, and address those issues one by one. As evidenced by the community outrage, no one but Walter and (maybe) you is convinced that safe-by-default on function bodies should imply @safe on non-extern(D) function declarations.

---

So how about a compromise?

1. We accept a modified version of DIP1028 in which safe-by-default applies only to function definitions and extern(D) function declarations.
2. We add a separate compiler switch that changes non-extern(D) function declarations to be @trusted.

If you enable both 1. and 2. you get the current version of DIP1028, but at least corrected so that extern(C) code is @trusted, not @safe. If you like this "feature" (2.), you're free to use it on your personal projects, but please don't force it on everyone who wants @safe to be something meaningful.
Re: DIP1028 - Rationale for accepting as is
On Friday, 22 May 2020 at 17:12:47 UTC, Atila Neves wrote: [..] Yes, there's a cost, which is carefully vetting extern(C) and extern(C++) declarations. The decision came down to finding this an acceptable trade-off. How would you feel about a DIP whose only effect was to assume all non-extern(D) code was implicitly @trusted? And how about if all such code suddenly became @safe without any vetting by developers, and without even a compiler switch to revert to the old behavior? As evidenced by the community outrage, no one but Walter and (maybe) you is convinced that safe-by-default on function bodies should imply @safe on non-extern(D) function declarations. So how about a compromise? We accept a modified version of DIP1028 in which safe-by-default applies only to function definitions and extern(D) function declarations. And then there's a separate compiler switch that changes non-extern(D) function declarations to be @trusted? If you like this "feature", you're free to use it on your personal projects, but please don't force it on everyone who wants @safe to mean something meaningful.
Re: $750 Bounty: Issue 16416 - Phobos std.uni out of date (should be updated to latest Unicode standard)
On Monday, 4 May 2020 at 17:01:01 UTC, Robert M. Münch wrote: ... I believe this is an excellent initiative, thank you for starting it! Perhaps this script, along with the repository it is part of, can help those wishing to update std.uni to the latest version: https://github.com/DmitryOlshansky/gsoc-bench-2012/blob/master/gen_uni.d

With regard to the rate at which pull requests are merged into the core repositories, I would say that it is highly context-dependent. I strongly advise either:

a) subscribing to notifications from the core dlang repositories (dmd, druntime, phobos, dub, etc.) for an extended period of time (3 months min) - you'll be able to observe the group dynamics (e.g. which contributors have experience with which part of the codebase, why some things are merged quickly and others take a while, etc.) - this way you can really draw conclusions for yourself

b) looking at the statistics:
- https://github.com/dlang/dmd/pulse/monthly
- https://github.com/dlang/druntime/pulse/monthly
- https://github.com/dlang/phobos/pulse/monthly
- https://github.com/dlang/dub/pulse/monthly

as opposed to drawing conclusions from single data points of anecdotal evidence.

From my several years of experience, I can say the following:
- small, less complex pull requests are generally easy to get merged
- it depends on the part of the codebase - if you open a pull request for a part whose maintainers are currently active, you can expect a speedy review; if it's a part (e.g. std.regex) that is both highly complex and has a small number of maintainers, then it may take a while
- teamwork and communication - since all of us live in different time zones, rather than working in the same office, you should be prepared for communication (which is a prerequisite for merging) to be high-latency.

Changes that are well described, whose benefit is clear, and that don't look like they may introduce regressions are of course received well.
Discussion prior to opening a merge request can help guide the implementation in the right direction and save time later in the review process. Many contributors are active on the dlang Slack [1], which makes it a good place to ping people for feedback, or just to have a near real-time conversation. In the past 1-3 years, I have noticed a trend that many active contributors are mostly active on GitHub and Slack, rather than the newsgroup. If you see that a pull request has fallen through the cracks (no new replies from maintainers), don't hesitate to ping us either there or here on the newsgroup. [1]: https://dlang.slack.com/
Re: wlanapi.h
On Tuesday, 14 April 2020 at 09:42:44 UTC, Виталий Фадеев wrote: I was writed "wlanapi.h". It is WLAN API windows header. I will be happy to see it in public D distributive. [...] The best way to go is to contribute your Windows API declarations to the upstream Druntime project: https://github.com/dlang/druntime/tree/master/src/core/sys/windows To do that, you can follow these guides: - https://github.com/dlang/druntime/blob/master/CONTRIBUTING.md - https://wiki.dlang.org/Starting_as_a_Contributor P.S. Please use the Learn or Announce groups for questions like this.
Re: DLS deprecation
On Thursday, 9 April 2020 at 15:25:46 UTC, Laurent Tréguier wrote: On Thursday, 9 April 2020 at 14:59:41 UTC, Petar Kirov [ZombineDev] wrote: Thanks a lot for your work! What do you think about transferring the project to dlang-community? Also, I think it's better to leave the VSCode extension in the marketplace, even if you're not able to continue working on it, people would still like to continue to use it. For example, I recently switched computers and I just found out that the extension was unpublished. It could be transferred to dlang-community if other members agreed to, but I don't know what this would achieve; I don't see the benefit of adding an archived project there. The idea is to move it there, so other motivated members of the community can pick up the torch from where you left it and continue active development. The reason I unpublished it is because I don't want to leave an unmaintained extension in the marketplace. Yes, I understand your intention. I still think it's better to leave it there for now, even if it's completely unmaintained, as removing it immediately causes more friction than the potential problems you're trying to avoid. Later on, we can publish the extension again under the dlang-community publisher and then you won't have to worry that people will complain to you when things break. However, it's still possible to use it, you simply need to clone the extension repo with git, run `npm install` and `./node_modules/.bin/vsce package`, and then install the resulting VSIX file from VSCode (in the extension panel, there is an option to "install from VSIX") Thanks, I'll try this!
Re: DLS deprecation
On Wednesday, 8 April 2020 at 12:47:57 UTC, aliak wrote: [..] I've been meaning to give flutter a try though... it seems to be catching steam. Only problem is google is "known" for just dropping things. But who knows, let's see. Flutter is indeed pretty cool. We used it last year at work, and we'll likely continue using it in the second half of this year. I don't think Flutter will go away, as it's been gaining really high traction (almost 90k stars on GitHub [1]), and in general, it looks like Google is accelerating its investment in the tech and the community [2]. Also, AFAIK, it's the primary app platform for their upcoming Fuchsia OS. The Dart language, however, is seriously handicapped. It's much better than Go, but that's a pretty low bar. Ever since I've used Flutter, I've been making plans to create a tool that translates Dart code to D, so I could use the Flutter engine and the Flutter framework to write a D app ;) [1]: https://github.com/flutter/flutter [2]: https://flutterevents.com/
Re: DLS deprecation
On Thursday, 9 April 2020 at 13:06:42 UTC, Laurent Tréguier wrote: Thank you, and thank you to everyone else in this thread. I'll probably still be watching D's evolution from afar, and I wish all the best to this community! Thanks a lot for your work! What do you think about transferring the project to dlang-community? Also, I think it's better to leave the VSCode extension in the marketplace, even if you're not able to continue working on it, people would still like to continue to use it. For example, I recently switched computers and I just found out that the extension was unpublished.
Re: dlang-requests 1.1.0 released
On Sunday, 5 April 2020 at 11:53:29 UTC, Petar Kirov [ZombineDev] wrote: On Sunday, 5 April 2020 at 08:59:50 UTC, ikod wrote: Hello! Just a note that dlang-requests ver 1.1.0 has been released, with new 'ByLine' interfaces added for get/post/put requests. Range algorithms can be applied to server responses, so that a simple chain like

```d
getContentByLine("https://httpbin.org/anything")
    .map!"cast(string)a"
    .filter!(a => a.canFind("data"))
```

should work. These calls work lazily, so you can apply them to large documents. dlang-requests - HTTP client library, inspired by python-requests, with these goals: small memory footprint, performance, simple high-level API, native D implementation. https://github.com/ikod/dlang-requests https://code.dlang.org/packages/requests Always waiting for your bug reports and proposals on the project page. Best regards!

Nice work! I noticed that by default trace logging from requests is turned on, which is quite verbose. I checked the docs in the readme file and at first I only saw suggestions on how to customize the logging by writing a LoggerInterceptor, which I thought was a bit too much for casual use. Only after a bit more digging did I find that you were using std.experimental.logger (e.g. here [1]). Later I found that the SSL example in the readme used std.experimental.logger [2], but I think it would be better to explicitly mention it earlier on, e.g. like this:

```d
/+dub.sdl:
dependency "requests" version="~>1.1"
+/
void main()
{
    import requests : getContentByLine;
    import std : assumeUTF, canFind, each, filter, map, write, writeln;

    /*
     * dlang-requests uses `std.experimental.logger` for internal logging.
     *
     * The globalLogLevel is set to `LogLevel.all` by default in Phobos,
     * which may be too verbose for most applications.
     *
     * This can be changed like this:
     */
    import std.experimental.logger : globalLogLevel, LogLevel;
    globalLogLevel = LogLevel.info;

    getContentByLine("https://httpbin.org/anything")
        .map!assumeUTF
        .filter!(a => a.canFind("data"))
        .each!write;
}
```

[1]: https://github.com/ikod/dlang-requests/blob/v1.1.0/source/requests/streams.d#L383
[2]: https://github.com/ikod/dlang-requests/blob/v1.1.0/README.md#ssl-settings
Re: dlang-requests 1.1.0 released
On Sunday, 5 April 2020 at 08:59:50 UTC, ikod wrote: Hello! Just a note that dlang-requests ver 1.1.0 has been released, with new 'ByLine' interfaces added for get/post/put requests. Range algorithms can be applied to server responses, so that a simple chain like

```d
getContentByLine("https://httpbin.org/anything")
    .map!"cast(string)a"
    .filter!(a => a.canFind("data"))
```

should work. These calls work lazily, so you can apply them to large documents. dlang-requests - HTTP client library, inspired by python-requests, with these goals: small memory footprint, performance, simple high-level API, native D implementation. https://github.com/ikod/dlang-requests https://code.dlang.org/packages/requests Always waiting for your bug reports and proposals on the project page. Best regards!

Nice work! One quick suggestion: avoid direct casting from `ubyte[]` to `string`:

```d
/+dub.sdl:
dependency "requests" version="~>1.1"
+/
void main()
{
    import requests : getContentByLine;
    import std : assumeUTF, canFind, each, filter, map, write;

    getContentByLine("https://httpbin.org/anything")
        .map!assumeUTF // instead of map!"cast(string)a"
        .filter!(a => a.canFind("data"))
        .each!write;
}
```

1. From a code-style point of view, assumeUTF is better as it shows the intention to the reader - assume that the content is valid UTF-8 encoded text, without performing validation. And if there are UTF-8 errors, it is easy to go back and add validation there.

2. Avoid casting mutable data to immutable. The data path in your library is rather complex (getContentByLine -> _LineReader -> LineSplitter -> Buffer -> ...), so it was hard to tell at a quick glance whether or not the buffer array is reused (my guess is that it is). If the buffer array is reused, it means that the result of calling _LineReader.front() may be modified at a later point in time, which can obviously lead to some rather nasty bugs in users' code.
I suggest you look into Steven's iopipe[1] library, as I believe it can help you clean up and refactor this part of the codebase (and can probably yield some performance improvements along the way). [1]: https://github.com/schveiguy/iopipe
Re: DIP 1027---String Interpolation---Format Assessment
On Thursday, 27 February 2020 at 14:58:20 UTC, Adam D. Ruppe wrote: On Thursday, 27 February 2020 at 14:32:29 UTC, Petar Kirov [ZombineDev] wrote: 2. Have the new type implicitly convert to printf-style args. I think this is what Adam is proposing. While nice to have, I don't think it's necessary. You can read my document for more detail https://github.com/dlang/DIPs/pull/186 But basically

```d
writefln(i"hi $name, you are visitor ${%2d}(count)");
```

gets turned into:

```d
writefln(
    // the format string is represented by this type
    new_type!("hi ", spec(null), ", you are visitor ", spec("%2d"))(),
    // then the referenced arguments are passed as a tuple
    name, count
)
```

So very, very, very similar to Walter's proposal, just instead of the compiler generating the format string as a plain string, the format string is represented by a new type, defined by the spec and implemented by druntime. As a result, no more guesswork - it is clear that this is meant to be interpreted as a format string. It is clear which parts are placeholders/specifiers for which arguments.

Perhaps my assumptions were based on an old version of your proposal. What I want is for:

```d
auto s = i"hi $name, you are visitor ${%2d}(count)";
```

to lower to:

```d
auto s = new_type!(
    "hi ", spec(null), ", you are visitor ", spec("%2d")
)(name, count);
```

I.e. the referenced arguments are passed to the constructor of new_type. That way new_type can offer implicit conversion to string, while support for zero-allocation printf, write, writeln, writef, writefln and so on can be done via function overloading.
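To make the overloading idea concrete, here is a toy sketch. All names (`Interpolated`, `writeOut`) and the layout of the struct are hypothetical illustrations, not part of any proposal; the real type would be defined by druntime, as discussed above.

```d
// Hypothetical stand-in for the distinct type an i"..." literal could
// lower to: literal text fragments plus the interpolated values.
import std.stdio : write;

struct Interpolated(Args...)
{
    string[Args.length + 1] fragments; // literal text around the arguments
    Args args;                         // the interpolated values
}

// Overload 1: plain string (may require building a string at the call site).
void writeOut(string s)
{
    write(s);
}

// Overload 2: the distinct type - formatted piece by piece, with no
// intermediate string allocation.
void writeOut(Args...)(Interpolated!Args interp)
{
    foreach (i, arg; interp.args)
        write(interp.fragments[i], arg);
    write(interp.fragments[$ - 1]);
}

void main()
{
    // What i"hello $name!" might lower to under this scheme:
    writeOut(Interpolated!string(["hello ", "!"], "world"));
    writeOut("\n");
}
```

The point of the sketch is only that ordinary function overloading lets the same call site serve both the allocating and the zero-allocation path.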
Re: DIP 1027---String Interpolation---Format Assessment
On Thursday, 27 February 2020 at 09:30:30 UTC, Walter Bright wrote: On 2/27/2020 12:27 AM, Petar Kirov [ZombineDev] wrote: I'm well aware that allocation is inevitable if we want this behavior. My argument is that this behavior is so ubiquitous that not following it would be surprising to many more people than if D didn't follow C's Usual Arithmetic Conversions rules. For example, Rust not following those conversion rules is considered a good thing, Rust does not follow C syntax at all, so nobody will reasonably expect it to have C semantics. D does follow it, it's a feature, so people will have expectations. I'm not sure where exactly you draw the line, but I would say that C# follows C's syntax about as much as D does. Yet it doesn't import some of the broken C semantics, like implicit narrowing conversions (luckily, neither does D) and allowing mixed-sign comparisons (the oldest open D issue :( [0]). My point is that if D didn't follow the usual arithmetic conversions, far fewer newcomers would even notice, compared to the extremely large backlash that we may get if we go with the string interpolation -> raw tuple approach. [0]: https://issues.dlang.org/show_bug.cgi?id=259 while if D decided to be different than all other languages w.r.t. string interpolation, You can make it behave like all those other languages simply with: f(format("hello $a")); and there it is. But having it generate a GC-allocated string is not so easy to unwind, i.e. it'll be useless with printf and generate unacceptable garbage with writefln. The extra string will always make it slow. Essentially, it'll be crippled. Making D behave like a scripting language will yield scripting performance. I know, I know. Though I think you misunderstood. There are several ways to make printf work with zero allocations. For example: 1. Have a simple pragma(inline, true) wrapper function that will convert the distinct type to printf-style args. 
This wrapper function can even be named printf as it would work by virtue of function overloading. This is O(1) additional code that once written no one will need to bother with. 2. Have the new type implicitly convert to printf-style args. I think this is what Adam is proposing. While nice to have, I don't think it's necessary. As for std.stdio.write(f)(ln), there's no reason why any garbage would need to be created, as again, a simple overload that accepts the distinct type will be able to handle it just like it would (performance-wise) with DIP1027. However, with DIP1027, only writef and writefln can be used, while write and writeln will produce wrong results (wrong according to people that have used string interpolation in other languages). D is a language built up from simple, orthogonal parts (or at least that is a goal). A language built from larger indivisible parts is much, much less user-adaptable. I appreciate the sentiment. Having orthogonal features in D is important to me as well. However, I think DIP1027 falls short here because it produces a single format string and that way loses all of the structure. This is ok for printf, but not for third-party libraries and even our own std.format, as with a distinct type we won't need to parse the whole format string at all, just the individual format specifiers. In other words, a distinct type would make nothrow std.format a much more tractable problem (at least for some cases). An example of this is the built-in associative array, which has a series of fairly intractable problems as a result. Another example is the built-in complex type in D, which turned out to be a bad idea - a much better one is building it as a library type. AFAIR, most of the problems with D's built-in AAs are that they have an extern (C) interface that relies on typeinfo. If they are fully converted to templated library types, the situation would be much better. 
IIRC, one of the blocking issues was that D didn't have autovivification [1] operators, so a library type wouldn't be a complete replacement without additional help from the compiler. So in conclusion, having a distinct library-defined type (in druntime) seems the best way to go to me, as it's more flexible than raw tuples, could allow easy GC-backed conversion to string for script-like code and would offer a superset of the functionality that DIP1027 would offer, while still allowing easy (although not 100% direct) calls to printf. [1]: https://en.wikipedia.org/wiki/Autovivification
Re: DIP 1027---String Interpolation---Format Assessment
On Thursday, 27 February 2020 at 00:20:27 UTC, Walter Bright wrote: On 2/26/2020 3:13 AM, Petar Kirov [ZombineDev] wrote: In all other languages with string interpolation that I'm familiar with, `a` is not passed to the `i` parameter. All rely on a garbage-collected string being generated as an intermediate variable. I'm well aware that allocation is inevitable if we want this behavior. My argument is that this behavior is so ubiquitous that not following it would be surprising to many more people than if D didn't follow C's Usual Arithmetic Conversions rules. For example, Rust not following those conversion rules is considered a good thing, while if D decided to be different from all other languages w.r.t. string interpolation, most newcomers would consider this a bad thing, not elegant and innovative as we are aiming for. I agree with Adam, Steven and others that a string interpolation expression should yield a distinct type and not a tuple. By doing this, we would be able to overload functions so they could accept both strings (which would cause a GC allocation when the argument is an interpolated string) and the new distinct type, in which case the allocation could be avoided.
Re: DIP 1027---String Interpolation---Format Assessment
On Wednesday, 26 February 2020 at 09:45:55 UTC, Walter Bright wrote: On 2/25/2020 1:36 AM, aliak wrote: This may have already been answered in the other threads, but I was just wondering if anyone managed to propose a way to avoid this scenario with DIP1027?

```d
void f(string s, int i = 0);
f(i"hello $a"); // silent unwanted behaviour.
```

? It is lowered to: `f("hello %s", a);` as designed. I don't know what's unwanted about it.

In all other languages with string interpolation that I'm familiar with, `a` is not passed to the `i` parameter.

--- C#

Code:
```csharp
public class Program
{
    public static void f(string s, int i = 21)
    {
        System.Console.WriteLine($"s='{s}' | i='{i}'");
    }

    public static void Main()
    {
        int a = 42;
        f($"hello {a}");
    }
}
```
Output: `s='hello 42' | i='21'`
Try it online: https://repl.it/repls/ZigzagStickyHardware

--- JavaScript

Code:
```javascript
function f(s, i = 21) {
    console.log(`s='${s}' | i='${i}'`);
}

const a = 42;
f(`hello ${a}`);
```
Output: `s='hello 42' | i='21'`
Try it online: https://repl.it/repls/TechnologicalJointDisassembler

--- Python

Code:
```python
def f(s, i = 21):
    print(f"s='{s}' | i='{i}'")

a = 42
f(f"hello {a}")
```
Output: `s='hello 42' | i='21'`
Try it online: https://repl.it/repls/CrookedOutlandishInstructions

--- Ruby

Code:
```ruby
def f(s, i = 21)
  puts "s='#{s}' | i='#{i}'"
end

a = 42
f("hello #{a}")
```
Output: `s='hello 42' | i='21'`
Try it online: https://repl.it/repls/MidnightblueProudAgent

--- Kotlin

Code:
```kotlin
fun f(s: String, i: Int = 21) {
    println("s='$s' | i='$i'")
}

val a = 42
f("hello $a")
```
Output: `s='hello 42' | i='21'`
Try it online: https://repl.it/repls/ImpartialPepperyProducts

--- Dart

Code:
```dart
void f(String s, [int i = 21]) {
  print("s='${s}' | i='${i}'");
}

void main() {
  const a = 42;
  f("hello ${a}");
}
```
Output: `s='hello 42' | i='21'`
Try it online: https://repl.it/repls/AwareSqueakyProlog

--- Swift

Code:
```swift
func f(_ s: String, _ i: Int = 21) {
    print("s='\(s)' | i='\(i)'")
}

let a = 42
f("hello \(a)")
```
Output: `s='hello 42' | i='21'`
Try it online: https://repl.it/repls/MulticoloredCulturedRule

--- Julia

Code:
```julia
function f(s, i = 21)
    print("s='$s' | i='$i'")
end

a = 42
f("hello $a")
```
Output: `s='hello 42' | i='21'`
Try it online: https://repl.it/repls/StupidAcidicDatabases

--- And so on...
Re: String switch is odd using betterC
On Wednesday, 26 February 2020 at 08:32:50 UTC, Abby wrote: On Wednesday, 26 February 2020 at 08:25:00 UTC, Abby wrote: Any idea why? Ok, so this is enough to produce the same result; it seems that there is a problem in string switch when there are more than 6 cases.

```d
extern(C) void main()
{
    auto s = "F";
    final switch(s)
    {
        case "A": break;
        case "B": break;
        case "C": break;
        case "D": break;
        case "E": break;
        case "F": break;
        case "G": break;
    }
}
```

This looks like a possible cause: https://github.com/dlang/druntime/blob/e018a72084e54ecc7466e97c96e4557b96b7f905/src/core/internal/switch_.d#L34
Re: What's opIndexAssign supposed to return ?
On Tuesday, 25 February 2020 at 11:02:40 UTC, wjoe wrote: Let's say I've got 3 overloads of opIndexAssign:

```d
auto opIndexAssign(T t);
auto opIndexAssign(T t, size_t i);
auto opIndexAssign(T t, size_t[2] i);
```

I would assume to return what I would return with opIndex, but I'd rather not act upon assumptions. But if yes, is it supposed to be the newly assigned values or the pre-assignment ones? By value or by reference? And if it's the new stuff, can I just return t? The language manual on operator overloading didn't answer that question and neither did an internet search, which didn't find any useful information. Something unrelated and a heads up about introducing opIndexAssign from 2004.

opIndexAssign is the operator used in the following code:

```d
arr[1] = 8;
```

It returns the element at index 1 (so 8 in this case) by reference. This allows you to do:

```d
(arr[1] = 8)++;
assert(arr[1] == 9);
```

Whether or not you want to support this behavior in your custom data structure is up to you. It's perfectly valid to return the element by value or even return void. Returning void from any custom assignment operator is always a safe choice. It's possible that some algorithms (e.g. in Phobos or third-party libraries) may need op*Assign to return something, but in that unlikely case you'll get a compile-time error, so it will be an easy fix.
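To illustrate the by-reference option described above, here is a minimal sketch with a hypothetical `IntArray` wrapper (the type and its field are made up for the example):

```d
// Returning the stored element by reference from opIndexAssign mirrors the
// behavior of built-in arrays, so chained mutation like (a[1] = 8)++ works.
struct IntArray
{
    private int[] data;

    ref int opIndexAssign(int value, size_t i)
    {
        data[i] = value;
        return data[i]; // by ref: callers can mutate the stored element
    }
}

void main()
{
    auto a = IntArray([0, 0, 0]);
    (a[1] = 8)++;            // increments the element returned by reference
    assert(a.data[1] == 9);  // same-module access to the private field
}
```

Changing the return type to `void` (and `return data[i];` to nothing) would make `(a[1] = 8)++` a compile-time error instead, which is the safe default the reply recommends.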
Re: Two problems with json and lcd
On Wednesday, 19 February 2020 at 08:14:34 UTC, AlphaPurned wrote: The first is std.json. It is broke. Doesn't work with tuples. The change above fixes it by treating tuple as an array(same code). It works fine. Can you post a minimal, but complete program that shows the problems with std.json regarding tuples? If you do we could open a pull request that fixes the problem and also uses the code of your program as a unit test, to both showcase the support for tuples and also prevent regressions in the future.
Re: State of MIPS
On Wednesday, 19 February 2020 at 07:09:02 UTC, April wrote: What's the current state of MIPS compiling for bare metal? Especially the R4300i processor. I see MIPS on both GDC and LDC "partial support/bare metal" lists but them being somewhat vague about it I'm not quite sure which it means and I'm sure by now the processors and instruction sets are different from what they were in 1995. Thanks, April. Unfortunately, the current state is objectively unknown, as MIPS is not among the architectures that we run continuous integration testing on. I suggest trying to run the compiler/druntime/phobos tests on MIPS (either real hardware or an emulator) to see what works at this moment. It is likely that enough of the language would be stable and working correctly for bare metal, but we can't know for sure. You can follow the instructions to cross-compile with LDC: https://wiki.dlang.org/Building_LDC_runtime_libraries And for GDC: https://wiki.dlang.org/GDC_Cross_Compiler If you need specific help with either of those compilers, I suggest asking in their respective sections of the forum/newsgroup.
Re: Two problems with json and lcd
On Tuesday, 18 February 2020 at 18:05:43 UTC, AlphaPurned wrote: json has two issues, it doesn't work with tuple: (isArray!T) goes to (isArray!T || (T.stringof.length > 4 && T.stringof[0..5] == "Tuple")) and right below else { static assert(false, text(`unable to convert type "`, T.Stringof, `" to json`)); } and it used Stringof. This fixes json to work with tuples. Second, LCD gives me the error: error : function `Test.main.rate!(d, "", "").rate` cannot access frame of function `Test.main.__foreachbody1` Not sure the problem, works fine with DMD. I'm simply accessing a variable outside a templated function. I didn't understand your first point, but if I got the gist of your second one, the difference may be due to LDC not yet having implemented this: https://github.com/ldc-developers/ldc/issues/3125
Re: How to declare a virtual member (not a function) in a class
On Tuesday, 18 February 2020 at 12:37:45 UTC, Adnan wrote: I have a base class that has a couple of constant member variables. These variables are abstract, they will only get defined when the derived class gets constructed.

```d
class Person
{
    const string name;
    const int id;
}

class Male : Person
{
    this(string name = "Unnamed Male")
    {
        static int nextID = 0;
        this.id = nextID++;
        this.name = name;
    }
}
```

The compiler restricts me from assigning those two members. How can I get around this?

`const` members must be initialized by the same class that declares them. What you could do is have the abstract Person class declare a constructor (which would initialize the `const` members) and call it from derived-class (such as `Male`) constructors via the `super(arg1, arg2)` syntax. Alternatively, you could define `abstract` accessor functions in the base class and have the derived classes implement them. In D you can use the same syntax to call such functions as if they were fields. (Previously you had to put the @property attribute on such functions, but for the most part that is no longer necessary.)
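A minimal sketch of the first suggestion (the names and default values simply mirror the question's code):

```d
// The base class initializes its own const members in a constructor;
// derived classes forward the values via super(...).
class Person
{
    const string name;
    const int id;

    this(string name, int id)
    {
        this.name = name; // allowed: Person declares these const members
        this.id = id;
    }
}

class Male : Person
{
    this(string name = "Unnamed Male")
    {
        static int nextID = 0;
        super(name, nextID++); // forward to the base-class constructor
    }
}

void main()
{
    auto m = new Male("Bob");
    assert(m.name == "Bob");
    assert(m.id == 0);
}
```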
Re: Alternative to friend functions?
On Tuesday, 18 February 2020 at 12:43:22 UTC, Adnan wrote: What is the alternative to C++'s friend functions in D? module stable_matching; alias FemaleID = int; alias MaleID = int; class Person { string name; int id; } class Male : Person { this(string name = "Unnamed Male") { static int nextID = 0; this.id = nextID++; this.name = name; } } class Female : Person { this(string name = "Unnamed Female") { static int nextID = 0; this.id = nextID++; this.name = name; } } class Husband(uint N) : Male { FemaleID engagedTo = -1; const FemaleID[N] preferences; this(FemaleID[N] preferences) { this.preferences = preferences; } } class Wife(uint N) : Female { FemaleID engagedTo = -1; const MaleID[N] preferences; this(MaleID[N] preferences) { this.preferences = preferences; } } void engage(N)(ref Wife!N wife, ref Husband!N husband) { // Here, I want to access both husband and wife's engagedTo } class MatchPool(uint N) { Husband!N[N] husbands; Wife!N[N] wives; } In D the unit of encapsulation is not the class but the module, and so `private` only restricts access from other modules. If `engage` is declared in the same module as the classes, you should have no problems accessing their private members. If you want to put `engage` in a different module, then you can use the `package` access modifier to allow all modules in a given package to access `package` members.
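A trimmed-down sketch of this (the `N`-sized preference arrays are dropped for brevity): because `engage` lives in the same module as the classes, it can touch their `private` members directly, which is the D counterpart of a C++ friend function.

```d
module stable_matching;

class Husband
{
    private int engagedTo = -1; // private to the *module*, not just the class
}

class Wife
{
    private int engagedTo = -1;
}

// A free function in the same module may access private members,
// so no friend declaration is needed.
void engage(Wife wife, int wifeId, Husband husband, int husbandId)
{
    wife.engagedTo = husbandId;
    husband.engagedTo = wifeId;
}

void main()
{
    auto w = new Wife;
    auto h = new Husband;
    engage(w, 0, h, 1);
    assert(w.engagedTo == 1 && h.engagedTo == 0);
}
```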
Re: DPP: Linker issue with functions implemented in C header files
On Tuesday, 18 February 2020 at 09:20:08 UTC, Andre Pany wrote: Hi Petar, Hi Andre, I'm happy to help :) thank you very much for the explanation and the code sample. Filling the az_span anonymous member was the tricky part, I thought it would not be possible to do so, but you showed me the trick. I wouldn't call it a trick; I was using standard struct literal initialization (the very syntax that DIP1031 proposes to deprecate). For example:

```d
struct Inner { int x, y; }
struct Outer { Inner inner; }

// You can initialize Outer in various ways:
auto o1 = Outer(Inner(1, 2));              // 1)
Outer o2 = { inner: Inner(1, 2) };         // 2)
Outer o3 = { Inner(1, 2) };                // 3)
Outer o4 = { inner: { x: 1, y: 2 } };      // 4)
Outer o5 = { { x: 1, y: 2 } };             // 5)
Outer o6; o6.inner.x = 1; o6.inner.y = 1;  // 6)
```

For POD (plain old data) structs like that, all six variants are equivalent (and of course there are more possible variations). Since there's no `private` protection modifier in C, the only thing C library authors can do is make it inconvenient to access struct fields (by prefixing them with underscores), but they can't really prevent it. For example, without this syntax, in pure C you can initialize a span like this:

```c
char my_string[] = "Hey";
az_span span;
span._internal.ptr = my_string;
span._internal.length = sizeof(my_string) - 1;
span._internal.capacity = sizeof(my_string) - 1;
```

And with almost the same syntax you can do this in D:

```d
string my_string = "Hey";
az_span span;
span._internal.ptr = cast(ubyte*)my_string.ptr; // note: I think this should be safe, because of [1]
span._internal.length = my_string.length;
span._internal.capacity = my_string.length;
```

It's just that the author wanted to prevent accidental bugs by pushing you to use the inline helper functions or macros (which are technically not needed).
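The field assignments can also be wrapped in a small helper. Note that `toSpan` and the struct definitions below are my own illustration: the struct layout is an assumption mirroring az_span.h, and in real code the types would come from the dpp-generated module.

```d
// Stand-ins mirroring the az_span layout discussed above (an assumption;
// the real declarations are produced by dpp from az_span.h).
struct az_span_internal { ubyte* ptr; int length; int capacity; }
struct az_span { az_span_internal _internal; }

// Hypothetical helper: wrap a D string as an az_span without copying.
az_span toSpan(string s)
{
    return az_span(az_span_internal(cast(ubyte*) s.ptr,
                                    cast(int) s.length,
                                    cast(int) s.length));
}

void main()
{
    auto span = "Hey".toSpan;
    assert(span._internal.length == 3);
    assert(span._internal.capacity == 3);
}
```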
[1]: https://github.com/Azure/azure-sdk-for-c/blob/25f8a0228e5f250c02e389f19d88c064c93959c1/sdk/core/core/inc/az_span.h#L22 I will do it like you have proposed, but I had also already created a ticket for the Azure SDK developers: https://github.com/Azure/azure-sdk-for-c/issues/359 There should be a more convenient way to fill an az_span structure. To be honest, I don't think the authors will agree to change this, as putting inline functions in header files is a pretty common practice in both C and C++. There are two benefits to that: 1) Potentially better performance, because the code is easier to inline 2) It's possible to provide header-only libraries (not the case here) that don't require build steps. For reference, here is my dockerfile which does the DPP call and linking: Cool, I'll check it later!

```dockerfile
FROM dlang2/ldc-ubuntu:1.20.0 as ldc

RUN apt-get install -y git libssl-dev uuid-dev libcurl4-openssl-dev curl

RUN curl -OL https://cmake.org/files/v3.12/cmake-3.12.4-Linux-x86_64.sh \
 && mkdir /opt/cmake \
 && sh /cmake-3.12.4-Linux-x86_64.sh --prefix=/opt/cmake --skip-license \
 && ln -s /opt/cmake/bin/cmake /usr/local/bin/cmake

RUN git clone https://github.com/Azure/azure-sdk-for-c.git \
 && cd azure-sdk-for-c \
 && git submodule update --init --recursive

RUN cd azure-sdk-for-c \
 && mkdir build \
 && cd build \
 && cmake ../ \
 && make

RUN apt-get install -y clang-9 libclang-9-dev
RUN ln -s /usr/bin/clang-9 /usr/bin/clang

COPY az_storage_blobs.dpp /tmp/

RUN DFLAGS="-L=-L/usr/lib/llvm-9/lib/" dub run dpp -- --help
RUN DFLAGS="-L=-L/usr/lib/llvm-9/lib/" dub run dpp -- /tmp/az_storage_blobs.dpp \
    --include-path /azure-sdk-for-c/sdk/core/core/inc \
    --include-path /azure-sdk-for-c/sdk/core/core/internal \
    --include-path /azure-sdk-for-c/sdk/storage/blobs/inc \
    --include-path /azure-sdk-for-c/sdk/transport_policies/curl/inc \
    --preprocess-only

ADD blobs_client_example.d /tmp/blobs_client_example.d

RUN ldc2 /tmp/blobs_client_example.d /tmp/az_storage_blobs.d \
    /azure-sdk-for-c/build/sdk/core/core/libaz_core.a \
    /azure-sdk-for-c/build/sdk/storage/blobs/libaz_storage_blobs.a \
    /azure-sdk-for-c/build/sdk/transport_policies/curl/libaz_curl.a \
    -of=/tmp/app
```

Kind regards André Cheers, Petar
Re: DPP: Linker issue with functions implemented in C header files
On Tuesday, 18 February 2020 at 05:41:38 UTC, Andre Pany wrote: Hi, I'm trying to wrap the "Azure SDK for C" using DPP and have the following issue. Functions which are actually implemented in C header files cause linker errors: https://github.com/Azure/azure-sdk-for-c/blob/master/sdk/core/core/inc/az_span.h#L91 Example:

```c
AZ_NODISCARD AZ_INLINE az_span az_span_init(uint8_t * ptr, int32_t length, int32_t capacity)
{
    return (az_span){
        ._internal = {
            .ptr = ptr,
            .length = length,
            .capacity = capacity,
        },
    };
}
```

Error message: /tmp/app.o:az_storage_blobs.d:function _D20blobs_client_example__T19AZ_SPAN_FROM_BUFFERTG4096hZQBdFNbQnZS16az_storage_blobs7az_span: error: undefined reference to 'az_span_init' I do not know C well. Is this the expected behavior, and should I ask the Azure SDK developers to not implement functions within C header files? Kind regards André

I think the problem is that you haven't actually linked in the Azure SDK C library. Dpp translates the header declarations from C to D, but the actual definitions (function bodies) are not part of the process. The executable code for the function definitions should be inside either a static or dynamic library provided by the SDK. From the project's readme file, it looks like they're using CMake as the build system generator (afterwards both make and ninja should be valid choices for building):

```shell
mkdir build
cd build
cmake ../
make
```

In cases like this, it's best to check out the CMakeLists.txt files of the individual sub-projects, like this one: https://github.com/Azure/azure-sdk-for-c/blob/master/sdk/core/core/CMakeLists.txt As you can see, there are several outputs of the build process, among which: - add_library(az_core ...) This defines a library named az_core which can produce either a static (.a on Linux, .lib on Windows) or dynamic library file (.so on Linux, .dll on Windows). (If no configuration is specified, I think it's static by default.) So the final file name would be libaz_core.{a,so} on Linux.
For the .c files to be built, a list of include directories must be specified, where the various .h files are located (containing function and type declarations). This is done like so: target_include_directories(az_core PUBLIC ...) The 'PUBLIC' argument to target_include_directories specifies that if you want to use the library, you need to use the same include directories as those needed for building it. - add_executable(az_core_test ..) This defines an executable build output, which looks like it's only used for testing, so it's not interesting to us, except that it can serve as an example app using the az_core library. --- So in summary, if you want to use the az_core library, you need to: 1. Build it 2. Run Dpp like so: d++ --include-path <dirs from target_include_directories> You will need to repeat the same steps for any other part of the Azure C SDK. TL;DR After I went through all those steps I got a similar linker error for az_http_response_init. After looking for the actual function definition, it turned out that it's not defined in a .c file, but is an inline function that is part of a header file. Searching for az_span_init revealed the same (I could have saved myself some time by reading your message more carefully :D). So, to answer your original question, the problem is that dpp translates only declarations, not function definitions (such as those inline functions). For now, your best course of action is to translate all inline function definitions by hand. Since in C inline functions are mostly short and simple (a better alternative to macros), hopefully that won't be too much work. Also, looking at macros like AZ_SPAN_FROM_STR, there's really very little chance that they could be correctly translated automatically. The things they do are likely not valid even in @system D code (without additional casts), so it's better to write your own D functions by hand anyway.
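As an illustration of translating such an inline function by hand, `az_span_init` could be ported roughly like this. The struct definitions below are stand-ins of my own (assumed to match the layout in az_span.h); in a real build they would come from the dpp-generated module.

```d
// Assumed layout of az_span, mirroring the C header.
struct az_span_internal { ubyte* ptr; int length; int capacity; }
struct az_span { az_span_internal _internal; }

// Hand-written D counterpart of the C inline function az_span_init.
az_span az_span_init(ubyte* ptr, int length, int capacity)
{
    return az_span(az_span_internal(ptr, length, capacity));
}

void main()
{
    ubyte[3] buf = [72, 101, 121]; // "Hey"
    auto span = az_span_init(buf.ptr, 3, 3);
    assert(span._internal.ptr == buf.ptr);
    assert(span._internal.length == 3);
}
```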
Here's what I tried: test.dpp:

```d
#include
#include

import std.stdio;

void main()
{
    char[] resp = "HTTP/1.2 404 We removed the\tpage!\r\n" ~
        "\r\n" ~
        "But there is somebody. :-)".dup;

    az_span response_span = {{
        ptr: cast(ubyte*)resp.ptr,
        length: cast(int)resp.length,
        capacity: cast(int)resp.length
    }};

    az_http_response response;
    az_result result = az_http_response_init(&response, response_span);
    writeln(result);
}
```

```shell
d++ --compiler ldmd2 --include-path ./inc test.dpp ./build/libaz_core.a
```
Re: German D tutorial: HTML5 Anwendung mit GTK3 schreiben
On Friday, 14 February 2020 at 08:44:11 UTC, Petar Kirov [ZombineDev] wrote: On Thursday, 13 February 2020 at 22:48:32 UTC, Andre Pany wrote: Hi, this tutorial shows how GTK3 can be used to build HTML5 applications. http://d-land.sepany.de/tutorials/gui/html5-anwendungen-mit-gtk3-schreiben Best regards Andre Hi Andre, I quickly skimmed through your article and I noticed that you're making a copy of the D-style `string[] args`, so you can guarantee that you have a null-terminated C-style `const char** argv`. You can avoid this by directly accessing the original args that were passed to the C main:

```d
void main()
{
    import core.stdc.stdio : printf;
    import core.runtime : Runtime;

    const args = Runtime.cArgs;
    foreach (i; 0 .. args.argc)
        printf("%s\n", args.argv[i]);
}
```

P.S. pretty interesting combination of GtkD, Broadway and Docker :)
Re: German D tutorial: HTML5 Anwendung mit GTK3 schreiben
On Thursday, 13 February 2020 at 22:48:32 UTC, Andre Pany wrote: Hi, this tutorial shows how GTK3 can be used to build HTML5 applications. http://d-land.sepany.de/tutorials/gui/html5-anwendungen-mit-gtk3-schreiben Best regards Andre Hi Andre, I quickly skimmed through your article and I noticed that you're making a copy of the D-style `string[] args`, so you can guarantee that you have a null-terminated C-style `const char** argv`. You can avoid this by directly accessing the original args that were passed to the C main:

```d
void main()
{
    import core.stdc.stdio : printf;
    import core.runtime : Runtime;

    const args = Runtime.cArgs;
    foreach (i; 0 .. args.argc)
        printf("%s\n", args.argv[i]);
}
```
Re: How to get Code.dlang.org to update the package?
On Wednesday, 12 February 2020 at 12:42:32 UTC, Dukc wrote: I pushed a new release tag on GitHub around two weeks ago, and ordered a manual update at DUB, yet DUB has still not acknowledged the new tag. Is there some requirement for the release tag for it to be recognized? Hi Dukc, I'm not sure which dub package you're referring to, but I'm gonna guess that it's this one: http://code.dlang.org/packages/nuklearbrowser, which corresponds to this github repo: https://github.com/dukc/nuklearbrowser. I think the problem is that your latest tag is 0.0.2, instead of v0.0.2 (https://github.com/dukc/nuklearbrowser/tags). I hope this helps! Cheers, Petar
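To fix an already-pushed tag, you can add a `v`-prefixed tag pointing at the same commit (a sketch; the remote name and version number are placeholders):

```shell
# Create v0.0.2 pointing at the same commit as the existing 0.0.2 tag
git tag v0.0.2 0.0.2

# Then publish it (and optionally remove the old tag):
# git push origin v0.0.2
# git push --delete origin 0.0.2
```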
Re: Building for multiple platforms
On Wednesday, 12 February 2020 at 12:46:23 UTC, Petar Kirov [ZombineDev] wrote: On Wednesday, 12 February 2020 at 08:41:25 UTC, Neils wrote: [...] Since your project is already on GitHub, I think the easiest solution would be to use GitHub Actions [1] + setup-dlang action [2] + upload-release-asset action [3] to automate the whole process. [1]: https://help.github.com/en/actions [2]: https://github.com/mihails-strasuns/setup-dlang [3]: https://github.com/actions/upload-release-asset P.S. Your project looks quite interesting! Best of luck!
Re: Building for multiple platforms
On Wednesday, 12 February 2020 at 08:41:25 UTC, Neils wrote: I maintain an open-source project written in D; I use DUB for building and my compiler backend is DMD. My dub.json file is rather simple: https://github.com/neilsf/XC-BASIC/blob/master/dub.json I offer pre-built binaries for Linux x86, Linux x86_64, Windows and Mac OS. I've only been doing this for a year, so I am still quite a beginner in D, and my workflow when building the project is the following: 1. Launch a VM using VirtualBox 2. dub build 3. Repeat for each platform The above is a painfully slow process. Is there any way to make it simpler and faster? Any suggestions are warmly appreciated. Since your project is already on GitHub, I think the easiest solution would be to use GitHub Actions [1] + setup-dlang action [2] + upload-release-asset action [3] to automate the whole process. [1]: https://help.github.com/en/actions [2]: https://github.com/mihails-strasuns/setup-dlang [3]: https://github.com/actions/upload-release-asset
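As an illustration, a minimal workflow along those lines might look like this. The file name, trigger, action versions and compiler choice are assumptions; consult the actions' own documentation for the exact inputs.

```yaml
# .github/workflows/release.yml (hypothetical sketch)
name: Build release binaries
on:
  push:
    tags: ['v*']
jobs:
  build:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v2
      - uses: mihails-strasuns/setup-dlang@v1
        with:
          compiler: dmd-latest
      - name: Build
        run: dub build --build=release
```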
Re: How do I fix my failed PRs?
On Sunday, 2 February 2020 at 08:54:02 UTC, mark wrote: I've done quite a few small corrections/improvements to the D-tour's English. Almost all have been accepted. However, four have not been accepted, apparently for technical reasons. But I don't understand what's wrong or what I need to do to fix them. (I'm not very knowledgeable about github.) These are the ones that are held up: https://github.com/dlang-tour/english/pull/336 https://github.com/dlang-tour/english/pull/335 https://github.com/dlang-tour/english/pull/328 https://github.com/dlang-tour/english/pull/316 Hi Mark, I will take care of reviewing and merging all of the rest of your pull requests later this week. For the most part, my process is:

1. Check out a pull request locally
2. Rebase its branch on top of the upstream master branch
3. Fix any whitespace issues (quite easy thanks to .editorconfig)
4. Review your changes
5. Review the affected paragraphs as a whole
6. Change the commit message to something more descriptive, for example `[chapter-name]: Change being made` followed by a longer description (see also: https://chris.beams.io/posts/git-commit/)
7. Force-push and auto-merge

Last week I got stuck on step 5 for two of the chapters (classes.md - pr #329 and templates.md - pr #331), as I decided that small fixes wouldn't be sufficient and I started rewriting a few paragraphs from scratch. However, I ran out of time to finish both the rewriting and the review of the rest of your changes. This time I'll try to prioritize merging the easier PRs before going down the rabbit hole of rewriting chapters. Anyway, thanks a lot for your help! I'll try to speed up the process on my side. Cheers, Petar
Re: wc in D: 712 Characters Without a Single Branch
On Tuesday, 28 January 2020 at 21:40:40 UTC, Petar Kirov [ZombineDev] wrote: BTW, while playing with a solution of my own [0] I noticed that both mine and Robert's version return different [... snip] I found the culprit - iswspace. For more info see: https://www.mail-archive.com/bug-coreutils@gnu.org/msg30803.html
Re: wc in D: 712 Characters Without a Single Branch
On Tuesday, 28 January 2020 at 14:01:35 UTC, Mike Parker wrote: [snip] BTW, while playing with a solution of my own [0] I noticed that both mine and Robert's version return different results for the following input [1]: expected: '\u0003\u\u\u5èÆÕL]\u0012|ξ\u001a7«\u00052\u0011(ÐY\n<\u0010\u\u\u\u\u\ue!ßd/ñõì\f:z¦Î¦±ç·÷Í¢Ëß\u00076* \bñùC1ÉUÀé2\u001aÓB' }, To reproduce:

```shell
curl -fsSL https://github.com/ethjs/ethjs-util/raw/e9aede668177b6d1ea62d741ba1c19402bc337b3/src/tests/test.index.js | sed '350q;d' > input
./robert input
# 1 4 190 input
wc -lwm input
# 1 3 190 input
```

[0]:

```d
import std.algorithm : count, splitter;
import std.stdio : File, writefln;
import std.typecons : Yes;

void main(string[] args)
{
    size_t lines, words, bytes;
    foreach (line; args[1].File.byLine(Yes.keepTerminator))
    {
        lines++;
        bytes += line.count;
        words += line.splitter.count;
    }
    writefln!"%u %u %u %s"(lines, words, bytes, args[1]);
}
```

[1]: https://github.com/ethjs/ethjs-util/blob/e9aede668177b6d1ea62d741ba1c19402bc337b3/src/tests/test.index.js#L350
Re: wc in D: 712 Characters Without a Single Branch
On Tuesday, 28 January 2020 at 21:40:40 UTC, Petar Kirov [ZombineDev] wrote: [snip]

```d
import std.algorithm : count, splitter;
import std.stdio : File, writefln;
import std.typecons : Yes;

void main(string[] args)
{
    size_t lines, words, bytes;
    foreach (line; args[1].File.byLine(Yes.keepTerminator))
    {
        lines++;
        bytes += line.count;
        words += line.splitter.count;
    }
    writefln!"%u %u %u %s"(lines, words, bytes, args[1]);
}
```

[1]: https://github.com/ethjs/ethjs-util/blob/e9aede668177b6d1ea62d741ba1c19402bc337b3/src/tests/test.index.js#L350 s/bytes/chars/
Re: lambda alias import
On Friday, 17 January 2020 at 23:04:57 UTC, Petar Kirov [ZombineDev] wrote: [..] *If* you compile both modules .. [..]
Re: lambda alias import
On Friday, 17 January 2020 at 21:40:05 UTC, JN wrote: stuff.d: alias doStuff = () {}; main.d: import stuff; void main() { doStuff(); } DMD throws compile error: Error 42: Symbol Undefined __D5stuff9__lambda3FNaNbNiNfZv Is this expected behavior? It tripped me while trying to use DerelictVulkan :( I think the problem comes from the way you compile and link your code. If you compile both modules together like this, it should work out: dmd -ofresult main.d stuff.d (I'm on the phone, so I can't verify if it works atm)