Re: More zero-initialization optimizations pending in std.experimental.allocator?
On Saturday, 20 October 2018 at 15:10:38 UTC, Nathan S. wrote: are there more zero-initializations that can be optimized in std.experimental.allocator? I looked and identified low-hanging fruit in std.algorithm.mutation's initializeAll & moveEmplace and in std.typecons.RefCounted (PR #6698), and in std.conv.emplaceInitializer (PR #6461). What did you search for to find these? Other opportunities would rely on being able to identify whether it's ever more efficient to write `memset(&x, 0, typeof(x).sizeof)` instead of `x = typeof(x).init`, which seems like the kind of optimization that belongs in the compiler instead. So in which cases is `memset` faster than assignment? Thanks!
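The memset-vs-assignment dispatch discussed above can at least be made explicit in library code with the new trait. A minimal sketch (the helper name `zeroInitOrAssign` is my own, not a Phobos API), assuming a compiler with `__traits(isZeroInit, T)` (2.083+):

```d
import core.stdc.string : memset;

/// Reset `x` to its default value, using a plain memset when `.init` is all zero bits.
void zeroInitOrAssign(T)(ref T x) @trusted
{
    static if (__traits(isZeroInit, T))
        memset(&x, 0, T.sizeof); // all-zero .init: a block fill suffices
    else
        x = T.init;              // non-zero .init (e.g. float.nan, char 0xFF)
}

unittest
{
    int i = 42;
    zeroInitOrAssign(i); // int is zero-initialized, so this takes the memset path
    assert(i == 0);

    char c = 'a';
    zeroInitOrAssign(c); // char.init is 0xFF, so this takes the assignment path
    assert(c == char.init);
}
```

Whether the memset path is actually faster for a single small object is exactly the open question; for large arrays of zero-init types a block fill is the usual win.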
More zero-initialization optimizations pending in std.experimental.allocator?
Now that https://github.com/dlang/phobos/pull/6411 has been merged and DMD stable will soon have the new __traits(isZeroInit, T) described at https://dlang.org/changelog/2.083.0.html#isZeroInit, are there more zero-initializations that can be optimized in std.experimental.allocator?
Re: Using a development branch of druntime+phobos with ldc
On Wednesday, 10 October 2018 at 10:06:36 UTC, kinke wrote: LDC has its own forks of druntime and Phobos, with numerous required adaptations. So you'd need to apply your patches to those forks & build the libs (druntime and Phobos are separate libs for LDC), e.g., with the included ldc-build-runtime tool, which makes this painless: https://wiki.dlang.org/Building_LDC_runtime_libraries The Wiki page also shows how to link those libs instead of the shipped-with ones. Thanks.
Using a development branch of druntime+phobos with ldc
I'm experimenting with a new GC at https://github.com/nordlow/druntime/blob/fastalloc-gc/src/gc/impl/fastalloc/gc.d in my druntime branch fastalloc-gc. I've found a way to benchmark it using dmd as outlined at https://forum.dlang.org/post/zjxycchqrnxplkrlm...@forum.dlang.org but what about rebuilding druntime+phobos with ldc and linking with that specific libphobos.so when compiling my benchmarking app with ldc? Is it possible? If so, what's the preferred way?
Most Effective way of developing a new GC for D
I'm gonna play around with creating a GC that alleviates some of the issues described in https://olshansky.me/gc/runtime/dlang/2017/06/14/inside-d-gc.html What's the most effective way of incrementally developing a new pluggable GC for druntime with regards to preventing really-hard-to-find bugs? I'm aware of the run-time flag added in 2.072: https://dlang.org/changelog/2.072.0.html#gc-runtimeswitch-added Is it inevitable to rebuild druntime every time I make an update to a new GC? If so, what's the preferred way of selecting a modified druntime in a standard installation of dmd on a Linux system (Ubuntu 18.04 in my case)? Will Digger make things easier?
Progress of Project Blizzard
Has there been any progress on Project Blizzard? Is there some working code available other than the snippets presented in Alexandru's talk at DConf 2018? I'm also curious why the allocation in the following code fails with exit code -11 with both dmd and ldc on Ubuntu 18.04 x64:

import std.experimental.allocator;
import std.experimental.allocator.building_blocks : Segregator;
import std.experimental.allocator.building_blocks.ascending_page_allocator;
import std.experimental.allocator.building_blocks.aligned_block_list;
import std.experimental.allocator.building_blocks.bitmapped_block;

void benchmarkBlizzardSafeAllocator()
{
    alias SafeAllocator = Segregator!(16,
                                      AlignedBlockList!(BitmappedBlock!16, AscendingPageAllocator*, 1 << 21),
                                      AscendingPageAllocator*);
    SafeAllocator allocator;
    int* i = allocator.make!int(32); // TODO: this fails with exit code -11
}

taken directly from the talk. Is it because I'm using `BitmappedBlock` instead of `SafeBitmappedBlock`, which I couldn't find in Phobos even though Alexandru's talk assumes its existence in Phobos...
Compile-time branching on type size between segregator members during Segregator.allocate()
After looking at https://www.youtube.com/watch?v=kaA3HPgowwY and std.experimental.allocator.building_blocks.segregator.Segregator I wonder if Segregator.allocate() is missing an important optimization for calls to someSegregatorInstance.make!T(), where the choice of segregator member can be made at _compile time_ instead of at run time because `T.sizeof` is known at compile time. If so, what about adding an optional member to the allocator API, say `void[] allocate(size_t s)()`, that realizes this idea?
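The idea can be prototyped outside Phobos. Here is a rough sketch (the names `CTSegregator` and `MallocBlock` are my own toy stand-ins, not Phobos API) of a segregator with both the run-time `allocate` and the proposed templated overload whose branch is resolved statically:

```d
import core.stdc.stdlib : malloc;

// Toy allocator, only to make the sketch self-contained.
struct MallocBlock
{
    void[] allocate(size_t s)
    {
        auto p = malloc(s);
        return p is null ? null : p[0 .. s];
    }
}

struct CTSegregator(size_t threshold, Small, Large)
{
    Small small;
    Large large;

    // Run-time dispatch, as Segregator.allocate does today.
    void[] allocate(size_t s)
    {
        return s <= threshold ? small.allocate(s) : large.allocate(s);
    }

    // Proposed compile-time overload: the size branch is resolved statically,
    // which is possible whenever the caller (e.g. make!T) knows T.sizeof.
    void[] allocate(size_t s)()
    {
        static if (s <= threshold)
            return small.allocate(s);
        else
            return large.allocate(s);
    }
}

unittest
{
    CTSegregator!(16, MallocBlock, MallocBlock) a;
    auto b0 = a.allocate!(int.sizeof)(); // branch chosen at compile time
    auto b1 = a.allocate(64);            // branch chosen at run time
    assert(b0.length == int.sizeof);
    assert(b1.length == 64);
}
```

The compile-time overload removes one comparison per allocation and, more importantly, lets the optimizer inline straight into the chosen member allocator.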
Re: Small @nogc experience report
On Friday, 7 September 2018 at 17:01:09 UTC, Meta wrote: So it seems that it's never worked. Looking at the implementation, it uses a std.container.BinaryHeap, so it'd require a small rewrite to work with @nogc. AFAICT, extending std.container with support for specifying your own @nogc (malloc-based) allocators is one way of making `topNCopy` not use the GC.
Re: One awesome GC feature we will use in Mir!
On Tuesday, 18 September 2018 at 14:23:44 UTC, 9il wrote: I just remember that D's GC has NO_SCAN [1] attribute! I thought D libraries like Mir and Lubeck only had to care about when to call GC.addRange after allocations that contain pointers to GC-backed storage and GC.removeRange before their corresponding deallocations. But that's perhaps only when using non-GC-backed allocators (not using new), right?
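Right; `new`-allocated memory is scanned automatically, so addRange/removeRange only matter for blocks obtained outside the GC. A minimal sketch of the pattern I had in mind, keeping GC references alive inside a malloc'd block (the `Node`/`makeNode` names are mine, for illustration):

```d
import core.memory : GC;
import core.stdc.stdlib : malloc, free;

struct Node
{
    string payload; // a GC-managed slice stored in non-GC memory
}

Node* makeNode(string s)
{
    auto p = cast(Node*) malloc(Node.sizeof);
    GC.addRange(p, Node.sizeof); // tell the GC to scan this malloc'd block
    p.payload = s;               // without addRange, s's storage could be collected
    return p;
}

void freeNode(Node* p)
{
    GC.removeRange(p); // unregister before the memory is reused
    free(p);
}

unittest
{
    auto n = makeNode("hello");
    assert(n.payload == "hello");
    freeNode(n);
}
```

With GC-allocated (`new`) storage neither call is needed, which matches the "only when using non-GC-backed allocators" reading above.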
Re: D's policy on hidden memory allocations and nothrow @nogc
On Wednesday, 5 September 2018 at 21:06:07 UTC, Adam D. Ruppe wrote: It doesn't affect @nogc because the function above will throw a statically-allocated object instead of creating a new one (if it is out of memory, where would it allocate a new one anyway?). It doesn't affect nothrow because it is considered a fatal Error instead of a recoverable Exception. And how does this relate to instead using `assert`s and DIP-1008? assert works by similar rules and is thus unaffected by those things too. Thanks!
D's policy on hidden memory allocations and nothrow @nogc
After having read up on Zig's [1] policy for memory management, which basically means no syntactically hidden memory allocations, I wonder if D has something similar. Is the practice that _all_ containers and GC allocations should throw a core.exception.OutOfMemoryError on out-of-memory? If so, should all algorithms that potentially allocate memory be non-`nothrow` and, in turn, non-`@nogc`? And how does this relate to instead using `assert`s and DIP-1008? [1]: https://ziglang.org/
Re: C++ Expected converted to idiomatic D
On Tuesday, 28 August 2018 at 10:18:29 UTC, John Colvin wrote: I get the feeling from the talk that Andrei has some opinions about how it should be done that aren't completely in line with what has been proposed for the C++ standard. Anyhow my implementation at https://github.com/nordlow/phobos-next/blob/master/src/expected.d should match his C++ code in the lecture, AFAICT. My code doesn't depend on other modules and compiles very fast (~46 ms on my machine) (with DMD -debug -unittest).
C++ Expected converted to idiomatic D
In https://www.youtube.com/watch?v=nVzgkepAg5Y Andrei describes his proposal for an STL `Expected`, planned to be included in C++20. Has anybody converted the C++ proposal to idiomatic D yet? Hopefully without the pointer legacy which unfortunately was allowed into `std::optional`. Andrei claims we should use it as the return type for non-throwing variants of parse() and to() in the works at https://github.com/dlang/phobos/pull/6665
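For reference, the core of such a type is small in D. Here is a minimal sketch of an Expected-style wrapper (the names `success`/`failure` and the layout are my own, not Andrei's proposal or the C++ paper):

```d
/// Holds either a value of type `T` or an error of type `E`.
struct Expected(T, E)
{
    private union
    {
        T _value;
        E _error;
    }
    private bool _hasValue;

    static Expected success(T v)
    {
        Expected r;
        r._value = v;
        r._hasValue = true;
        return r;
    }

    static Expected failure(E e)
    {
        Expected r;
        r._error = e;
        r._hasValue = false;
        return r;
    }

    bool hasValue() const { return _hasValue; }

    T value() const
    {
        assert(_hasValue, "no value present");
        return _value;
    }

    E error() const
    {
        assert(!_hasValue, "no error present");
        return _error;
    }
}

unittest
{
    alias R = Expected!(int, string);
    auto ok = R.success(42);
    assert(ok.hasValue && ok.value == 42);
    auto bad = R.failure("parse failure");
    assert(!bad.hasValue && bad.error == "parse failure");
}
```

The real design questions (copy/move semantics, `@safe` access to the union, monadic `map`/`andThen` helpers) are where the C++ and D versions would diverge.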
Re: std.experimental & http://jemalloc.net/
On Friday, 3 August 2018 at 15:23:59 UTC, Robert M. Münch wrote: Has anyone already experimented with the jemalloc [1] allocator and D? [1] http://jemalloc.net/ Here are some valuable insights on the matter: https://stackoverflow.com/questions/13027475/cpu-and-memory-usage-of-jemalloc-as-compared-to-glibc-malloc As always...it all depends...
Re: std.experimental & http://jemalloc.net/
On Friday, 3 August 2018 at 15:23:59 UTC, Robert M. Münch wrote: Has anyone already experimented with the jemalloc [1] allocator and D? [1] http://jemalloc.net/ You should also check out recent progress in glibc's default allocator.
Re: dmd optimizer now converted to D!
On Tuesday, 3 July 2018 at 21:57:07 UTC, Walter Bright wrote: A small, but important milestone has been achieved. Nice!
Re: D vs C++11
On Friday, 2 November 2012 at 21:53:06 UTC, Walter Bright wrote: No ranges. No purity. No immutability. No modules. No dynamic closures. No mixins. Little CTFE. No slicing. No delegates. No shared. No template symbolic arguments. No template string arguments. No alias this. And tens of more fundamental improvements...
Re: Disappointing performance from DMD/Phobos
On Wednesday, 27 June 2018 at 06:47:46 UTC, Manu wrote: This is some seriously good news for GDC. Awesome stuff guys! Agreed!
Re: Phobos' std.conv.to-conversion from enum to string doesn't scale beyond hundreds of enumerators
On Monday, 25 June 2018 at 07:43:53 UTC, Per Nordlöw wrote: On Monday, 25 June 2018 at 00:35:40 UTC, Jonathan M Davis wrote: Or if you want it to stay an AliasSeq, then just use Alias or AliasSeq on it. e.g. alias members = AliasSeq!(__traits(allMembers, E)); Thanks! Should we prefer this over enum members = [__traits(allMembers, E)]; ? I tested on a really big enum: alias members = AliasSeq!(__traits(allMembers, E)); is faster. :)
Re: Phobos' std.conv.to-conversion from enum to string doesn't scale beyond hundreds of enumerators
On Monday, 25 June 2018 at 00:35:40 UTC, Jonathan M Davis wrote: Or if you want it to stay an AliasSeq, then just use Alias or AliasSeq on it. e.g. alias members = AliasSeq!(__traits(allMembers, E)); Thanks! Should we prefer this over enum members = [__traits(allMembers, E)]; ?
Re: Phobos' std.conv.to-conversion from enum to string doesn't scale beyond hundreds of enumerators
On Sunday, 24 June 2018 at 23:53:09 UTC, Timoses wrote: enum members = [__traits(allMembers, E)]; seems to work

Great! Now becomes:

@safe:

/** Enumeration wrapper that uses optimized conversion to string (via `toString`
 * member).
 */
struct Enum(E)
if (is(E == enum))
{
    @property string toString() @safe pure nothrow @nogc
    {
        enum members = [__traits(allMembers, E)];
        final switch (_enum)
        {
            static foreach (index, member; members)
            {
                static if (index == 0 ||
                           (__traits(getMember, E, members[index - 1]) !=
                            __traits(getMember, E, member)))
                {
                    case __traits(getMember, E, member):
                        return member;
                }
            }
        }
    }

    E _enum;            // the wrapped enum
    alias _enum this;
}
Re: Phobos' std.conv.to-conversion from enum to string doesn't scale beyond hundreds of enumerators
On Sunday, 24 June 2018 at 23:21:44 UTC, Steven Schveighoffer wrote:

@property string toString() @safe pure nothrow @nogc
{
    final switch (_enum)
    {
        static foreach (index, member; __traits(allMembers, E))
        {
            static if (index == 0 ||
                       (__traits(getMember, E, __traits(allMembers, E)[index - 1]) !=
                        __traits(getMember, E, member)))
            {
                case __traits(getMember, E, member):
                    return member;
            }
        }
    }
}
E _enum; // the wrapped enum
alias _enum this;
}

Provided that __traits(allMembers, E) is a cheap operation, as it's called once for every enumerator. I could hoist it out of the loop, but if I do:

@property string toString() @safe pure nothrow @nogc
{
    final switch (_enum)
    {
        enum members = __traits(allMembers, E);
        static foreach (index, member; __traits(allMembers, E))
        {
            static if (index == 0 ||
                       (__traits(getMember, E, members[index - 1]) !=
                        __traits(getMember, E, member)))
            {
                case __traits(getMember, E, member):
                    return member;
            }
        }
    }
}

the compiler complains:

enum_ex.d(19,29): Error: expression expected as second argument of __traits `getMember`

Is __traits(allMembers, ...) cached by the compiler?
Re: Phobos' std.conv.to-conversion from enum to string doesn't scale beyond hundreds of enumerators
On Sunday, 24 June 2018 at 21:47:14 UTC, Per Nordlöw wrote: Further, it just struck me that we can generalize my fast solution to include enumerations with enumerator aliases that are defined directly after its original enumerator by checking with a `static if` if the current enumerator value equals the previous then we skip it. I'm gonna post the solution here after some hacking.

Solution:

@safe:

/** Enumeration wrapper that uses optimized conversion to string (via `toString`
 * member).
 */
struct Enum(E)
if (is(E == enum))
{
    @property string toString() @safe pure nothrow @nogc
    {
        final switch (_enum)
        {
            static foreach (index, member; __traits(allMembers, E))
            {
                static if (index == 0 ||
                           (__traits(getMember, E, __traits(allMembers, E)[index - 1]) !=
                            __traits(getMember, E, member)))
                {
                    case __traits(getMember, E, member):
                        return member;
                }
            }
        }
    }

    E _enum;            // the wrapped enum
    alias _enum this;
}

@safe pure unittest
{
    import std.conv : to;
    enum X { a, b, _b = b } // enumerator alias
    alias EnumX = Enum!X;
    assert(EnumX(X.a).to!string == "a");
    assert(EnumX(X.b).to!string == "b");
    assert(EnumX(X._b).to!string == "b");
}
Re: Phobos' std.conv.to-conversion from enum to string doesn't scale beyond hundreds of enumerators
On Sunday, 24 June 2018 at 15:46:58 UTC, Per Nordlöw wrote: I would like to see a new trait named, say, `primaryMembers` or `nonAliasMembers` that returns exactly what the switch needs. Alternatively, we could define a new `__traits(isAlias, enumeratorSymbol)` that evaluates to true for enumerator aliases and use that to static-if-filter inside the static foreach loop. Comments on that?
Re: Phobos' std.conv.to-conversion from enum to string doesn't scale beyond hundreds of enumerators
On Sunday, 24 June 2018 at 21:47:14 UTC, Per Nordlöw wrote: Yes, I thought about that too, but the problem is that std.conv.to is used in std.stdio and I don't want to have to remember to always write writeln("Some text:", x.to!string); or rather writeln("Some text:", x.toString);
Re: Phobos' std.conv.to-conversion from enum to string doesn't scale beyond hundreds of enumerators
On Sunday, 24 June 2018 at 17:23:54 UTC, Steven Schveighoffer wrote:

static if (__traits(compiles, fastEnumToString(val)))
    return fastEnumToString(val);
else
    return slowEnumToString(val); // checks for duplicates

Should eliminate the issues, because it's not going to compile the slow version if the fast version can work. -Steve

Yes, I thought about that too, but the problem is that std.conv.to is used in std.stdio and I don't want to have to remember to always write writeln("Some text:", x.to!string); instead of writeln("Some text:", x); for some enum instance `x`. I'm gonna hack up another solution:

struct Enum(E)
if (is(E == enum))
{
    @property string toString() @safe pure nothrow @nogc
    {
        // fast implementation
    }
    E _enum;
    alias _enum this;
}

Further, it just struck me that we can generalize my fast solution to include enumerations with enumerator aliases that are defined directly after their original enumerator: a `static if` checks whether the current enumerator value equals the previous one and, if so, skips it. I'm gonna post the solution here after some hacking.
Re: Phobos' std.conv.to-conversion from enum to string doesn't scale beyond hundreds of enumerators
On Friday, 22 June 2018 at 20:56:58 UTC, Stefan Koch wrote: How will that perform in CTFE? I'm concerned about swapping values making it allocate new arrays all over the place. What about creating a bit-array with the size of the enumerator count and use that to detect duplicates? How well would a mutating bitarray play in CTFE?
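As a quick sanity check on the CTFE question: mutating an array in place during compile-time evaluation works fine today. A tiny sketch (using a bool array as a stand-in for a real bit-array):

```d
// Mark every even index. Forcing the call through `enum` evaluates it
// entirely at compile time, exercising in-place mutation under CTFE.
bool[] markEven(size_t n)
{
    auto seen = new bool[n];     // one allocation up front
    foreach (i; 0 .. n)
        if (i % 2 == 0)
            seen[i] = true;      // in-place mutation, no reallocation
    return seen;
}

enum marks = markEven(4); // evaluated at compile time
static assert(marks == [true, false, true, false]);

unittest
{
    assert(markEven(4) == [true, false, true, false]); // same result at run time
}
```

So duplicate detection via a mark array needs only one allocation per call, unlike sorting-based approaches that may churn intermediate arrays; how newCTFE handles the element stores is the remaining question.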
Re: Phobos' std.conv.to-conversion from enum to string doesn't scale beyond hundreds of enumerators
On Friday, 22 June 2018 at 00:50:05 UTC, Steven Schveighoffer wrote: The sucky thing is, the compiler is *already* doing a sort on the items in the switch, and *already* doing the duplicate check. It would be cool to be able to leverage this mechanism to avoid the library solution, but I don't know how we can do that, as the semantics for switch are well defined, and there's no other way to hook this builtin functionality. I would like to see a new trait named, say, `primaryMembers` or `nonAliasMembers` that returns exactly what the switch needs. I believe this is motivated by the fact that this is a serious issue; without the user noticing, the use of enums combined with .to!string or io sucks up more and more RAM in the compiler as new members are added. If enough members (hundreds) are added you can run out of RAM, which is what happened to me. What do you think about that idea? We should plot compilation time and memory usage against enumerator count so we can reason about the severity of this issue.
Phobos' std.conv.to-conversion from enum to string doesn't scale beyond hundreds of enumerators
I've discovered the annoying fact that std.conv.to doesn't scale for enum-to-string conversion when the enum has hundreds of members. This is because of a call to `NoDuplicates`, which has (at least) O(n*log(n)) time and space complexity. So I've come up with

/** Faster implementation of `std.conv.to`. */
string toString(T)(T value) @safe pure nothrow @nogc
if (is(T == enum))
{
    final switch (value)
    {
        static foreach (member; __traits(allMembers, T))
        {
            case __traits(getMember, T, member):
                return member;
        }
    }
}

///
@safe pure nothrow @nogc unittest
{
    enum E
    {
        unknown,
        x,
        y,
        z,
    }
    assert(E.x.toString == "x");
    assert(E.y.toString == "y");
    assert(E.z.toString == "z");
}

The question now is: how do I make this support enums with enumerator aliases without needing to call `NoDuplicates`? For instance, this should work:

///
@safe pure nothrow @nogc unittest
{
    enum E
    {
        unknown,
        x,
        y,
        z,
        z_ = z,
    }
    assert(E.x.toString == "x");
    assert(E.y.toString == "y");
    assert(E.z.toString == "z");
    assert(E.z_.toString == "z");
}
Re: Cannot hash a std.datetime.Date
On Tuesday, 19 June 2018 at 02:15:46 UTC, Seb wrote: I opened an issue for you: https://issues.dlang.org/show_bug.cgi?id=19005 The PR that introduced this regression was https://github.com/dlang/druntime/pull/2200 Thank you so much again, Seb!
Cannot hash a std.datetime.Date
The following

unittest
{
    import std.datetime.date : Date;
    Date date;
    import core.internal.hash : hashOf;
    auto hash = date.hashOf;
}

errors (with DMD v2.081.0-beta.1) as

/usr/include/dmd/druntime/import/core/internal/convert.d(619,101): Error: template `core.internal.convert.toUbyte` cannot deduce function from argument types `!()(Month)`, candidates are:
/usr/include/dmd/druntime/import/core/internal/convert.d(14,16): `core.internal.convert.toUbyte(T)(ref T val) if (is(Unqual!T == float) || is(Unqual!T == double) || is(Unqual!T == real) || is(Unqual!T == ifloat) || is(Unqual!T == idouble) || is(Unqual!T == ireal))`
/usr/include/dmd/druntime/import/core/internal/convert.d(479,16): `core.internal.convert.toUbyte(T)(T[] arr) if (T.sizeof == 1)`
/usr/include/dmd/druntime/import/core/internal/convert.d(485,16): `core.internal.convert.toUbyte(T)(T[] arr) if (is(typeof(toUbyte(arr[0])) == const(ubyte)[]) && (T.sizeof > 1))`
/usr/include/dmd/druntime/import/core/internal/convert.d(503,16): `core.internal.convert.toUbyte(T)(ref T val) if (__traits(isIntegral, T) && !is(T == enum))`
/usr/include/dmd/druntime/import/core/internal/convert.d(537,16): `core.internal.convert.toUbyte(T)(ref T val) if (is(Unqual!T == cfloat) || is(Unqual!T == cdouble) || is(Unqual!T == creal))`
/usr/include/dmd/druntime/import/core/internal/convert.d(619,101): ... (2 more, -v to show) ...
/usr/include/dmd/druntime/import/core/internal/hash.d(145,37): Error: template instance `core.internal.convert.toUbyte!(Date)` error instantiating
foo.d(6,21): instantiated from here: `hashOf!(Date)`

but not with 2.080.1. A regression?
Re: Safe and performant actor model in D
On Thursday, 14 June 2018 at 13:24:06 UTC, Atila Neves wrote: I need to think about how to do isolated properly. I'll look at vibe.d for inspiration. Thanks. I'll have a look when you have something working.
Safe and performant actor model in D
I've read up on Pony [1] and realized that it currently has a superior implementation of the actor model when it comes to combining safety, efficiency and memory-management determinism (thread-local reference-counting GC with consensus guarantees).

What libraries do we have at our disposal in D (including code.dlang.org) for implementing task-based parallelism that is close to Pony's solution with regards to

1. @safely sending isolated (transitively unique references to) messages between actors (tasks) without the need for copying. Vibe.d has, for instance, `makeIsolated` [2] that serves this purpose.

2. a task-scheduler that can move blocked tasks between threads. Yes, I know, this has been discussed many times before... I'm checking to see if there are any updates.

3. could we make such a solution GC-free by requiring immutable data inside isolated messages to be unique references (not currently implicitly shared) as well, using, for instance, https://dlang.org/library/std/typecons/unique.html. I'm thinking of a trait named something like `makeIsolatedUnshared` that checks these restrictions.

[1] https://www.ponylang.org/
[2] http://vibed.org/api/vibe.core.concurrency/makeIsolated

What assistance can/could we currently/in the future get from D's type-system to verify the correctness of these paradigms?
Re: newCTFE: perliminary delegate support is in!
On Wednesday, 13 June 2018 at 05:57:31 UTC, Stefan Koch wrote: Good day ladies and gentleman, it is my distinct please to announce that a new feature just landed in newCTFE. !!! DELEGATES !!! Nice!
grain: mir, LLVM, GPU, CUDA, dynamic neural networks
I just discovered https://github.com/ShigekiKarita/grain which seems like a very ambitious and active project for making dynamic neural networks run on the GPU using D on top of mir and CUDA. Are there any long-term goals for this project beyond the title? It would be great if someone (the author?) could write a little background (tutorial) on dynamic neural networks that explains the details in the examples at https://github.com/ShigekiKarita/grain/tree/master/example Further, could parts of grain be refactored out into a generic CUDA library for use in domains other than dynamic neural networks?
Re: Migrating an existing more modern GC to D's gc.d
On Thursday, 24 May 2018 at 13:13:03 UTC, Steven Schveighoffer wrote: Really though, the issues with D's GC are partly to blame from the language itself rather than the GC design. Having certain aspects of the language precludes certain GCs. Java as a language is much more conducive to more advanced GC designs. I'm hoping for a tough long-term deprecation process that alleviates these issues even though it will cause big breakage. I believe it will be worth it.
Re: DIP-1000 scope analysis doesn't kick in for small-size-optimized GC-string
On Thursday, 19 April 2018 at 17:36:19 UTC, Per Nordlöw wrote: However, I'm having problems with making lifetime analysis via scope kick in. For instance, the function `f` at https://github.com/nordlow/phobos-next/blob/master/src/sso_string.d#L219 Note that qualifying `opSlice` as `inout` doesn't help.
DIP-1000 scope analysis doesn't kick in for small-size-optimized GC-string
I'm now satisfied with my SSOString at https://github.com/nordlow/phobos-next/blob/master/src/sso_string.d However, I'm having problems with making lifetime analysis via `scope` kick in. For instance, the function `f` at https://github.com/nordlow/phobos-next/blob/master/src/sso_string.d#L219 shouldn't compile, but it does. Have I missed something or is this yet another corner case that requires a new bugzilla issue? Walter? Andrei?
Re: D vs nim
On Friday, 10 April 2015 at 18:52:24 UTC, weaselcat wrote: P.S., the example on the language's frontpage is cool! http://nim-lang.org/ Why should I be excited? Nim is the only language that leverages automated proof technology to perform a disjoint check for your parallel code. Working on disjoint data means no locking is required and yet data races are impossible: I believe Rust's rayon [1] can do this too... [1] https://github.com/rayon-rs/rayon
Re: Reddit Post: Overview of the Efficient Programming Languages (v.3)
On Tuesday, 17 April 2018 at 15:10:35 UTC, Nerve wrote: Overview of the Efficient Programming Languages (v.3): C++, Rust, Swift, Scala, Dlang, Kotlin, Nim, Julia, Golang, Python. http://reddit.com/r/programming/comments/8cw2xn/overview_of_the_efficient_programming_languages/ Nice overview. Thanks.
Re: Small Buffer Optimization for string and friends
On Sunday, 8 April 2012 at 05:56:36 UTC, Andrei Alexandrescu wrote: Walter and I discussed today about using the small string optimization in string and other arrays of immutable small objects.

I put together SSOString at https://github.com/nordlow/phobos-next/blob/967eb1088fbfab8be5ccd811b66e7b5171b46acf/src/sso_string.d that uses small-string-optimization on top of a normal D string (slice). I'm satisfied with everything except that -dip1000 doesn't forbid `f` from compiling. I also don't understand why `x[0]` cannot be returned by ref in the function `g`. Comments are welcome. Contents of sso_string.d follows:

module sso_string;

/** Small-size-optimized string.
 *
 * Store on the stack if constructed with <= `smallCapacity` number of
 * characters, otherwise on the GC heap.
 */
struct SSOString
{
    private alias E = immutable(char); // immutable element type
    private alias ME = char;           // mutable element type

pure nothrow:

    /** Construct from `elements`, with potential GC-allocation (iff
     * `elements.length > smallCapacity`).
     */
    this()(scope ME[] elements) @trusted // template-lazy
    {
        if (elements.length <= smallCapacity)
        {
            small.data[0 .. elements.length] = elements;
            small.length = cast(typeof(small.length))(2*elements.length);
        }
        else
        {
            large = elements.idup; // GC-allocate
            raw.length *= 2;       // shift up
            raw.length |= 1;       // tag as large
        }
    }

@nogc:

    // TODO add @nogc overload to construct from mutable static array <= smallCapacity

    /** Construct from `elements` without any kind of heap allocation. */
    this()(immutable(E)[] elements) @trusted // template-lazy
    {
        if (elements.length <= smallCapacity)
        {
            small.data[0 .. elements.length] = elements;
            small.length = cast(typeof(small.length))(2*elements.length);
        }
        else
        {
            large = elements; // @nogc
            raw.length *= 2;  // shift up
            raw.length |= 1;  // tag as large
        }
    }

    @property size_t length() const @trusted
    {
        if (isLarge)
        {
            return large.length/2; // skip first bit
        }
        else
        {
            return small.length/2; // skip first bit
        }
    }

    scope ref inout(E) opIndex(size_t index) inout return @trusted
    {
        return opSlice()[index]; // automatic range checking
    }

    scope inout(E)[] opSlice() inout return @trusted
    {
        if (isLarge)
        {
            union RawLarge
            {
                Raw raw;
                Large large;
            }
            RawLarge copy = void;
            copy.large = cast(Large)large;
            copy.raw.length /= 2; // adjust length
            return copy.large;
        }
        else
        {
            return small.data[0 .. small.length/2]; // scoped
        }
    }

    private @property bool isLarge() const @trusted
    {
        return large.length & 1; // first bit discriminates small from large
    }

private:
    struct Raw // same memory layout as `E[]`
    {
        size_t length; // can be bit-fiddled without GC allocation
        E* ptr;
    }

    alias Large = E[];

    enum smallCapacity = Large.sizeof - Small.length.sizeof;
    static assert(smallCapacity > 0, "No room for small elements for E being " ~ E.stringof);

    version(LittleEndian) // see: http://forum.dlang.org/posting/zifyahfohbwavwkwbgmw
    {
        struct Small
        {
            ubyte length;
            E[smallCapacity] data;
        }
    }
    else
    {
        static assert(0, "BigEndian support and test");
    }

    union
    {
        Raw raw;
        Large large;
        Small small;
    }
}

///
@safe pure nothrow @nogc unittest
{
    import container_traits : mustAddGCRange;
    alias S = SSOString;

    static assert(S.sizeof == 2*size_t.sizeof); // two words
    static assert(S.smallCapacity == 15);
    static assert(mustAddGCRange!S); // `Large large.ptr` must be scanned

    auto s0 = S.init;
    assert(s0.length == 0);
    assert(!s0.isLarge);
    assert(s0[] == []);

    const s7 = S("0123456");
    static assert(is(typeof(s7[]) == string));
    assert(!s7.isLarge);
    assert(s7.length == 7);
    assert(s7[] == "0123456");
    // TODO assert(s7[0 .. 4] == "0123");

    const s15 = S("012345678901234");
    static assert(is(typeof(s15[]) == string));
    assert(!s15.isLarge);
    assert(s15.length == 15);
    assert(s15[] == "012345678901234");

    const s16 = S("0123456789abcdef");
    static assert(is(typeof(s16[]) == string));
    assert(s16.isLarge);
    assert(s16.length == 16);
    assert(s16[] == "0123456789abcdef");
    assert(s16[0] == '0');
    assert(s16[10] == 'a');
    assert(s16[15] == 'f');
    // TODO static assert(!__traits(compiles, { auto _ = S((char[]).in
Re: Small Buffer Optimization for string and friends
On Sunday, 8 April 2012 at 05:56:36 UTC, Andrei Alexandrescu wrote: Andrei Has anybody put together code that implements this idea in a library? That is, small strings of up to 15 bytes unioned with a `string`.
Is sorted using SIMD instructions
Neither GCC, LLVM nor ICC can auto-vectorize (and use SIMD for) the seemingly simple function

bool is_sorted(const int32_t* input, size_t n)
{
    if (n < 2)
    {
        return true;
    }
    for (size_t i = 0; i < n - 1; i++)
    {
        if (input[i] > input[i + 1])
            return false;
    }
    return true;
}

Can D's compilers do better? See http://0x80.pl/notesen/2018-04-11-simd-is-sorted.html
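For anyone wanting to try this with dmd, ldc or gdc, here is a direct D translation of the C function above (my own port, for experimentation):

```d
// Straight port of the C `is_sorted` above; slices replace the pointer+length pair.
bool isSorted(scope const(int)[] input) @safe pure nothrow @nogc
{
    if (input.length < 2)
        return true;
    foreach (i; 0 .. input.length - 1)
        if (input[i] > input[i + 1]) // early exit on first inversion
            return false;
    return true;
}

unittest
{
    int[] empty;
    assert(isSorted(empty));
    assert(isSorted([1]));
    assert(isSorted([1, 2, 2, 3]));
    assert(!isSorted([3, 1, 2]));
}
```

The early-exit branch is what defeats auto-vectorizers; the linked article gets SIMD by comparing blocks of elements and only then deciding whether an inversion occurred.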
Re: Migrating an existing more modern GC to D's gc.d
On Monday, 9 April 2018 at 18:39:11 UTC, Jack Stouffer wrote: On Monday, 9 April 2018 at 18:27:26 UTC, Per Nordlöw wrote: How difficult would it be to migrate an existing modern GC-implementation into D's? Considering no one has done it, very. What's the reason for this being so hard? Too permissive a programming model that has enabled too much bit-fiddling with pointers (and classes)?
Migrating an existing more modern GC to D's gc.d
How difficult would it be to migrate an existing modern GC-implementation into D's? Which kinds of GC's would be of interest? Which attempts have been made already?
Re: Could someone take a look at DIP PR 109?
On Wednesday, 28 March 2018 at 06:43:15 UTC, Shachar Shemesh wrote: https://github.com/dlang/DIPs/pull/109 I submitted it 12 days ago. So far, except for two thumbs up, I got no official reaction of any kind for it. I did get an unofficial list of suggestions from Andrei, which I have now incorporated into the DIP, but I was under the impression that I was supposed to either get rejects or a DIP number after a week. That has not happened so far. For those too lazy to click on the link, the DIP is about adding the ability to hook the implicit move D does with structs in order to update references (internal and/or external). Shachar Good idea! I've experimented with run-time variants [1] of Rust-style borrow-checking to detect range invalidation in my containers, but they can't handle moves because they don't use reference-counted storage. Having `opMove` will make it possible to forbid (via an `assert`) ranges from being invalidated when their associated container is about to be moved, so this is really a very good idea! Alternatively, the borrow-checking logic could be built into a reference-counted storage wrapper without the need for the potential use of `opMove`, at the cost of extra memory indirections. [1] https://github.com/nordlow/phobos-next/blob/f64b94761325b68a52c361ffe36c95fc77c582c7/src/open_hashmap_or_hashset.d#L1308
Re: newCTFE Status March 2018
On Friday, 30 March 2018 at 20:46:32 UTC, Stefan Koch wrote: 85 to 90% maybe. I expect that there will be many bugs which were hidden by newCTFE not supporting classes, which will now be out in the open and have to be dealt with. Also the code is in need of cleanup before I would release it for upstream-inclusion. I tried building your newCTFE_reboot branch but it fails as

expression.d(15724): Deprecation: Implicit string concatenation is deprecated, use "identity comparison of static arrays " ~ "implicitly coerces them to slices, " instead
expression.d(15725): Deprecation: Implicit string concatenation is deprecated, use "implicitly coerces them to slices, " ~ "which are compared by reference" instead
gluelayer.d(61): Deprecation: Symbol ddmd.backend.code_x86.code is not visible from module gluelayer because it is privately imported in module code
ctfe/ctfe_bc.d(4): Deprecation: Symbol ddmd.func.FuncDeclaration is not visible from module ctfe_bc because it is privately imported in module declaration
ctfe/ctfe_bc.d(4): Deprecation: Symbol ddmd.func.CtorDeclaration is not visible from module ctfe_bc because it is privately imported in module declaration
Sizeof BCValue: 56LU
ctfe/ctfe_bc.d(262): Error: module `bc_gccjit_backend` is in file 'ddmd/ctfe/bc_gccjit_backend.d' which cannot be read
import path[0] = /usr/include/dmd/phobos
import path[1] = /usr/include/dmd/druntime/import
posix.mak:338: recipe for target 'dmd' failed
make[1]: *** [dmd] Error 1
make[1]: Leaving directory '/home/per/Work/dmd/src'
posix.mak:8: recipe for target 'all' failed
make: *** [all] Error 2

I'm on Ubuntu 17.10 and building with DMD 2.079.
Re: Deprecating this(this)
On Sunday, 1 April 2018 at 01:56:40 UTC, Jonathan M Davis wrote: Another potential issue is whether any of this does or should relate to https://github.com/dlang/DIPs/pull/109 and its solution for hooking into moves. I'm not at all sure that what happens with that needs to be related to this at all, but it might. - Jonathan M Davis And before we think about `opMove` we should, IMO, make the compiler pass by move in more cases, for instance, in range constructors such as this(Source source) { this.source = source; // last occurrence of `source` can be moved } I'd be happy to help out with adding this in dmd. Andrei has already shown interest in this idea.
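For comparison, here is roughly what has to be written by hand today to avoid the copy, using `std.algorithm.mutation.move`; the proposal is for the compiler to insert such a move automatically at the last use of the parameter (the wrapper type below is only an illustration):

```d
import std.algorithm.mutation : move;

struct Wrapper(Source)
{
    Source source;

    this(Source source)
    {
        // manual today; under the proposal the compiler would move
        // `source` here automatically, since this is its last use
        this.source = move(source);
    }
}
```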
Re: newCTFE Status March 2018
On Friday, 30 March 2018 at 19:48:02 UTC, Stefan Koch wrote: Have a nice easter. Stefan Great, then there's hope.
Re: D, Parasail, Pascal, and Rust vs The Steelman
On Thursday, 22 March 2018 at 11:16:37 UTC, Atila Neves wrote: I wonder how they concluded that. Atila Me too.
Re: Google alert for "dlang"
On Saturday, 2 December 2017 at 17:43:01 UTC, Andrei Alexandrescu wrote: Thanks, Andrei Done. Thanks.
Re: Automatically using stack allocations instead of GC
On Monday, 23 October 2017 at 10:48:37 UTC, Walter Bright wrote: There are no plans at the moment, but it's a good idea that `scope` can make possible. I'm glad you're open to such automatic optimizations, Walter. Making D compilers automate these things, which are cumbersome manual labour in languages such as Rust, is, IMHO, the competitive way forward for D. And how does/should/will this interact with `@nogc`? If it gets allocated on the stack, then it should be compatible with @nogc. Great. I believe good diagnostics (for, in this case, mismatches between allocations and qualifiers) will play a key role in this regard.
Automatically using stack allocations instead of GC
Are there any plans (or is it already happening) to make D compilers automatically use stack allocations when possible in cases like int foo() { auto x = [1, 2]; // should be allocated on the stack return x[0] + x[1]; } where allocations are "small enough" and cannot escape the current scope? And how does/should/will this interact with `@nogc`?
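For reference, the manual equivalent that already works today is a fixed-size (static) array, which is stack-allocated and hence compatible with `@nogc`:

```d
@safe @nogc int foo()
{
    int[2] x = [1, 2]; // static array: lives on the stack, no GC allocation
    return x[0] + x[1];
}
```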
Re: Current limitations of -dip1000
On Wednesday, 11 October 2017 at 03:32:41 UTC, Walter Bright wrote: Thank you! You're very welcome!
Re: newCTFE Status August 2017
On Wednesday, 11 October 2017 at 07:39:47 UTC, Tourist wrote: What about October 2017? I miss your frequent updates on newCTFE. Me too.
Current limitations of -dip1000
I'm trying to figure out how to make my manually written containers have scope-aware element(s)-accessing functions. I've come up with 5 different situations as follows @safe pure nothrow @nogc: struct S(T) { static private struct Range { S!T* _parent; } scope inout(Range) range() inout return { return typeof(return)(&this); } scope inout(T)[] opSlice() inout return { return x[]; } scope inout(T)[] slice() inout return { return x[]; } scope ref inout(T) front() inout return { return x[0]; } scope inout(T)* pointer() inout return { return &x[0]; } T[128] x; } /// this correctly fails int[] testOpSlice() { S!int s; return s[]; // errors with -dip1000 } /// this correctly fails int[] testSlice() { S!int s; return s.slice; // errors with -dip1000 } /// this correctly fails auto testRange() { S!int s; return s.range; // errors with -dip1000 } /// TODO this should fail ref int testFront() { S!int s; return s.front; // should error with -dip1000 } /// TODO this should fail int* testPointer() { S!int s; return s.pointer; // should error with -dip1000 } Compiling this with dmd version 2.076.0-b1 along with -dip25 and -dip1000 flags gives three errors: test_scope.d(42,13): Error: returning `s.opSlice()` escapes a reference to local variable `s` test_scope.d(49,12): Error: returning `s.slice()` escapes a reference to local variable `s` test_scope.d(56,12): Error: returning `s.range()` escapes a reference to local variable `s` It's very nice that the scope-analysis figures out that even the `range` member function contains an escaping pointer to the owning struct. However, the other two `testFront` and `testPointer` don't error. Why are these two simpler cases allowed to escape a scoped reference and pointer which both outlive the lifetime of the owning struct `S`?
Re: Default hashing function for AA's
On Monday, 9 October 2017 at 14:11:13 UTC, RazvanN wrote: We in the UPB dlang group have been having discussions about the hashing functions of associative arrays. In particular, we were wondering why the AA implementation in druntime is not using the hash function implemented in druntime/src/core/internal/hash.hashOf for classes that don't define toHash(). For us, that seems to be a very good default hashing function. Further, I haven't found any instructions on changing the default hash digest for `hashOf`. Is this in conflict with `hashOf` being `pure`? Could the interface to builtin AA's be extended to support changing the default hash algorithm (which in turn `hashOf` will use) upon AA instantiation?
Re: gdc is in
On Tuesday, 3 October 2017 at 22:00:51 UTC, Joakim wrote: On Wednesday, 21 June 2017 at 15:11:39 UTC, Joakim wrote: the gcc tree: https://gcc.gnu.org/ml/gcc/2017-06/msg00111.html Congratulations to Iain and the gdc team. :) I found out because it's on the front page of HN right now, where commenters are asking questions about D. An update, including the latest 2.076 frontend: https://www.phoronix.com/scan.php?page=news_item&px=D-GCC-v3-Patches Does this include DMD and static if?
Re: static array with inferred size
On Wednesday, 20 September 2017 at 18:41:51 UTC, Timon Gehr wrote: Can that be done without breakages? -- Andrei No. Are you thinking about typeof([1,2]) changing from int[] to int[2]?
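For reference, the current behaviour the question is about: an array literal always types as a dynamic array unless a static array type is requested explicitly.

```d
static assert(is(typeof([1, 2]) == int[])); // today: a slice, GC-allocated

int[2] a = [1, 2]; // a static array must be asked for by name
static assert(is(typeof(a) == int[2]));
```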
Re: static array with inferred size
On Wednesday, 20 September 2017 at 09:13:52 UTC, Jonathan M Davis wrote: https://issues.dlang.org/show_bug.cgi?id=12625 - Jonathan M Davis Looks like we should wait for https://github.com/dlang/dmd/pull/7110 to be merged before adding `s` to druntime.
Re: newCTFE Status August 2017
On Monday, 14 August 2017 at 11:25:14 UTC, Stefan Koch wrote: On Tuesday, 1 August 2017 at 21:27:32 UTC, Stefan Koch wrote: [ ... ] Guys, newCTFE is green on 64 and 32bit! I've finally fixed || and &&. For good! whoho ;) Release is coming closer! Wow, I can't wait!
Re: CTFE Status 2
On Tuesday, 6 June 2017 at 04:11:33 UTC, Stefan Koch wrote: On Tuesday, 6 June 2017 at 02:03:46 UTC, jmh530 wrote: On Tuesday, 6 June 2017 at 00:46:00 UTC, Stefan Koch wrote: Time to find this: roughly 2 weeks. Damn. That's some commitment. There is no other way, really. These things need to be fixed. Great work. Keep it up.
zapcc - time to start adding caching to D compilers aswell?
http://www.zapcc.com/
Re: how to do iota(0,256) with ubytes ? (cf need for iotaInclusive)
On Friday, 9 October 2015 at 07:20:43 UTC, John Colvin wrote: For anyone googling for a solution to this, here's a workaround: auto b = iota(0, 256).map!"cast(ubyte)a"; Without string lambdas: auto b = iota(0, 256).map!(a => cast(ubyte)a); I would suggest turning this pattern into a new algorithm, typically called iotaOf(T, B, E)(B begin, E end), called as iotaOf!ubyte(0, 256), and using `std.conv.to` instead.
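A minimal sketch of such an `iotaOf`, assuming the name and signature suggested above (it is not an existing Phobos symbol); using `std.conv.to` bounds-checks each conversion instead of silently truncating like the cast:

```d
import std.algorithm.iteration : map;
import std.conv : to;
import std.range : iota;

/// Hypothetical helper: iota with an explicitly requested element type.
auto iotaOf(T, B, E)(B begin, E end)
{
    return iota(begin, end).map!(to!T);
}

unittest
{
    import std.range.primitives : ElementType;
    auto b = iotaOf!ubyte(0, 256); // covers 0 .. 255, each element a ubyte
    static assert(is(ElementType!(typeof(b)) == ubyte));
}
```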
Re: how to do iota(0,256) with ubytes ? (cf need for iotaInclusive)
On Monday, 12 October 2015 at 08:20:12 UTC, Per Nordlöw wrote: What about adding an overload supporting iota!ubyte(0, 256) ? Ahh, already present of course, according to the declaration auto iota(B, E)( B begin, E end ) if (isIntegral!(CommonType!(B, E)) || isPointer!(CommonType!(B, E))); This will make `B` be ubyte and `E` be int. I guess the current behaviour is to return a range where `ElementType` is `CommonType!(ubyte, int)`, which is `int`.
Re: how to do iota(0,256) with ubytes ? (cf need for iotaInclusive)
On Friday, 9 October 2015 at 02:41:50 UTC, Timothee Cour wrote: of course this doesn't work: auto b=iota(ubyte(0), ubyte(256)); //cannot implicitly convert expression (256) of type int to ubyte What about adding an overload supporting iota!ubyte(0, 256) ?
Re: Categorizing Ranges
On Friday, 9 October 2015 at 10:01:47 UTC, Per Nordlöw wrote: I've Googled a bit on this topic, say: "algorithm visualization" "Software Visualization" seems to be the correct research term.
Re: Categorizing Ranges
On Wednesday, 7 October 2015 at 15:06:55 UTC, Mike Parker wrote: I'm looking for ideas on how to label the ranges returned from take and drop. Some examples of what I think are appropriate categories for other types of ranges: Generative - iota, recurrence, sequence Compositional - chain, roundRobin, transposed Iterative - retro, stride, lockstep XXX - take, drop I'm guessing you're thinking about categorizing the list at http://dlang.org/phobos/std_range.html , right? ;) That would, IMHO, be a nice usability/discoverability improvement, especially for new users! :) Further, I've thought about adding some kind of standardized graphical explanation for the ranges and algorithms in Phobos. I've Googled a bit on this topic, say: "algorithm visualization" but I can't seem to find any concrete work on this topic. Refs or ideas, anyone? What file format would be preferred for such graphical descriptions? I'm guessing SVG would be a good contender. A supercool thing would be if we, with the help of D's marvellous meta-programming and CT/RT-reflection, could auto-generate these visualizations.
Re: Voting for std.experimental.testing
On Thursday, 8 October 2015 at 08:52:04 UTC, Rikki Cattermole wrote: Alright seriously? +/** + * Generate green coloured output on POSIX systems + */ +string green(in string msg) @safe pure const +{ +return escCode(Color.green) ~ msg ~ escCode(Color.cancel); +} Somebody fix please: https://github.com/robik/consoled Irk I don't like it being done like this. I want it done right or not at all pretty much. A further thought about UDAs, especially those with high conflict potential: perhaps they should instead be moved out into e.g. std.stdudas. That way it is not locked into e.g. testing while being reusable. What about using compile-time-only struct wrappers or UDAs for visual attributes such as color, boldness, etc.? For a showcase see my pretty.d (which I plan to propose for inclusion in std.experimental.pretty): https://github.com/nordlow/justd/blob/master/pretty.d It has bitrotten a bit lately but I'll fix it today if you want a live showcase.
Re: Voting for std.experimental.testing
On Thursday, 8 October 2015 at 08:21:58 UTC, Robert burner Schadek wrote: This is the voting thread for inclusion of std.experimental.testing into phobos. Voting ends in 2 weeks, on October 22. Sorry for being late with this but I added two comments in the PR: One more important: https://github.com/D-Programming-Language/phobos/pull/3207/files#r41494484 One less important: https://github.com/D-Programming-Language/phobos/pull/3207/files#r41494229
Re: Improving assert-printing in DMD
On Thursday, 1 October 2015 at 19:04:51 UTC, Andrei Alexandrescu wrote: * I don't think we need a new flag, just make the new behavior work. Should I remove all mentioning of extra compiler flags?
Re: Improving assert-printing in DMD
On Friday, 2 October 2015 at 14:54:08 UTC, Andrei Alexandrescu wrote: My first proposed lowering creates more copies than needed. My second one is therefore probably better, without resorting to compiler magic. -- Andrei I updated the spec. Could you take a look? I'll update the operators next.
Re: Improving assert-printing in DMD
On Friday, 2 October 2015 at 12:15:13 UTC, Per Nordlöw wrote: I guess we only need one symbol name for `onAssertFailed` then instead of `assertBinOp` and `assertUnOp`, right? And two overloads Binary case: onAssertFailed(string op)(e1, e2, __FILE__, ...) Unary case: onAssertFailed(string op)(e, __FILE__, ...) I presume? Because the number of arguments to each overload will be fixed, right? What about the cases assert(f(expr)) assert(symbol) Should `op` be empty in those cases, or should we use yet another overload onAssertFailed(e, __FILE__, ...) for that case?
Re: Improving assert-printing in DMD
On Friday, 2 October 2015 at 11:19:51 UTC, Andrei Alexandrescu wrote: assert(e1 == e2) could be lowered into: { auto a = e1, b = e2; if (a == b) return; onAssertFailed!"=="(a, b, __FILE__, __LINE__, __FUNCTION__, __MODULE__); }() So lowering is kind of like macro expansion for AST nodes, then? Is DMD clever enough to avoid triggering postblits for auto a = e1, b = e2; if (a == b) return; ? Or is that part of the question whether this will work? I guess we only need one symbol name for `onAssertFailed` then instead of `assertBinOp` and `assertUnOp`, right?
Re: Improving assert-printing in DMD
On Friday, 2 October 2015 at 11:54:31 UTC, Atila Neves wrote: That's what I was hoping for. Good, unless anybody has comments to make on the current state of the PR, I'll leave everything as it is now. Fine with me :)
Re: Improving assert-printing in DMD
On Thursday, 1 October 2015 at 17:33:51 UTC, Jack Stouffer wrote: Bikesheading: could you change "being" in "([1,2,3][2] being 3) != ([1,2,4][2] being 4)" and the other examples to "is"? Done.
Re: Improving assert-printing in DMD
On Thursday, 1 October 2015 at 19:04:51 UTC, Andrei Alexandrescu wrote: * I don't think we need a new flag, just make the new behavior work. So you mean that extra diagnostics should kick in when extra overloads are made visible via import of, for instance, `core.assert`? * Should the lowering happen only on the function called if the assertion actually fails? Then no more need for laziness and other complications. Could you explain what you mean by *lowering*, please? I'm currently unsure whether `in L lhs` or `lazy L lhs` should be used and whether or not we should use version(assert) See the added example at http://wiki.dlang.org/DIP83 * Extend to other expressions (!=, ordering etc). How should we categorize expressions? Like this - Unary: assert(UNOP x) - Binary: assert(x BINOP y) - Function Calls: assert(f(x,y)) - Other: assert(x) Does it suffice to just mention these or should I be explicit about exactly which operators for each category should be included?
Re: Improving assert-printing in DMD
On Thursday, 1 October 2015 at 14:37:55 UTC, Andrei Alexandrescu wrote: Whoever wants to work on better assert expression printing: make sure you specify which grammar constructs are supported, and how the parts involved are printed. Expressing semantics via lowering would be great. Write a DIP, discuss, implement. I'll have your six. Andrei A first version: http://wiki.dlang.org/DIP83
Re: Improving assert-printing in DMD
On Thursday, 1 October 2015 at 16:35:51 UTC, Per Nordlöw wrote: Help please. I figured it out.
Re: Improving assert-printing in DMD
On Thursday, 1 October 2015 at 14:37:55 UTC, Andrei Alexandrescu wrote: Whoever wants to work on better assert expression printing: make sure you specify which grammar constructs are supported, and how the parts involved are printed. Expressing semantics via lowering would be great. Write a DIP, discuss, implement. I'll have your six. I registered a user named `nordlow` at the D Wiki but I can't find a way to write a DIP. Help please.
Re: Improving assert-printing in DMD
On Tuesday, 29 September 2015 at 21:02:42 UTC, Nordlöw wrote: As a follow-up to https://github.com/D-Programming-Language/phobos/pull/3207#issuecomment-144073495 I added a long comment about a new more flexible solution to this problem: https://github.com/D-Programming-Language/phobos/pull/3207#issuecomment-144701371
Re: std.experimental.testing formal review
On Wednesday, 9 September 2015 at 15:20:41 UTC, Robert burner Schadek wrote: This post marks the start of the two week review process of std.experimental.testing. Will `runTests` automatically ensure that all pure unittests are parallelized by default and all non-pure ones are serialized? If so, why is the @serial UDA needed? Will it make use of D's builtin threadpool or will every unittest run in its own thread? IMHO, we should strive for threadpool usage here.
Re: Type helpers instead of UFCS
On Saturday, 12 September 2015 at 20:37:37 UTC, BBasile wrote: UFCS is good but there are two huge problems: - code completion in IDE. It'll never work. It is possible. DCD plans to support it: https://github.com/Hackerpilot/DCD/issues/13 I agree that this is a big issue, though, and is one of the most important things to work on.
Re: Top-3 for 2.066
What are yours? Make it possible to define implicit conversions between wrapped types in order to, for instance, correctly implement NotNull for reference types. See: http://stackoverflow.com/questions/21588742/getting-notnull-right?noredirect=1#comment33399977_21588742
Re: Emplacement in D
Nice! Thx, Per