Re: LNK2019 error in the C++ interface
On Thursday, 10 June 2021 at 15:19:27 UTC, kinke wrote:
> Confirmed: https://issues.dlang.org/show_bug.cgi?id=22014

Thank you for the bug report. I'm glad, since I couldn't have handled it myself. The only thing that bothers me is that there is no sign of this problem being fixed. I fear that it may stay that way forever.

On Thursday, 10 June 2021 at 15:19:27 UTC, kinke wrote:
> Wrt. `tagRECT`, this should come in handy (for a druntime fix): https://dlang.org/changelog/2.097.0.html#pragma-mangle-aggregate

I don't see how this helps with the modifications related to tagRECT.
Internal Server Error on reload of dfeed.js
STR:
1. Open http://forum.dlang.org/static-bundle/637528586548394375/dlang.org/js/dlang.js+js/dfeed.js
2. Press reload (F5 or Ctrl+R).
Re: Parallel For
On Tuesday, 15 June 2021 at 06:39:24 UTC, seany wrote:
> ...

This is the best I could do: https://run.dlang.io/is/dm8LBP

For some reason, LDC refuses to vectorize or even just unroll the non-parallel version, and more than one `parallel` corrupts the results. But judging by the results you expected and what you described, you could maybe replace it with a series of `c[] = a[] *op* b[]` array operations? Unless you use conditionals afterwards or do something else that confuses the compiler, it will likely use SSE/AVX instructions, and at worst fall back to basic loop unrolling.
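For reference, the array-operation approach looks like this (a minimal sketch; the arrays and the `+` operator are placeholders for whatever the real computation is):

```d
import std.stdio : writeln;

void main()
{
    int[] a = [1, 2, 3, 4];
    int[] b = [10, 20, 30, 40];
    int[] c = new int[](a.length);

    // Array-wise (vector) operation: the compiler is free to emit
    // SSE/AVX instructions here, since there is no explicit loop body
    // that could introduce dependencies between iterations.
    c[] = a[] + b[];

    writeln(c); // [11, 22, 33, 44]
}
```

Note that both operands and the destination must already have the same length; array operations never allocate.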
Re: How to translate this C macro to D mixin/template mixin?
On Tuesday, 15 June 2021 at 12:39:40 UTC, Dennis wrote:
> On Tuesday, 15 June 2021 at 12:18:26 UTC, VitaliiY wrote:
>> [...]
>
> ```D
> enum string ADDBITS(string a, string b) = `
> {
>     bitbuffer = (bitbuffer<<(`~a~`))|((`~b~`)&((1<<`~a~`)-1));
>     numbits += (`~a~`);
>     mixin(STOREBITS);
> }`;
> // on use: ADDBITS(varA, varB) becomes
> mixin(ADDBITS!("varA", "varB"));
> ```
>
> [...]

Thank you, Dennis. I tried this kind of mixin, but a and b are of type `int`, so it's confusing to use them as mixin arguments.
Re: How to translate this C macro to D mixin/template mixin?
On Tuesday, 15 June 2021 at 12:38:15 UTC, Ali Çehreli wrote:
> On 6/15/21 5:18 AM, VitaliiY wrote:
> > STOREBITS and ADDBITS use variables defined in STARTDATA
>
> If possible in your use case, I would put those variables in a struct type and make add() a member function. However, a similar type already exists as std.bitmanip.BitArray.
>
> Ali

Thank you, Ali! The idea of a member function seems interesting.
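Ali's struct idea could be sketched like this (a minimal, made-up `BitWriter`; the names and the growable `output` array are assumptions for the example, not part of the original C code, which writes through a caller-supplied buffer):

```d
import std.stdio : writeln;

// The macro's "shared" variables become struct state, and ADDBITS/STOREBITS
// become member functions that can see that state directly.
struct BitWriter
{
    size_t ressize;
    int numbits;
    ulong bitbuffer;
    ubyte[] output; // simplification: grow an array instead of a raw buffer

    void addBits(ubyte a, int b)
    {
        bitbuffer = (bitbuffer << a) | (b & ((1UL << a) - 1));
        numbits += a;
        storeBits();
    }

    void storeBits()
    {
        while (numbits >= 8)
        {
            output ~= cast(ubyte)(bitbuffer >> (numbits - 8));
            numbits -= 8;
            ++ressize;
        }
    }
}

void main()
{
    BitWriter w;
    w.addBits(8, 0xAB);
    writeln(w.output); // [171]
}
```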
Re: What exactly gets returned with extern(C) export string func() ?
On Sunday, 13 June 2021 at 21:13:33 UTC, frame wrote:
> On Sunday, 13 June 2021 at 10:02:45 UTC, cc wrote:
>> it seems to work as expected with the same C# code. Does D explicitly disallow slices as an extern(C) export parameter type?
>
> The spec says that there is no equivalent to type[]. You get a type* instead.

I can't seem to get it to work as a return type, but interestingly it does work as an out/pass-by-ref parameter.

D:
```d
export void D_testString(out string ret) {
    ret = "hello".idup;
}
```

C#:
```c#
public struct DString {
    public ulong length;
    public IntPtr ptr;
    public string str {
        get {
            byte[] b = new byte[length];
            for (int i = 0; i < (int)length; i++) {
                b[i] = Marshal.ReadByte(ptr, i);
            }
            return Encoding.UTF8.GetString(b);
        }
    }
}

[DllImport("test.dll")]
private static extern void D_testString(out DString dd);

public static string testString() {
    DString d;
    D_testString(out d);
    return d.str;
}
```
Re: In general, who should do more work: popFront or front?
On 6/15/21 12:24 AM, surlymoor wrote:
> All my custom range types perform all their meaningful work in their respective popFront methods, in addition to its expected source data iteration duties. The reason I do this is because I swear I read in a github discussion that front is expected to be O(1), and the only way I can think to achieve this is to stash the front element of a range in a private field; popFront would thus also set this field to a new value upon every call, and front would forward to it. (Or front would be the cache itself.) At the moment, I feel that as long as the stashed front element isn't too "big" (For some definition of big, I guess.), that built-in caching should be fine. But is this acceptable? What's the best practice for determining which range member should perform what work? (Other than iterating, of course.)

IMO, `front` should not be figuring out which element is next. That is the job of `popFront`. But that doesn't mean `front` cannot do work. `map` is a prime example. If you use `map`, though, you know what you are signing up for: `front` is expected to be called multiple times to get the same data.

One thing to think about is that there is a `cache` wrapper range which can store it for you. This way, you give your users the option of caching or not.

Each situation is different, and depends on the underlying mechanisms needed to make the range operate.

-Steve
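The `cache` wrapper Steve mentions lives in std.algorithm (std.algorithm.iteration). A minimal sketch of what it buys you; `expensive` and the call counter are made up for the example:

```d
import std.algorithm : cache, map;
import std.stdio : writeln;

int calls; // counts how many times the mapped function actually runs

int expensive(int x)
{
    ++calls;
    return x * x;
}

void main()
{
    // Without cache, every read of r.front would re-run expensive().
    auto r = [1, 2, 3].map!expensive.cache;

    auto f1 = r.front;
    auto f2 = r.front; // served from the cached value, no recomputation
    assert(f1 == f2);

    writeln(calls); // 1: front was read twice, but expensive ran once
}
```

So the range author can keep `front` cheap-by-convention and let callers opt into caching when the element computation is costly.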
Re: Parallel For
On Tuesday, 15 June 2021 at 09:09:29 UTC, Ali Çehreli wrote:
> On 6/14/21 11:39 PM, seany wrote:
> > [...]
>
> I gave an example of it in my DConf Online 2020 presentation as well: https://www.youtube.com/watch?v=dRORNQIB2wA&t=1324s
>
> > [...]
>
> That is violating a parallelism requirement that loop bodies must be independent. (I use a similar example during my presentation above.) You need to either pre-allocate the array (as jfondren said) or not store the elements at all but use them independently in the loop body.
>
> Yes, std.concurrency is another option. I show a "recipe" of usage here: https://www.youtube.com/watch?v=dRORNQIB2wA&t=1737s
>
> Ali

Ali Çehreli is an angel, thank you.
Re: In general, who should do more work: popFront or front?
On Tue, Jun 15, 2021 at 02:20:11PM +, Paul Backus via Digitalmars-d-learn wrote:
[...]
> It's a time-space tradeoff. As you say, caching requires additional
> space to store the cached element. On the other hand, *not* caching
> means that you spend unnecessary time computing the next element in
> cases where the range is only partially consumed. For example:
>
> ```d
> import std.range: generate, take;
> import std.algorithm: each;
> import std.stdio: writeln;
>
> generate!someExpensiveFunction.take(3).each!writeln;
> ```
>
> Naively, you'd expect that `someExpensiveFunction` would be called 3
> times--but it is actually called 4 times, because `generate` does its
> work in its constructor and `popFront` instead of in `front`.

One way to address this is to make the computation lazy: the element is only computed once, on demand, and once computed it's cached. But of course, this is probably overkill for many ranges.

So the long answer is: it depends. :-)

T

--
It is not the employer who pays the wages. Employers only handle the money. It is the customer who pays the wages. -- Henry Ford
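The lazy-compute-then-cache idea could be hand-rolled roughly like this (a sketch only; `LazyCached` and its lambda are made up, and the range is kept infinite for brevity):

```d
import std.stdio : writeln;

// front is computed on first access and cached; popFront only
// invalidates the cache, so an element that is never read is never computed.
struct LazyCached(alias computeElement)
{
    int index;    // position in the underlying sequence
    int cached;   // cached result for the current front
    bool valid;   // has `cached` been computed for this index?

    enum empty = false; // infinite range, for simplicity

    int front()
    {
        if (!valid)
        {
            cached = computeElement(index); // compute once, on demand
            valid = true;
        }
        return cached;
    }

    void popFront()
    {
        ++index;
        valid = false; // next front() recomputes
    }
}

void main()
{
    auto r = LazyCached!(i => i * i)();
    writeln(r.front); // computed here
    writeln(r.front); // served from the cache
    r.popFront();     // cheap: nothing is computed until front is read
    writeln(r.front);
}
```

Unlike `generate`, popping past elements you never read costs nothing, which fixes the "4 calls for 3 elements" effect in the quoted example.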
Re: In general, who should do more work: popFront or front?
On Tuesday, 15 June 2021 at 04:24:09 UTC, surlymoor wrote:
> All my custom range types perform all their meaningful work in their respective popFront methods, in addition to its expected source data iteration duties. The reason I do this is because I swear I read in a github discussion that front is expected to be O(1), and the only way I can think to achieve this is to stash the front element of a range in a private field; popFront would thus also set this field to a new value upon every call, and front would forward to it. (Or front would be the cache itself.) At the moment, I feel that as long as the stashed front element isn't too "big" (For some definition of big, I guess.), that built-in caching should be fine. But is this acceptable? What's the best practice for determining which range member should perform what work? (Other than iterating, of course.)

It's a time-space tradeoff. As you say, caching requires additional space to store the cached element. On the other hand, *not* caching means that you spend unnecessary time computing the next element in cases where the range is only partially consumed. For example:

```d
import std.range: generate, take;
import std.algorithm: each;
import std.stdio: writeln;

generate!someExpensiveFunction.take(3).each!writeln;
```

Naively, you'd expect that `someExpensiveFunction` would be called 3 times--but it is actually called 4 times, because `generate` does its work in its constructor and `popFront` instead of in `front`.
Re: In general, who should do more work: popFront or front?
On 6/14/21 10:17 PM, mw wrote:
> I think there is another convention (although it's not formally enforced, but should be):
>
> -- `obj.front() [should be] const`, i.e. it shouldn't modify `obj`, so it can be called multiple times at any given state and produce the same result

In other words, front() should be "idempotent".

To the OP, there is the following presentation that is related and touches on similar concerns:

https://forum.dlang.org/thread/diexjstekiyzgxlic...@forum.dlang.org

Ali
Re: How to translate this C macro to D mixin/template mixin?
On Tuesday, 15 June 2021 at 12:18:26 UTC, VitaliiY wrote:
> It's simple with STARTDATA as mixin, but STOREBITS and ADDBITS use variables defined in STARTDATA scope, so I can't understand how to do mixin template with it.

If the code duplication isn't too bad, consider just expanding the C macros and translating that. I've noticed that some C programmers like to use complex macros just to save 10 lines.

Otherwise, to make STOREBITS and ADDBITS access variables from STARTDATA, you can define them as inner functions:

```D
void f() {
    size_t ressize = 0;
    char* blockstart;
    char* buffer; // in the original C, a parameter of the enclosing function
    int numbits;
    ulong bitbuffer = 0;
    size_t size;

    // The C macro's `return 0;` returned from the enclosing function;
    // here storeBits returns false when the buffer is exhausted instead.
    bool storeBits() {
        while (numbits >= 8) {
            if (!size) return false;
            *(buffer++) = cast(char)(bitbuffer >> (numbits - 8));
            numbits -= 8;
            ++ressize;
            --size;
        }
        return true;
    }
}
```

For the most literal translation, you can use a string mixin. You can't use a template mixin here, since those can't insert code, only declarations:

```D
enum string ADDBITS(string a, string b) = `
{
    bitbuffer = (bitbuffer<<(`~a~`))|((`~b~`)&((1<<`~a~`)-1));
    numbits += (`~a~`);
    mixin(STOREBITS);
}`;
// on use: ADDBITS(varA, varB) becomes
mixin(ADDBITS!("varA", "varB"));
```
Re: How to translate this C macro to D mixin/template mixin?
On 6/15/21 5:18 AM, VitaliiY wrote:
> STOREBITS and ADDBITS use variables defined in STARTDATA

If possible in your use case, I would put those variables in a struct type and make add() a member function. However, a similar type already exists as std.bitmanip.BitArray.

Ali
How to translate this C macro to D mixin/template mixin?
Could anybody help with translation of this C macro to D mixin/mixin template? Here a is an unsigned char and b is an int. It's simple with STARTDATA as a mixin, but STOREBITS and ADDBITS use variables defined in STARTDATA's scope, so I can't understand how to do a mixin template with it.

```c
#define STARTDATA \
    size_t ressize=0; \
    char *blockstart; \
    int numbits; \
    uint64_t bitbuffer=0;

#define STOREBITS \
    while(numbits >= 8) \
    { \
        if(!size) return 0; \
        *(buffer++) = bitbuffer>>(numbits-8); \
        numbits -= 8; \
        ++ressize; \
        --size; \
    }

#define ADDBITS(a, b) \
    { \
        bitbuffer = (bitbuffer<<(a))|((b)&((1<<(a))-1)); \
        numbits += (a); \
        STOREBITS; \
    }
```
Re: Parallel For
On 6/14/21 11:39 PM, seany wrote:
> I know that D has parallel foreach [like this](http://ddili.org/ders/d.en/parallelism.html).

I gave an example of it in my DConf Online 2020 presentation as well: https://www.youtube.com/watch?v=dRORNQIB2wA&t=1324s

> int[] c;
> foreach (aa; parallel(a)) {
>     foreach (bb; parallel(b)) {
>         c ~= aa + bb;

That is violating a parallelism requirement that loop bodies must be independent. (I use a similar example during my presentation above.) You need to either pre-allocate the array (as jfondren said) or not store the elements at all but use them independently in the loop body.

Yes, std.concurrency is another option. I show a "recipe" of usage here: https://www.youtube.com/watch?v=dRORNQIB2wA&t=1737s

Ali
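The pre-allocation approach could be sketched like this, assuming the goal is every pairwise sum, as in the quoted code. The row-major index layout is a choice made for the example:

```d
import std.parallelism : parallel;
import std.stdio : writeln;

void main()
{
    int[] a = [1, 2, 3, 4, 5, 6, 7, 8, 9];
    int[] b = [11, 12, 13, 14, 15, 16, 17, 18];

    // Pre-allocate the result so each iteration writes to its own slot.
    // No two loop bodies touch the same memory, so no locking is needed
    // and the loop bodies are independent, as parallel() requires.
    auto c = new int[](a.length * b.length);

    foreach (i, aa; parallel(a)) // parallel foreach also supports an index
    {
        foreach (j, bb; b) // inner loop kept serial; the outer one is parallel
        {
            c[i * b.length + j] = aa + bb;
        }
    }

    writeln(c.length); // 72
}
```

Unlike appending with `~=`, every element's destination is known up front, so the result is deterministic as well as race-free.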
Re: Dynamically allocated Array mutable but non resizeable
On Monday, 14 June 2021 at 17:34:00 UTC, Steven Schveighoffer wrote:
> D doesn't have head-const. So you must hide the mutable implementation to get this to work. You'd want to do this anyway, since you don't want to directly use the pointer for anything like indexing (it should first validate the index is valid, at least in an assert).
>
> -Steve

Seems like I'll have to have a look at operator overloading. Thanks for the clarification!
Re: Parallel For
On Tuesday, 15 June 2021 at 07:41:06 UTC, jfondren wrote:
> On Tuesday, 15 June 2021 at 06:39:24 UTC, seany wrote:
>> [...]
>
> add a `writeln(c.length);` in your inner loop and consider the output. If you were always pushing to the end of c, then only unique numbers should be output. But I see e.g. six occurrences of 0, four of 8 ...
>
> [...]

I am trying to make such huge O(N^4) or O(N^5) algorithms faster. Wouldn't all these calculations of a unique `i` make the program slow? I would like to know how to do this properly.
Re: Parallel For
On Tuesday, 15 June 2021 at 06:39:24 UTC, seany wrote:
> What am I doing wrong?

Add a `writeln(c.length);` in your inner loop and consider the output. If you were always pushing to the end of c, then only unique numbers should be output. But I see e.g. six occurrences of 0, four of 8 ...

Here's a non-solution:

```d
import std;
import core.sync.mutex;

void main() {
    int[] a = [1, 2, 3, 4, 5, 6, 7, 8, 9];
    int[] b = [11, 12, 13, 14, 15, 16, 17, 18];
    int[] c;
    shared Mutex mtx = new shared Mutex();
    foreach (aa; parallel(a)) {
        foreach (bb; parallel(b)) {
            mtx.lock_nothrow();
            c ~= aa + bb;
            mtx.unlock_nothrow();
        }
    }
    writeln(c);
}
```

That solves the inconsistent access to c, but the parallelism is probably pointless. Real solutions might include:

1. `c[i] = aa + bb;`, with a calculated unique i per assignment.
2. std.concurrency and message passing to build c?
3. using core.atomic for i?
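Option 3 could be sketched like this; note the slot order in `c` becomes nondeterministic, and this is only a sketch of the atomic-counter idea, not a benchmarked recommendation:

```d
import std.parallelism : parallel;
import core.atomic : atomicOp;
import std.stdio : writeln;

void main()
{
    int[] a = [1, 2, 3, 4, 5, 6, 7, 8, 9];
    int[] b = [11, 12, 13, 14, 15, 16, 17, 18];

    auto c = new int[](a.length * b.length);
    shared size_t next; // next free slot in c, claimed atomically

    foreach (aa; parallel(a))
    {
        foreach (bb; b)
        {
            // Atomically claim a unique index: atomicOp!"+=" returns the
            // incremented value, so subtracting 1 gives this thread's slot.
            // No slot is ever written twice, but element order is arbitrary.
            immutable i = atomicOp!"+="(next, 1) - 1;
            c[i] = aa + bb;
        }
    }

    writeln(c.length); // 72
}
```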