Re: Hello dears, where I can deploy my vibe-d project? Someone know any hosting?
On Thursday, 3 October 2024 at 08:51:12 UTC, Danic wrote: I want to know where publish mi D web You didn't say what platform you were comfortable operating. For Linux, I've often used Debian on a Linode nano instance. $5/month, and with an efficient app, by the time you outgrow it, you can afford one of their $10/month or higher VPS instance upgrades. Andy
Re: Templates considered impressive
On Tuesday, 1 October 2024 at 11:45:35 UTC, monkyyy wrote: On Tuesday, 1 October 2024 at 05:44:16 UTC, H. S. Teoh wrote: why spend the time and effort when you could have just done:

```
import std.conv;
```

theres a bunch of relivent tradeoffs and phoboes doesnt make a good set of them

To be fair, although Phobos sometimes has shockingly large code expansion for modest functions, in this case `"123.456".to!double` comes to 27k, which seems quite reasonable, even to this old 1802/6502/8080/z80/PDP-11 coder. Thanks for the pointers to the numerous ways std.conv can be used! Andy
Re: Best practices for class instance variables as parameters
On Saturday, 28 September 2024 at 18:16:55 UTC, Ian wrote: Hi, I'm coming from C and some C++ so the way D stores class instance variables is new to me. If I'm not mistaken the basic unadorned instance variable is like a "hidden" pointer. So, when passing class instance variables to a function, what would be the point of passing a pointer or ref?

An instance of a class is, of course, an object. The instance's variables can be ints or floats or structs. Values. The instance has storage right there to hold the value. An instance can also reference another class instance; underneath, that's a pointer.

I think I answered myself, in that they would be pointers or references to the variable that holds the... hidden pointer to the class instance.

I think I see you assuming that how an instance variable treats values is different from how a local variable or a struct field would treat a value. My experience is that they're all the same. The important difference between a struct and a class instance is that structs want to be values, and you have to go to extra trouble to work with pointers to them. Instances want to be references, and you have to go to trouble (a shallow or deep copy, presumably) if you want a value copy. But all this applies equally to instance variables and a struct's fields.

Now I'm unsure. When I pass a class instance to a function by value, I'm not creating a copy of the instance, am I?

No, you aren't. (Now let the much deeper Dlang minds sweep in and correct me.)
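To make the copy semantics concrete, here's a small sketch (my own example, not from the thread): passing a class instance copies only the reference, while passing a struct copies the whole value.

```d
import std.stdio : writeln;

class C { int x; this(int n) { x = n; } }
struct S { int x; }

void bumpClass(C c)  { c.x += 1; }  // the reference is copied; the object is not
void bumpStruct(S s) { s.x += 1; }  // the whole struct is copied

void main() {
    auto c = new C(1);
    bumpClass(c);
    writeln(c.x);    // 2 -- caller sees the change through the shared reference

    auto s = S(1);
    bumpStruct(s);
    writeln(s.x);    // 1 -- only the callee's copy was changed
}
```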
Re: assert
On Thursday, 12 September 2024 at 22:34:04 UTC, user1234 wrote: On Wednesday, 11 September 2024 at 10:08:29 UTC, ryuukk_ wrote: On Wednesday, 11 September 2024 at 09:14:39 UTC, Nick Treleaven wrote: On Wednesday, 11 September 2024 at 08:08:45 UTC, ryuukk_ wrote: [...] I again apologies for being wrong and i apologies again for trying to improve things, i didn't meant to do that, i now may leave again Oh no please stay, your unrecognisable passive aggressive style is so useful. To put it more gently, ryuukk--and following the generally congenial and informative flavor of communications here--you may not realize that your messages have an abrasive feel to them. It really does make a difference to be polite and even respectful when people seek to help you by answering your questions. Showing gratitude makes it much more likely that these valuable contributors stay around and answer newbie questions in the future. Andy
Re: Can the send function send an array?
On Tuesday, 10 September 2024 at 13:14:05 UTC, Fox wrote: // I am learning how to send and receive data. The following is my intention, but it cannot be compiled. // aliases to mutable thread-local data not allowed, what does it mean? thank you.

D uses the type system to make you be explicit about which data is shared between potentially concurrent threads. You need the data to be `shared` (or immutable) before you can send it between threads. Andy
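As a minimal sketch of one fix (names and data are my own invention, not the original poster's code): making the array's elements immutable removes the mutable thread-local aliasing, so std.concurrency accepts the send.

```d
import std.concurrency : spawn, send, receiveOnly, ownerTid;

void worker() {
    // immutable data has no mutable aliasing, so it may cross threads freely
    auto arr = receiveOnly!(immutable(int)[]);
    ownerTid.send(arr.length);
}

void main() {
    immutable(int)[] data = [1, 2, 3];
    auto tid = spawn(&worker);
    tid.send(data);
    assert(receiveOnly!size_t == 3);
}
```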
Re: How to find the right function in the Phobos library?
On Wednesday, 21 August 2024 at 20:45:10 UTC, IchorDev wrote: You should’ve probably considered using the equivalent function from Phobos because it’s a D function so it can be inlined and such: https://dlang.org/library/std/bitmanip/native_to_big_endian.html Brilliant, that API gives me exactly what I'd want. Thank you. Andy
Re: How to find the right function in the Phobos library?
On Saturday, 17 August 2024 at 17:31:53 UTC, Steven Schveighoffer wrote: On Saturday, 17 August 2024 at 05:28:37 UTC, Bruce wrote: What is the best way to search for a function in the Phobos library? Go to dlang.org, select documentation, then library reference. Pick any module and click on it. In the upper right, switch the docs from stable to ddox. Now you can use the search bar and it is interactive. Typing in indexOf found it right away.

I'm doing some network programming, and have run things down with casts and all to the point where I have an IPv4 address. The documentation says it's in "host order", so obviously a 32-bit number. My C days tell me htonl is what's needed, but it's nowhere to be found in the API index? I did a search and it's apparently under core/sys, but "sys" isn't included in the online documentation? Doing bulk searches in /usr/lib/ldc/x86_64-linux-gnu/include/d/core/sys lets me run it down to three places, of which I'd guess posix/arpa/inet.d is the one to use. But this all seems a little bit harder than it might be? A map of C or Python APIs to their Dlang counterparts might be the easiest way to let people find things.
Re: Pointer to dlang spec for this alias construct?
On Monday, 17 June 2024 at 05:05:06 UTC, Jonathan M Davis wrote: alias Unshared(T) = T; alias Unshared(T: shared U, U) = U; ... Unshared is an eponymous template. https://dlang.org/spec/template.html#implicit_template_properties And it's using a shortcut syntax. ... The second template uses a template specialization https://dlang.org/spec/template.html#parameters_specialization

No wonder I couldn't find it in the spec; I was looking for an "alias" language feature. alias as used here leans on a template mechanism: specialization. It might have taken many readings of the spec to have hunted that down, so thank you. It's quite an interesting mechanism. D is quite easy for a C programmer to get started with, but it certainly has its depths! Andy
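The mechanism can be seen in a few lines; the static asserts below are my own illustration of which of the two eponymous templates matches.

```d
alias Unshared(T) = T;
alias Unshared(T : shared U, U) = U;

// The specialization matches when T has the form "shared U" and peels
// the qualifier off; otherwise the general template returns T unchanged.
static assert(is(Unshared!(shared int) == int));
static assert(is(Unshared!int == int));

void main() { }
```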
Pointer to dlang spec for this alias construct?
In the alias: alias Unshared(T) = T; alias Unshared(T: shared U, U) = U; as used in: cast(Unshared!mytype)value turns a mytype with shared attribute into one without shared. I deduce the alias is using some sort of type matching and decomposition? I've read through the language spec, and didn't spot this mechanism. Can somebody tell me what section of the spec covers this very interesting mechanism? Thanks in advance, Andy
Unintentional sharing?
I was using instance initialization which allocated a new object. My intention was that this initialization would happen per-instance, but all instances appear to share the same sub-object. That is, f1.b and f2.b appear to point to a single object. Obviously I moved the `new` into the constructor instead, but I hadn't appreciated that initial instance values are calculated once. Interestingly, this makes it similar to how Python calculates default argument values for functions.

```
class Bar {
    int z = 3;
}

class Foo {
    auto b = new Bar();
}

void main() {
    import std.stdio : writeln;

    auto f1 = new Foo(), f2 = new Foo();
    f1.b.z = 0;
    writeln(f2.b.z);    // prints 0, not 3
}
```
Re: How to pass in reference a fixed array in parameter
On Tuesday, 4 June 2024 at 12:22:23 UTC, Eric P626 wrote: I tried to find a solution on the internet, but could not find anything, I stumble a lot on threads about Go or Rust language even if I specify "d language" in my search. Aside from the excellent answer already present, I wanted to mention that searching with "dlang" has helped target my searches. Welcome to D! (From another newbie.) Andy
Re: Socket and spawn()
On Sunday, 2 June 2024 at 17:46:09 UTC, bauss wrote: If anything you should use a thread pool that each handles a set of sockets, instead of each thread being a single socket. Yup, thread pool it is. I'm still fleshing out the data structure which manages the incoming work presented to the pool, but here's what I have so far: https://sources.vsta.org:7100/dlang/file?name=tiny/rotor.d&ci=tip Andy
Re: Socket and spawn()
On Friday, 31 May 2024 at 16:59:08 UTC, Jonathan M Davis wrote: Strictly speaking, unless you're dealing with a module or static-level variable, the object is not in TLS. It's treated as thread-local by the type system, and the type system will assume that no other thread has access to it, but you can freely cast it to shared or immutable and pass it across threads. It's just that it's up to you to make sure that you don't have a thread-local reference to shared data that isn't protected in a fashion that accessing the thread-local references is guaranteed to be thread-safe (e.g. the appropriate mutex has been locked).

Thank you; this is the most complete explanation I've found yet for how to look at data sharing in D.

On the other hand, if you're actively sharing an object across threads, then you cast it to shared and give it to the other thread. But then you have to use an appropriate thread-synchronization mechanism (likely a mutex in the case of a socket) to ensure that accessing the object is thread-safe.

Speaking as an old kernel engineer for the Sequent multiprocessor product line, this is all very comfortable to me. I'm very glad that D has a suitable selection of spinlocks, process semaphores, and memory atomic operations. I can work with this!

In any case, you can freely cast between thread-local and shared. It's just that you need to be sure that when you do that, you're not violating the type system by having a thread-local reference to shared data access that shared data without first protecting it with a mutex.

That was the trick for me; "TLS" implied to me that an implementation would be free to arrange that the address of a variable in one thread's TLS would not necessarily be accessible from another thread. Now I'm clearer on the usage of the term WRT the D runtime. All good. Thanks again, Andy
Re: Socket and spawn()
On Friday, 31 May 2024 at 19:48:37 UTC, kdevel wrote: Have you taken into consideration that each of the (pre-spawned) threads can call accept()? Your program may also accept in multiple processes on the same socket. [1] Yes, but I am planning on some global behavior--mostly concerning resource consumption--where that would make implementing the policy harder. I've indeed done the cast-to-shared and then cast-to-unshared and it's working fine. BTW, if the strategy forward is where the type system is going to assist with flagging code paths requiring allowance for multiple threads, it would be nice if the modifiers were available symmetrically. "shared" and "unshared", "mutable" and "immutable", and so forth? I'm using: alias Unshared(T) = T; alias Unshared(T: shared U, U) = U; and that's fine, but for core semantics of the language, it might make sense to treat these as first class citizens. Andy
Socket and spawn()
I'm coding a server which takes TCP connections. I end up in the main thread with .accept() which hands me a Socket. I'd like to hand this off to a spawn()'ed thread to do the actual work. Aliases to mutable thread-local data not allowed. Is there some standard way to get something which _isn't_ in TLS? Or do I have to drop back to file descriptors and do my own socket handling? TIA, Andy
Re: Problem with clear on shared associative array?
On Monday, 27 May 2024 at 04:04:03 UTC, mw wrote: Pls NOTE: it is a `sharded` (meaning trunk-ed) NON-concurrent map, not `shared` concurrent map. Assuming I put it in shared memory, in what way is it not able to be used concurrently? It seems to have the needed lock operations? Thanks, Andy
Re: Problem with clear on shared associative array?
On Sunday, 26 May 2024 at 20:00:50 UTC, Jonathan M Davis wrote: No operation on an associative array is thread-safe. As such, you should not be doing _any_ operation on a shared AA without first locking a mutex to protect it. Then you need to cast away shared to access or mutate it or do whatever it is you want to do with it other than pass it around. And then when you're done, you make sure that no thread-local references to the AA exist, and you release the mutex. ...

Thank you, that's exactly the big-picture explanation I was hoping for. For others wrestling with this issue, I found out how to cast to unshared at this article: https://forum.dlang.org/thread/jwasqvrvkpqzimlut...@forum.dlang.org Andy
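A sketch of that lock-then-cast-away-shared pattern (module layout, names, and the write-back are my own; the write-back covers the case where the AA is first allocated inside the locked region):

```d
import core.sync.mutex : Mutex;

shared string[int] dict;
__gshared Mutex dictLock;

shared static this() { dictLock = new Mutex(); }

void put(int k, string v) {
    dictLock.lock();
    scope(exit) dictLock.unlock();
    // Safe only while the lock is held and no thread-local
    // reference escapes this scope.
    auto local = cast(string[int]) dict;
    local[k] = v;
    dict = cast(shared) local;   // store back in case the AA was just allocated
}

string get(int k) {
    dictLock.lock();
    scope(exit) dictLock.unlock();
    auto local = cast(string[int]) dict;
    return local.get(k, "");
}

void main() {
    put(404, "not found");
    assert(get(404) == "not found");
    assert(get(500) == "");
}
```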
Problem with clear on shared associative array?
The following code fails to compile; it appears from the error message that the library's clear() function is not ready to act on a shared AA?

```
synchronized class F {
  private:
    string[int] mydict;

  public:
    void clear() {
        this.mydict.clear();
    }
}

void main() {
    auto f = new shared F();
    f.clear();
}
```
Parallel safe associative array?
I was playing with parallel programming, and experienced "undefined behavior" when storing into an Associative Array in parallel. Guarding the assignments with a synchronized barrier fixed it, of course. And obviously loading down your raw AA with thread barriers would be foolish. But this set me searching through the library for a standard Associative Array construct which _is_ thread safe? It didn't jump out at me. I know I can place such a thing within a synchronized class, but I was wondering if there's a standard AA which has the standard usage but is safe when called in parallel? Thanks, Andy
Re: FIFO
On Sunday, 12 May 2024 at 22:03:21 UTC, Ferhat Kurtulmuş wrote: https://dlang.org/phobos/std_container_slist.html This is a stack, isn't it? LIFO? Ahh yes. Then use dlist

Thank you. I read its source, and was curious, so I wrote a small performance measurement: put 10,000 things in a FIFO, pull them back out, and loop around that 10,000 times. My FIFO resulted in:

real    0m1.589s
user    0m1.585s
sys     0m0.004s

And the dlist-based one:

real    0m4.731s
user    0m5.211s
sys     0m0.308s

Representing the FIFO as a linked list clearly has its cost, but I found the increased system time interesting. OS memory allocations, maybe? The code is a spaghetti of fifo/dlist versions, but it seemed the easiest way to see the two APIs being used side by side:

```
version(fifo) {
    import tiny.fifo : FIFO;
} else {
    import std.container.dlist : DList;
}

const uint ITERS = 10_000;
const uint DEPTH = 10_000;

void main() {
    version(fifo) {
        auto d = FIFO!uint();
    } else {
        auto d = DList!uint();
    }
    foreach(_; 0 .. ITERS) {
        foreach(x; 0 .. DEPTH) {
            version(fifo) {
                d.add(x);
            } else {
                d.insertBack(x);
            }
        }
        foreach(x; 0 .. DEPTH) {
            version(fifo) {
                assert(x == d.next());
            } else {
                assert(x == d.front());
                d.removeFront();
            }
        }
    }
}
```
Re: FIFO
On Sunday, 12 May 2024 at 19:45:44 UTC, Ferhat Kurtulmuş wrote: On Saturday, 11 May 2024 at 23:44:28 UTC, Andy Valencia wrote: I need a FIFO for a work scheduler, and nothing suitable jumped out at me. ... https://dlang.org/phobos/std_container_slist.html This is a stack, isn't it? LIFO? Andy
FIFO
I need a FIFO for a work scheduler, and nothing suitable jumped out at me. I wrote the following, but as a newbie, would be happy to receive any suggestions or observations. TIA!

```
/*
 * fifo.d
 *      FIFO data structure
 */
module tiny.fifo;

import std.exception : enforce;

const uint GROWBY = 16;

/*
 * This is a FIFO, with "hd" walking forward and "tl" trailing
 * behind:
 *
 *       tl              hd  <- Add here next
 *       v               v
 * | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7
 *
 * Mildly complicated by modulo-size indexing.
 */
struct FIFO(T) {
    T[] items;
    ulong hd, tl, length;

    void add(T t) {
        // Make more room when needed
        if (this.items.length == this.length) {
            assert(this.hd == this.tl);

            // Add room and shuffle current contents
            auto olen = this.items.length;
            auto newlen = olen + GROWBY;
            this.items.length = newlen;
            this.tl = (this.tl + GROWBY) % newlen;

            // Shuffle what we're butted up against to their
            //  new position at the top of this.items[]
            ulong moved = olen - this.hd;
            this.items[$ - moved .. $] =
                this.items[this.hd .. this.hd + moved];
        }

        // Add item at next position
        this.items[hd] = t;
        this.hd = (this.hd + 1) % this.items.length;
        this.length += 1;
    }

    // Give back next
    T next() {
        enforce(this.length > 0, "next() from empty FIFO");
        this.length -= 1;
        auto res = this.items[this.tl];
        this.tl = (this.tl + 1) % this.items.length;
        return res;
    }
}

unittest {
    auto f = FIFO!uint();
    f.add(1); f.add(2); f.add(3);
    assert(f.next() == 1);
    assert(f.next() == 2);
    assert(f.next() == 3);
    assert(f.length == 0);

    // Now overflow several times
    f = FIFO!uint();
    foreach(x; 0 .. GROWBY * 3 + GROWBY/2) {
        f.add(x);
    }
    foreach(x; 0 .. GROWBY * 3 + GROWBY/2) {
        assert(f.next() == x);
    }
    assert(f.length == 0);
}

version(unittest) {
    void main() { }
}
```
Re: "in" operator gives a pointer result from a test against an Associative Array?
On Friday, 10 May 2024 at 16:33:53 UTC, Nick Treleaven wrote: Arrays evaluate to true in boolean conditions if their `.ptr` field is non-null. This is bug-prone and I hope we can remove this in the next edition. ... A string literal's `.ptr` field is always non-null, because it is null-terminated. Thank you! Andy
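A small check of both claims above (my own snippet): an empty string literal has a non-null `.ptr`, while a never-allocated slice has a null one.

```d
void main() {
    string s = "";
    // "" is backed by a null-terminated literal, so its .ptr is non-null:
    // the string tests as "true" even though its length is 0.
    assert(s.ptr !is null);
    assert(s.length == 0);

    int[] a;   // never allocated: .ptr is null, so it tests as false
    assert(a.ptr is null);
}
```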
Re: "in" operator gives a pointer result from a test against an Associative Array?
On Friday, 10 May 2024 at 03:07:43 UTC, Steven Schveighoffer wrote: Yes, we say that a type has "truthiness" if it can be used in a condition (`while`, `if`, `assert`, etc). So if I may ask for one more small clarification... WRT "truthiness", I've observed that empty arrays are treated as false, non-empty as true. However, although I thought a string was basically an immutable array of characters, "" is treated as true, not false? Thanks again, Andy
Re: "in" operator gives a pointer result from a test against an Associative Array?
On Friday, 10 May 2024 at 00:40:01 UTC, Meta wrote: Yes. The reason for this is that it avoids having to essentially do the same check twice. If `in` returned a bool instead of a pointer, after checking for whether the element exists (which requires searching for the element in the associative array), you'd then have to actually *get* it from the array, which would require searching again. Returning a pointer to the element if it exists (or `null` if it doesn't) cuts this down to 1 operation.

Looking at Programming in D section 28.5, I'm guessing that pointer-versus-null is treated as the appropriate boolean value when consumed by an "if" test. So that example is getting a pointer to a string, or null, but it looks exactly the same as if it had directly gotten a bool. Thank you! Andy
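The idiomatic way to consume that pointer is to bind it right in the `if` condition; a small sketch (my own example):

```d
void main() {
    string[int] errs = [404: "not found"];

    // One lookup both answers "is it there?" and hands back the element
    if (auto p = 404 in errs) {
        assert(*p == "not found");
    } else {
        assert(false, "404 should be present");
    }
    assert((500 in errs) is null);
}
```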
"in" operator gives a pointer result from a test against an Associative Array?
I'm getting the errors below for this bit of source (trimmed from the bigger code). I switched to `this.members.get(e, false)` and that works fine, but I'm still curious:

```
tst7.d(6): Error: cannot implicitly convert expression `e in this.members` of type `bool*` to `bool`
tst7.d(15): Error: template instance `tst7.Foo!uint` error instantiating
```

```
struct Foo(T) {
    bool[T] members;
    bool has(T e) {
        return (e in this.members);
    }
}

void main() {
    import std.stdio : writeln;

    auto t = Foo!uint();
    writeln(t.has(123));
}
```
Re: TIL: statically initializing an Associative Array
On Tuesday, 7 May 2024 at 01:14:24 UTC, Steven Schveighoffer wrote: On Tuesday, 7 May 2024 at 00:10:27 UTC, Andy Valencia wrote: I had a set of default error messages to go with error code numbers, and did something along the lines of: string[uint] error_text = [ 400: "A message", 401: "A different message" ]; and got "expression is not a constant" ... This was fixed [in 2.106.0](https://dlang.org/changelog/2.106.0.html#dmd.static-assoc-array) please upgrade your compiler. I'm using ldc2 from Debian stable; great news that it's fixed as of late 2023. I'll probably live with my workaround, but it's good to know that it's a bug which has been resolved. Thank you! Andy
TIL: statically initializing an Associative Array
I had a set of default error messages to go with error code numbers, and did something along the lines of:

```
string[uint] error_text = [
    400: "A message",
    401: "A different message"
];
```

and got "expression is not a constant". I eventually found this discussion: https://issues.dlang.org/show_bug.cgi?id=6238 I understand that it's problematic, but how about a message which makes it clearer that compile-time initialization of global AAs is not supported? Because it cost me about a half hour trying to figure out what I was doing wrong. (My workaround was to initialize the data structure once during app startup.)
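A sketch of that startup-time workaround using a module constructor (my own formulation of the approach described above):

```d
immutable string[uint] error_text;

// A module constructor runs exactly once at program startup, before
// main(), which sidesteps the static-initializer restriction.
shared static this() {
    error_text = [
        400: "A message",
        401: "A different message",
    ];
}

void main() {
    assert(error_text[400] == "A message");
    assert(error_text[401] == "A different message");
}
```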
Re: Challenge Tuples
On Friday, 26 April 2024 at 13:25:34 UTC, Salih Dincer wrote: You have a 5-item data tuples as Tuple(1, 2, 3, [1, 3], 5) and implement the sum (total = 15) with the least codes using the sum() function of the language you are coding...

My Python solution (function named dosum to avoid collision with Python's built-in sum):

```
def dosum(itm):
    if isinstance(itm, (int, float)):
        return itm
    return sum(dosum(_i) for _i in itm)

print(dosum([1, 2, 3, [1, 3], 5]))
```
Re: Making one struct work in place of another for function calls.
On Wednesday, 17 April 2024 at 03:13:46 UTC, Liam McGillivray wrote: Is there a way I can replace "`TypeB`" in the function parameters with another symbol, and then define that symbol to accept `TypeB` as an argument, but also accept `TypeA` which would get converted to `TypeB` using a function? I'm willing to make a function template if it's rather simple.

Of course, if these were classes, this would be classic inheritance and polymorphism. It would be trivial to subclass the library's version and still have it accepted by things which knew how to use the parent class. Or, if the library specified an interface, you could again use polymorphism. The closest I got was using function overloads, attached. Andy

```
import std.stdio : writeln;

struct Foo {
    int x;
    void doit() { writeln(this.x); }
}

struct Bar {
    int y;
    // No doit()
}

void myop(Foo f) { f.doit(); }
void myop(Bar b) {
    auto f = Foo(b.y);
    f.doit();
}

void main() {
    auto b = Bar(3);
    b.myop();
}
```
Re: mmap file performance
On Monday, 15 April 2024 at 08:05:25 UTC, Patrick Schluter wrote: The setup of a memory mapped file is relatively costly. For smaller files it is a net loss and read/write beats it hands down. Interestingly, this performance deficit is present even when run against the largest conveniently available file on my system--libQt6WebEngineCore.so.6.4.2 at 148 megs. But since this reproduces in its C counterpart, it is not at all a reflection of D. As you say, truly random access might play to mmap's strengths. My real point is that, whichever API I use, coding in D was far less tedious; I like the resulting code, and it showed no meaningful performance cost.
Re: mmap file performance
On Thursday, 11 April 2024 at 14:54:36 UTC, Steven Schveighoffer wrote: For a repeatable comparison, you should provide the code which does 1MB reads. With pleasure:

```
import std.stdio : writeln, File, stderr;

const uint BUFSIZE = 1024*1024;

private uint countnl(File f) {
    uint res = 0;
    char[BUFSIZE] buf;

    while (!f.eof) {
        auto sl = f.rawRead(buf);
        foreach (c; sl) {
            if (c == '\n') {
                res += 1;
            }
        }
    }
    return res;
}

private uint procfile(in string fn) {
    import std.exception : ErrnoException;
    File f;

    try {
        f = File(fn, "r");
    } catch (ErrnoException e) {
        stderr.writeln("Can't open: ", fn);
        return 0;
    }
    uint res = countnl(f);
    f.close();
    return res;
}

void main(in string[] argv) {
    foreach (fn; argv[1 .. $]) {
        uint res;

        res = procfile(fn);
        writeln(fn, ": ", res);
    }
}
```
mmap file performance
I wrote a "count newlines" based on mapped files. It used about twice the CPU of the version which just read 1 meg at a time. I thought something was amiss (needless slice indirection or something), so I wrote the code in C. It had the same CPU usage as the D version. So... mapped files, not so much. Not D's fault. And writing it in C made me realize how much easier it is to code in D!

The D version:

```
import std.stdio : writeln;
import std.mmfile : MmFile;

const uint CHUNKSZ = 65536;

size_t countnl(ref shared char[] data) {
    size_t res = 0;

    foreach (c; data) {
        if (c == '\n') {
            res += 1;
        }
    }
    return res;
}

void usage(in string progname) {
    import core.stdc.stdlib : exit;
    import std.stdio : stderr;

    stderr.writeln("Usage is: ", progname, " <file> ...");
    exit(1);
}

public:

void main(string[] argv) {
    if (argv.length < 2) {
        usage(argv[0]);
    }
    foreach (mn; argv[1 .. $]) {
        auto mf = new MmFile(mn);
        auto data = cast(shared char[]) mf.opSlice();
        size_t res;

        res = countnl(data);
        writeln(mn, ": ", res);
    }
}
```

And the C one (no performance gain over D):

```
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/mman.h>

static unsigned long
countnl(int fd, char *nm)
{
    char *buf, *p;
    struct stat st;
    unsigned int cnt;
    unsigned long res;

    if (fstat(fd, &st) < 0) {
        perror(nm);
        return(0);
    }
    cnt = st.st_size;
    buf = mmap(0, cnt, PROT_READ, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) {
        perror(nm);
        return(0);
    }
    res = 0L;
    for (p = buf; cnt; cnt -= 1) {
        if (*p++ == '\n') {
            res += 1L;
        }
    }
    munmap(buf, st.st_size);
    return(res);
}

int
main(int argc, char **argv)
{
    int x;

    for (x = 1; x < argc; ++x) {
        unsigned long res;
        char *nm = argv[x];
        int fd = open(nm, O_RDONLY);

        if (fd < 0) {
            perror(nm);
            continue;
        }
        res = countnl(fd, nm);
        close(fd);
        printf("%s: %lu\n", nm, res);
    }
    return 0;
}
```
Re: Why does Nullable implicitly casts when assigning a variable but not when returning from a function?
On Wednesday, 10 April 2024 at 20:41:56 UTC, Lettever wrote:

```
import std;
Nullable!int func() => 3;
void main() {
    Nullable!int a = 3; //works fine
    Nullable!int b = func(); //does not compile
}
```

Why make func() Nullable? It just wants to give you an int, right? Making it a function returning an int fixes this. Andy
Opinions on iterating a struct to absorb the decoding of a CSV?
I wanted a lightweight and simpler CSV decoder. I won't post the whole thing, but basically you instantiate one as:

```
struct Whatever { ... }
...
f = File("path.csv", "r");
auto c = CSVreader!Whatever(f);
foreach (rec; c) { ...
```

CSVreader is, of course, templated:

```
struct CSVreader(T) { ... }
```

and the innermost bit of CSVreader is:

```
auto t = T();
foreach (i, ref val; t.tupleof) {
    static if (is(typeof(val) == int)) {
        val = this.get_int();
    } else {
        val = this.get_str();
    }
}
return t;
```

So you cue off the type of the struct field, decode the next CSV field, and put the value into the new struct. Is there a cleaner way to do this? This _does_ work, and gives me very compact code.
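A runnable distillation of that tupleof dispatch (the `Rec` struct and the split-on-comma parsing are my own stand-ins for the real get_int/get_str helpers):

```d
import std.conv : to;
import std.string : split;

struct Rec { int id; string name; }

// Fill a struct's fields from one CSV line, dispatching on each
// field's type via tupleof -- the same pattern as the reader above.
T fromCsv(T)(string line) {
    auto parts = line.split(",");
    auto t = T();
    foreach (i, ref val; t.tupleof) {
        static if (is(typeof(val) == int))
            val = parts[i].to!int;
        else
            val = parts[i];
    }
    return t;
}

void main() {
    auto r = fromCsv!Rec("7,Andy");
    assert(r.id == 7 && r.name == "Andy");
}
```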
Re: varargs when they're not all the same type?
On Friday, 15 March 2024 at 00:11:11 UTC, Andy Valencia wrote: (varargs & friends) Which statement leads me to section 77.2 of "Programming in D", and now I am deep into the mechanisms behind what you have very kindly shared. Thank you once more. As some fruits of my labors here, below is a link to a "fmt" module which does C-style formatting. It supports int/long signed/unsigned, right/left padding and zero padding, plus strings (w. padding). It's memory and type safe; I ended up using unions to tabulate the arguments as I need to access them as an array (rather than walking them--I'm walking the format string instead). It adds 6k to an executable, which means dlang will work out fine for all of my smaller scripting needs in the future. Calls look like: auto s = fmt("%d %u - %20s", 123, 456, "Hi, Mom"); https://sources.vsta.org:7100/dlang/file?name=fmt.d&ci=tip Comments are welcome! I'd post here, but it seems a little long for that? Andy
Re: varargs when they're not all the same type?
On Thursday, 14 March 2024 at 23:13:51 UTC, Basile B. wrote: ... However explicit instantiation can take whatever is known at compile time, such as constant expressions or even certain static variables. So that is rather called an `alias sequence` in D. Which statement leads me to section 77.2 of "Programming in D", and now I am deep into the mechanisms behind what you have very kindly shared. Thank you once more. Andy
Re: varargs when they're not all the same type?
On Thursday, 14 March 2024 at 18:05:59 UTC, H. S. Teoh wrote: ... The best way to do multi-type varags in D is to use templates: import std; void myFunc(Args...)(Args args) { Thank you. The first parenthetical list is of types, is it not? I can't find anywhere which says what "type" is inferred for "Args..."? (gdb pretends like "arg" is not a known symbol.) Is it basically a tuple of the suitable type? Andy
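One way to see what's inferred is a small experiment (my own sketch, not from the thread) that records each argument's static type; `Args...` is a compile-time alias sequence, so each `arg` keeps its own type and the foreach is unrolled during compilation:

```d
// Build a string naming the static type of each argument in turn.
string describe(Args...)(Args args) {
    string s;
    foreach (arg; args) {
        s ~= typeof(arg).stringof ~ " ";
    }
    return s;
}

void main() {
    assert(describe(1, 2, "hi") == "int int string ");
}
```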
varargs when they're not all the same type?
Can somebody give me a starting point for understanding variadic functions? I know that we can declare them `int[] args ...` and pick through whatever the caller provided. But what if the caller wants to pass two ints and a _string_? That declaration won't permit it. I've looked into the formatter, and also the varargs implementation, but it's a bit of a trip through a funhouse full of mirrors. Can somebody describe the basic language approach to non-uniform varargs, and then I can take it the rest of the way by reading the library. Thanks in advance! Andy
Re: static functions?
On Monday, 11 March 2024 at 16:25:13 UTC, Jonathan M Davis wrote: ... But what exactly static means varies based on the context. Thank you for the list! But none of those appear to apply to a function defined in the outermost scope of the module. Is static accepted here--but has no actual effect? I will look at the privacy controls--thanks again. Andy
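For module-level hiding, the C-style file-level `static` maps to D's `private` at module scope; a sketch under that assumption (names are my own):

```d
// Imagine this lives in its own file, tst1.d: `private` at module scope
// is D's analogue of a C file-level static function.
private int hidden() { return 41; }   // invisible to importing modules

int do_op() { return hidden() + 1; }  // the module's public face

void main() {
    // Within the same module, private symbols remain accessible.
    assert(do_op() == 42);
}
```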
static functions?
Leveraging my knowledge of C, I assumed a "static" function would be hidden outside of its own source file. I can't find any statement about the semantics of a static function in the documentation, and in practice (ldc2 on Linux) it doesn't hide the function?

file tst.d:

```
import std.stdio : writeln;
import tst1;

void main() {
    writeln(do_op());
    writeln(do_op());
}
```

and file tst1.d:

```
static int do_op() {
    static int x;

    x += 1;
    return(x);
}
```
Re: Question on shared memory concurrency
On Monday, 4 March 2024 at 18:08:52 UTC, Andy Valencia wrote: For any other newbie dlang voyagers, here's a version which works as expected using the system memory allocator. On my little i7 I get 1.48 secs wallclock with 5.26 CPU seconds. ...

Using a technique I found in a unit test in std/concurrency.d, I managed to share process memory without GC. It counted up to 1,000,000,000 on my low-end i7 in:

real    0m15.666s
user    0m59.913s
sys     0m0.004s

```
import core.atomic : atomicFetchAdd;
import std.concurrency : spawn, send, receiveOnly, ownerTid;
import core.thread : Thread;

const uint NSWEPT = 1_000_000_000;
const uint NCPU = 4;

void doadd() {
    auto val = receiveOnly!(shared(int)[]);

    for (uint count = 0; count < NSWEPT/NCPU; ++count) {
        atomicFetchAdd(val[0], 1);
    }
    ownerTid.send(true);
}

void main() {
    static shared int[] val = new shared(int)[1];

    // Parallel workers
    for (int x = 0; x < NCPU; ++x) {
        auto tid = spawn(&doadd);
        tid.send(val);
    }

    // Pick up all completed workers
    for (int x = 0; x < NCPU; ++x) {
        receiveOnly!(bool);
    }
    assert(val[0] == NSWEPT);
}
```
Re: Question on shared memory concurrency
On Monday, 4 March 2024 at 16:02:50 UTC, Andy Valencia wrote: On Monday, 4 March 2024 at 03:42:48 UTC, Richard (Rikki) Andrew Cattermole wrote: ... I still hope to be able to share memory between spawned threads, and if it isn't a shared ref of a shared variable, then what would it be? Do I have to use the memory allocator?

For any other newbie dlang voyagers, here's a version which works as expected using the system memory allocator. On my little i7 I get 1.48 secs wallclock with 5.26 CPU seconds.

```
import core.atomic : atomicFetchAdd;
import std.concurrency : spawn;
import core.time : msecs;
import core.thread : Thread;
import core.memory : GC;

const uint NSWEPT = 100_000_000;
const uint NCPU = 4;

void doadd(shared uint *buf) {
    for (uint count = 0; count < NSWEPT/NCPU; ++count) {
        atomicFetchAdd(buf[0], 1);
    }
}

void main() {
    shared uint *buf =
        cast(shared uint *)GC.calloc(uint.sizeof * 1, GC.BlkAttr.NO_SCAN);

    for (uint x = 0; x < NCPU-1; ++x) {
        spawn(&doadd, buf);
    }
    doadd(buf);
    while (buf[0] != NSWEPT) {
        Thread.sleep(1.msecs);
    }
}
```
Re: Question on shared memory concurrency
On Monday, 4 March 2024 at 03:42:48 UTC, Richard (Rikki) Andrew Cattermole wrote: A way to do this without spawning threads manually: ...

Thank you! Of course, a thread dispatch per atomic increment is going to be s.l.o.w., so it's not surprising you had to trim the iterations. But I still hope to be able to share memory between spawned threads, and if it isn't a shared ref of a shared variable, then what would it be? Do I have to use the memory allocator?
Question on shared memory concurrency
I tried a shared memory parallel increment. Yes, it's basically a cache line thrasher, but I wanted to see what's involved in shared memory programming. Even though I tried to follow all the rules to make truly shared memory (not thread-local), it appears I failed, as the wait loop at the end only sees its own local 250 million increments?

```
import core.atomic : atomicFetchAdd;
import std.stdio : writeln;
import std.concurrency : spawn;
import core.time : msecs;
import core.thread : Thread;

const uint NSWEPT = 1_000_000_000;
const uint NCPU = 4;

void doadd(ref shared(uint) val) {
    for (uint count = 0; count < NSWEPT/NCPU; ++count) {
        atomicFetchAdd(val, 1);
    }
}

void main() {
    shared(uint) val = 0;

    for (int x = 0; x < NCPU-1; ++x) {
        spawn(&doadd, val);
    }
    doadd(val);
    while (val != NSWEPT) {
        Thread.sleep(1.msecs);
    }
}
```