non-block reading from pipe stdout
Hello. I run a program through std.process.pipeShell and want to read from its stdout in a loop. How do I do this non-blocking? I tried

    int fd = p.stdout.fileno;
    int flags = fcntl(fd, F_GETFL, 0);
    flags |= O_NONBLOCK;
    fcntl(fd, F_SETFL, flags);

but I get the error "Resource temporarily unavailable".
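A minimal sketch, assuming Posix. "Resource temporarily unavailable" is errno EAGAIN, which is exactly what a non-blocking read reports when no data has arrived yet, so the fcntl calls themselves likely succeeded; the catch is that std.stdio.File's buffered reads treat EAGAIN as a failure, so read the raw descriptor yourself and treat EAGAIN as "try again later":

```d
import std.process : pipeShell, wait;
import core.sys.posix.fcntl : fcntl, F_GETFL, F_SETFL, O_NONBLOCK;
import core.sys.posix.unistd : read;
import core.stdc.errno : errno, EAGAIN;
import core.thread : Thread;
import core.time : msecs;

/// Returns: true if the descriptor was switched to non-blocking mode.
bool setNonBlocking(int fd)
{
    immutable flags = fcntl(fd, F_GETFL, 0);
    return flags != -1 && fcntl(fd, F_SETFL, flags | O_NONBLOCK) != -1;
}

void main()
{
    auto p = pipeShell("echo hello; sleep 1; echo world");
    immutable fd = p.stdout.fileno;
    assert(setNonBlocking(fd));

    char[4096] buf;
    char[] collected;
    for (;;)
    {
        immutable n = read(fd, buf.ptr, buf.length);
        if (n > 0)
            collected ~= buf[0 .. n];   // got n bytes
        else if (n == 0)
            break;                      // writer closed the pipe: child is done
        else if (errno == EAGAIN)
            Thread.sleep(10.msecs);     // no data *yet* -- not an error
        else
            break;                      // a real I/O error
    }
    wait(p.pid);
}
```

The EAGAIN branch is where you would do other work between polls; a select/poll call on the descriptor would avoid the sleep entirely.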
Static introspection of suitable hash function depending on type of hash key
Does anyone have any good reads on the subject of statically choosing a suitable hash function depending on the type (and in turn the size) of the key? I wonder because I'm currently experimenting with a hash-set implementation at https://github.com/nordlow/phobos-next/blob/40e2973b74d58470a13a5a6ee0ed9c9a42c6dea1/src/hashset.d and a benchmark of different hash functions for it at https://github.com/nordlow/phobos-next/blob/b8942dc569921b4dadfddbdcdac3a2bb0834a9e0/src/benchmarkAppend.d

I'm measuring significant differences in speed depending on the choice of hash function:

Inserted 100 integers in 49 ms, 65 μs, and 9 hnsecs, Checked 100 integers in 48 ms, 562 μs, and 2 hnsecs for HashSet!(uint, null, typeidHashOf)
Inserted 100 integers in 51 ms, 897 μs, and 5 hnsecs, Checked 100 integers in 47 ms, 108 μs, and 9 hnsecs for HashSet!(uint, null, hashOf)
Inserted 100 integers in 60 ms, 641 μs, and 5 hnsecs, Checked 100 integers in 70 ms, 664 μs, and 2 hnsecs for HashSet!(uint, null, MurmurHash3!(128u, 64u))
Inserted 100 integers in 34 ms, 450 μs, and 5 hnsecs, Checked 100 integers in 27 ms, 738 μs, and 8 hnsecs for HashSet!(uint, null, FNV!(64LU, true))
Inserted 100 integers in 97 ms, 400 μs, and 6 hnsecs, Checked 100 integers in 104 ms, 33 μs, and 1 hnsec for HashSet!(uint, null, XXHash64)
integers in 39 ms, 304 μs, and 3 hnsecs for bool[uint]

using LDC 1.4.0. That's a factor of 2 for insert() and a factor of 4 for contains(). The reason is partly that many high-performance hashes, such as XXHash64, have a significant setup overhead (tens of clock cycles) because of their super-scalar nature, but are fast (~1 clock cycle per byte) for large keys. The test is dumb for now and is only constructed to benchmark the hash function.

According to a comment at https://stackoverflow.com/questions/46533112/static-introspection-of-suitable-hash-function-depending-on-type-of-hash-key "the C++ standard library already includes hashes for many basic types? (I ask this because if you don't have a special distribution I would assume the standard committee has already made good choices. My answer would be to use those.)"

Does Phobos have anything similar today, or planned?
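For the size-based dispatch itself, here is a hypothetical sketch (none of these names are Phobos API, and the word-size cutoff and golden-ratio constant are assumptions to be tuned against a benchmark like the one above): a one-multiply hash for word-sized keys, where an XXHash64-style setup cost would dominate, and a wide hash only for larger keys:

```d
import std.traits : isIntegral;
import std.digest.murmurhash : MurmurHash3;

/// One multiply (Fibonacci hashing): essentially zero setup cost,
/// adequate for keys that fit in a machine word.
size_t smallKeyHash(T)(in T key)
if (isIntegral!T && T.sizeof <= ulong.sizeof)
{
    return cast(size_t)(cast(ulong) key * 0x9E3779B97F4A7C15UL);
}

/// A wide hash whose tens-of-cycles startup is amortised over the key.
size_t bigKeyHash(T)(in T key) @trusted
{
    MurmurHash3!(128, 64) h;
    h.put((cast(const(ubyte)*) &key)[0 .. T.sizeof]);
    auto digest = h.finish();         // ubyte[16]
    return *cast(size_t*) digest.ptr; // fold to a word-sized bucket index
}

/// Compile-time dispatch on the key's static size.
template preferredHasher(T)
{
    static if (isIntegral!T && T.sizeof <= ulong.sizeof)
        alias preferredHasher = smallKeyHash!T;
    else
        alias preferredHasher = bigKeyHash!T;
}

unittest
{
    // uint goes through the cheap path...
    assert(preferredHasher!uint(42u) == smallKeyHash(42u));
    // ...while a 32-byte key goes through the wide hash.
    static struct Key { ulong[4] words; }
    Key k;
    cast(void) preferredHasher!Key(k);
}
```

This is only the dispatch skeleton; where the cutoff actually pays off is exactly what a benchmark like benchmarkAppend.d has to decide.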
Re: When to opCall instead of opIndex and opSlice?
On 10/2/17 2:31 PM, Per Nordlöw wrote:
> On Monday, 2 October 2017 at 18:14:24 UTC, Jacob Carlborg wrote:
>> On 2017-10-02 17:57, Nordlöw wrote:
>>> Is implementing opCall(size_t) for structures such as array containers that already define opIndex and opSlice deprecated? I can't find any documentation on when opCall should be defined to enable foreach (isIterable).
>>
>> opCall is not related to foreach. It's used to overload the call operator, i.e. ().
>>
>>     struct Foo { void opCall() {} }
>>     Foo foo;
>>     foo();
>>
>> Are you thinking of opApply [1]?
>>
>> [1] https://dlang.org/spec/statement.html#foreach_over_struct_and_classes
>
> Ahh, yes, of course.
>
> It seems like defining opIndex and opSlice is enough in the array container case. Why do we have opApply as well? Non-random access?

First, it should be front, popFront, and empty, not opIndex and opSlice. Second, opApply has existed forever (even in D1); ranges are more recent. Third, opApply has some characteristics that are difficult to implement via ranges.

-Steve
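A minimal sketch of what Steve means by front, popFront, and empty: foreach works on the container once slicing it yields an input range (the names here are illustrative, not a full container implementation):

```d
struct IntArray
{
    private int[] data;

    // The input-range primitives foreach actually consumes.
    static struct Range
    {
        private int[] items;
        @property bool empty() const { return items.length == 0; }
        @property int front() const { return items[0]; }
        void popFront() { items = items[1 .. $]; }
    }

    // foreach (x; a) tries a[] and iterates the resulting range.
    Range opSlice() { return Range(data); }
}

void main()
{
    auto a = IntArray([1, 2, 3]);
    int sum;
    foreach (x; a)   // slices `a`, then drives front/popFront/empty
        sum += x;
    assert(sum == 6);
}
```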
Re: When to opCall instead of opIndex and opSlice?
On Monday, October 02, 2017 18:31:23 Per Nordlöw via Digitalmars-d-learn wrote:
> On Monday, 2 October 2017 at 18:14:24 UTC, Jacob Carlborg wrote:
>> On 2017-10-02 17:57, Nordlöw wrote:
>>> Is implementing opCall(size_t) for structures such as array
>>> containers that already define opIndex and opSlice deprecated?
>>>
>>> I can't find any documentation on when opCall should be
>>> defined to enable foreach (isIterable).
>>
>> opCall is not related to foreach. It's used to overload the
>> call operator, i.e. ().
>>
>>     struct Foo
>>     {
>>         void opCall() {}
>>     }
>>
>>     Foo foo;
>>     foo();
>>
>> Are you thinking of opApply [1]?
>>
>> [1] https://dlang.org/spec/statement.html#foreach_over_struct_and_classes
>
> Ahh, yes, of course.
>
> It seems like defining opIndex and opSlice is enough in the array
> container case. Why do we have opApply as well? Non-random access?

opApply was the original way to implement the ability to use foreach with user-defined types. Later, when ranges were added to the language, foreach was made to try slicing the type, and if that resulted in a range, then that was used with foreach. So, opSlice made it possible to use foreach (but only if the result was a range), and later opIndex was altered to also do what opSlice does.

opApply is kept around in part to avoid breaking any code and in part because there are cases where implementing foreach that way rather than with a range is more efficient. IMHO, it also makes more sense in cases where you're dealing with a transitive front - e.g. if std.stdio's byLine were implemented via opApply and you _had_ to use byLineCopy when you wanted a range, then we'd have avoided a lot of problems with byLine - though at least we do now have byLineCopy instead of just byLine, even if byLine still returns a range.

Random access has nothing to do with foreach in any of those cases, because random access isn't ever used with foreach and user-defined types. The closest you get is when you use an index with foreach, and that works with arrays (but not ranges) and with opApply. So, it's probably using random access on arrays when iterating over them, but it doesn't with ranges in general, and it doesn't with opApply unless the underlying implementation happens to use random access inside of opApply; opApply itself isn't designed to use random access any more than front and popFront are.

- Jonathan M Davis
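A sketch of the opApply side of this: the index works because opApply decides what to hand to the foreach body, not because anything is randomly accessed (the names here are illustrative):

```d
struct Lines
{
    string[] lines;

    // The delegate *is* the foreach body; a non-zero return from it
    // propagates break/continue/return out of the loop.
    int opApply(scope int delegate(size_t, string) dg)
    {
        foreach (i, line; lines)
            if (auto r = dg(i, line))
                return r;
        return 0;
    }
}

void main()
{
    auto l = Lines(["alpha", "beta"]);
    string[] seen;
    foreach (i, line; l)   // the index comes straight from opApply
        seen ~= line;
    assert(seen == ["alpha", "beta"]);
}
```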
Re: When to opCall instead of opIndex and opSlice?
On Monday, 2 October 2017 at 18:14:24 UTC, Jacob Carlborg wrote:
> On 2017-10-02 17:57, Nordlöw wrote:
>> Is implementing opCall(size_t) for structures such as array containers that already define opIndex and opSlice deprecated? I can't find any documentation on when opCall should be defined to enable foreach (isIterable).
>
> opCall is not related to foreach. It's used to overload the call operator, i.e. ().
>
>     struct Foo { void opCall() {} }
>     Foo foo;
>     foo();
>
> Are you thinking of opApply [1]?
>
> [1] https://dlang.org/spec/statement.html#foreach_over_struct_and_classes

Ahh, yes, of course.

It seems like defining opIndex and opSlice is enough in the array container case. Why do we have opApply as well? Non-random access?
Re: When to opCall instead of opIndex and opSlice?
On 2017-10-02 17:57, Nordlöw wrote:
> Is implementing opCall(size_t) for structures such as array containers that already define opIndex and opSlice deprecated? I can't find any documentation on when opCall should be defined to enable foreach (isIterable).

opCall is not related to foreach. It's used to overload the call operator, i.e. ().

    struct Foo
    {
        void opCall() {}
    }

    Foo foo;
    foo();

Are you thinking of opApply [1]?

[1] https://dlang.org/spec/statement.html#foreach_over_struct_and_classes

-- 
/Jacob Carlborg
When to opCall instead of opIndex and opSlice?
Is implementing opCall(size_t) for structures such as array containers that already define opIndex and opSlice deprecated? I can't find any documentation on when opCall should be defined to enable foreach (isIterable).
Re: Struct bug?
On Monday, 2 October 2017 at 09:34:29 UTC, Andrea Fontana wrote:
> Anyway: you can't put a default constructor on a struct

True. In which case you should either @disable this() (which presents its own set of issues) or hide b behind a @property function, something like:

    struct S
    {
        B _b;
        @property B b()
        {
            if (_b is null) _b = new B();
            return _b;
        }
    }

This exact same issue also crops up for classes, since typeid(T).initializer is simply blitted over the newly allocated memory. At least for classes we could change the language such that:

    class C
    {
        int[] p = new int[5];
    }

is sugar for:

    class C
    {
        int[] p;
        this() { p = new int[5]; }
    }

No such solution exists for structs, since they don't have default constructors.

-- 
Biotronic
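A runnable version of that lazy-initialisation workaround (note the property must return _b; returning b would recurse into the property itself), assuming B is a plain class with a data member added to make the effect visible; each struct instance now gets its own B:

```d
class B { int[] data; }

struct S
{
    private B _b;

    // Allocates B on first access instead of at compile time,
    // so the reference is per-instance, not shared.
    @property B b()
    {
        if (_b is null) _b = new B;
        return _b;   // not `return b;` -- that would recurse
    }
}

void main()
{
    S s1, s2;
    s1.b.data ~= 1;
    s2.b.data ~= 2;
    assert(s1.b !is s2.b);     // distinct instances
    assert(s1.b.data == [1]);
    assert(s2.b.data == [2]);  // no cross-talk between the structs
}
```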
Re: Struct bug?
On Monday, 2 October 2017 at 09:08:59 UTC, Biotronic wrote:
> Not knowing what you're attempting to do, I'm not sure how to fix your problem. But if what I've described above does indeed cover it, initializing b in the constructor is the way to get it to work.
>
> -- 
> Biotronic

Obviously the real example is quite different and larger. Anyway: you can't put a default constructor on a struct
Re: Struct bug?
On Monday, 2 October 2017 at 08:47:47 UTC, Andrea Fontana wrote:
> Why doesn't this code write two identical lines? https://dpaste.dzfl.pl/e99aad315a2a
> Andrea

A reduced example of where it goes wrong:

    class B {}

    struct A
    {
        B b = new B;
    }

    unittest
    {
        A a1, a2;
        assert(a1 == a2);
    }

In other words, when you initialize the class reference in your struct, it has to be a value that's known at compile time. So the compiler creates a single instance of B, and every instance of A points to it.

So this line:

    A a = A(A(1), 2);

first appends 1 to b.data, then appends 2 to b.data, and it's the same b in both cases.

Not knowing what you're attempting to do, I'm not sure how to fix your problem. But if what I've described above does indeed cover it, initializing b in the constructor is the way to get it to work.

-- 
Biotronic
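A runnable form of the reduced example, extended with a data member to make the sharing visible: the `new B` in the field initializer is evaluated once at compile time, so every default-constructed A refers to that single instance:

```d
class B { int[] data; }

struct A
{
    // Evaluated at compile time: one B, blitted into every A.init.
    B b = new B;
}

void main()
{
    A a1, a2;
    assert(a1.b is a2.b);      // one shared B, not two
    a1.b.data ~= 1;
    assert(a2.b.data == [1]);  // the mutation shows through the other struct
}
```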
Struct bug?
Why doesn't this code write two identical lines? https://dpaste.dzfl.pl/e99aad315a2a

Andrea