Re: Post increment and decrement
On Wednesday, 11 March 2015 at 17:23:15 UTC, welkam wrote: Observation Nr. 1: People prefer to write var++ instead of ++var. The root of your reasoning stems from this "observation", which I argue is wrong. The recommendation has always been to choose ++var, unless you have a reason to choose var++. Because of all this, why not make only one increment/decrement operator and have post increment/decrement be called by template name, since it is a template? Or, instead of creating a new semantic, simply use the existing one. It really isn't that complicated.
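For reference, D already gets by with a single user-defined operator: a type defines only pre-increment via opUnary, and the compiler rewrites the postfix form. A minimal sketch (the Counter type is hypothetical, just for illustration):

```d
struct Counter {
    int value;

    // Only pre-increment is defined. For user types, the compiler
    // rewrites `c++` used as an expression into roughly
    // `(auto tmp = c, ++c, tmp)`, so this one operator covers both forms.
    ref Counter opUnary(string op : "++")() {
        ++value;
        return this;
    }
}

void main() {
    Counter c;
    auto old = c++; // lowered to the pre-increment rewrite
    assert(old.value == 0);
    assert(c.value == 1);
}
```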
Re: Two suggestions for safe refcounting
On Friday, 6 March 2015 at 07:46:13 UTC, Zach the Mystic wrote: The second, harder problem, is when you take a reference to a subcomponent of an RC'd type, e.g. an individual E of an RCArray of E: struct RCArray(E) { E[] array; int* count; ... } auto x = RCArray([E()]); E* t = &x[0]; But taking that address is unsafe to begin with. So arguably, this isn't that big of a problem. Your first dual reference issue seems much more problematic, as there are always cases the compiler can't catch.
Re: A Refcounted Array Type
On Thursday, 5 March 2015 at 16:19:09 UTC, Marc Schütz wrote: On Thursday, 5 March 2015 at 15:20:47 UTC, monarch_dodra wrote: On Monday, 23 February 2015 at 22:15:54 UTC, Walter Bright wrote: private: E[] array; size_t start, end; int* count; What is the point of keeping start/end? Aren't those baked into the array slice? Not storing start/end means not having to do index arithmetic (minor) and reducing struct size (always nice). But more importantly, it allows implicit (and conditional) bounds checking (awesome), which actually runs regardless anyway. Or did I miss something? `GC.free()` needs a pointer to the start of the allocated block; it will not release memory if it gets an interior pointer. But as far as I can see, one pointer that stores the original address should be enough. Still, you shouldn't need "end", and bounds checking would "just work".
Re: A Refcounted Array Type
On Monday, 23 February 2015 at 22:15:54 UTC, Walter Bright wrote: private: E[] array; size_t start, end; int* count; What is the point of keeping start/end? Aren't those baked into the array slice? Not storing start/end means not having to do index arithmetic (minor) and reducing struct size (always nice). But more importantly, it allows implicit (and conditional) bounds checking (awesome), which actually runs regardless anyway. Or did I miss something?
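To illustrate the point, here is a sketch of what the struct could look like without start/end, relying on the slice itself for bounds and keeping only one extra pointer for freeing (a hypothetical layout, not the actual proposal):

```d
struct RCArray(E) {
    E[] array;   // start and length are baked into the slice itself
    void* base;  // original allocation address, kept only for freeing
    int* count;

    ref E opIndex(size_t i) {
        return array[i]; // slice indexing is bounds-checked by the language
    }

    @property size_t length() { return array.length; }
}

void main() {
    auto count = new int;
    *count = 1;
    auto a = RCArray!int([1, 2, 3], null, count);
    assert(a[1] == 2);
    assert(a.length == 3);
}
```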
Re: "Named parameter builder" pattern for template parameters
On Friday, 21 November 2014 at 14:46:00 UTC, ketmar via Digitalmars-d wrote: On Fri, 21 Nov 2014 13:39:42 + monarch_dodra via Digitalmars-d wrote: D has phenomenal metaprogramming possibilities, and I see more and more templates taking more and more parameters. So I thought to myself, why not have a template builder pattern? i did something similar in my iv.writer (ctfe format string parser). once i got about ten arguments to each template (and growing), i moved everything to a structure. struct WrData { ... fields ... auto set(string name, T) (in T value) if (__traits(hasMember, this, name)) { __traits(getMember, this, name) = value; return this; } } and then i can pass the struct around, changing only the fields i want, in "pseudo-functional" style: template MyTpl(WrData data) { ... MyTpl!(data.set!"field0"(value0).set!"field1"(value1)); } that was a real saver. but now i have to wait for GDC update, 'cause 2.065 frontend is not able to use structs as template args. Hm, using a standard struct combined with CTFE to achieve the same goal. Smart. BTW, if you use opDispatch, you could probably have it as: MyTpl!(data.set_field0(value0).set_field1(value1)); Which may or may not read nicer, depending on how you look at it. With proper template overloading, you should even be able to implement this as a backwards-compatible template.
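The opDispatch variant could be sketched roughly like this (the WrData fields here are hypothetical, not ketmar's actual code):

```d
struct WrData {
    int width = 0;
    char fill = ' ';

    // data.set_width(10) is rewritten by the compiler into
    // data.opDispatch!"set_width"(10), which assigns the named field.
    WrData opDispatch(string name, T)(T value)
        if (name.length > 4 && name[0 .. 4] == "set_"
            && __traits(hasMember, WrData, name[4 .. $]))
    {
        __traits(getMember, this, name[4 .. $]) = value;
        return this;
    }
}

void main() {
    WrData d;
    auto d2 = d.set_width(10).set_fill('*');
    assert(d2.width == 10);
    assert(d2.fill == '*');
}
```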
"Named parameter builder" pattern for template parameters
I trust everyone here knows about the "builder" pattern (http://en.wikipedia.org/wiki/Builder_pattern)? It can be very useful when the number of (optional) arguments in a function starts to run rampant, and you know the user only wants to set a subset of these. D has phenomenal metaprogramming possibilities, and I see more and more templates taking more and more parameters. So I thought to myself, why not have a template builder pattern? I was able to throw this together: http://dpaste.dzfl.pl/366d1fc22c9c Which allows things like: alias MyContainerType = ContainerBuilder!int //.setUseGC!??? //Don't care about GC, use default. .setScan!false .setSomeOtherThing!true .Type; I think this is hugely neat. I'm posting here both to share, and for "peer review feedback". I'm also wondering if there is prior "literature" about this, and if it's something we'd want more of in Phobos?
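Since paste links tend to rot, here is a minimal sketch of the idea (hypothetical parameter names, not the contents of the actual paste): each setter is an eponymous member template that aliases a new instantiation with one parameter overridden, which is what makes the chaining work.

```d
struct ContainerBuilder(E, bool scan = true, bool someOtherThing = false) {
    // Each .setXxx!b produces a fresh ContainerBuilder type with
    // exactly one template parameter changed.
    template setScan(bool b) {
        alias setScan = ContainerBuilder!(E, b, someOtherThing);
    }
    template setSomeOtherThing(bool b) {
        alias setSomeOtherThing = ContainerBuilder!(E, scan, b);
    }
    alias Type = E[]; // stand-in for the actual container type
}

alias MyContainerType = ContainerBuilder!int
    .setScan!false
    .setSomeOtherThing!true
    .Type;

void main() {
    static assert(is(MyContainerType == int[]));
}
```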
Re: New std.range submodule
On Wednesday, 19 November 2014 at 18:22:27 UTC, H. S. Teoh via Digitalmars-d wrote: On Wed, Nov 19, 2014 at 06:06:26PM +, Nick Treleaven via Digitalmars-d wrote: On 14/11/2014 21:52, David Nadlinger wrote: >On Friday, 14 November 2014 at 06:10:43 UTC, Rikki Cattermole >wrote: >>std.range.checks > >For this, std.range.constraints would also be perfectly fine. If it's not too late, can we change the name to std.range.traits? It seems better, as they can be used in 'static if', not just for template constraints. Plus it's shorter and consistent with std.traits naming. It's not too late until the next release. I prefer std.range.primitives, since it's not just traits, but also includes things like the range API for built-in arrays (.front, .empty, .popFront). T I think "constraints" should be called "traits", because that's what it actually is, and the top-level module is called "traits" (e.g.: isSomeString <==> isForwardRange). It makes little sense to me for the two to have different names. "popFront"/"moveFront" and friends I think belong in "primitives". That sounds nice. Also, every package std.range.* should publicly include std.range.constraints too. That's what container does, in that every subpackage also includes the "make" in std.container.utility.
Re: example of pointer usefulness in D
On Tuesday, 21 October 2014 at 12:22:54 UTC, edn wrote: Could someone provide me with examples showing the usefulness of pointers in the D language? They don't seem to be used as much as in C and C++. The only difference between C/C++ and D is that C uses pointers for both "pointer to object" and "pointer to array", whereas D has a "slice" object. C++ introduced "pass-by-ref" (which also exists in D), which has tended to reduce their (visible) use. Furthermore, the more modern the language, the more the "raw" pointers tend to be encapsulated in structures that manage them for you. So while you don't "see" them quite as much, they are still there, and fill exactly the same role.
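A small example of the point: the slice bundles the pointer and length together, but the raw pointer is still right there underneath.

```d
void main() {
    int[] arr = [1, 2, 3]; // a slice: pointer + length in one object
    int* p = arr.ptr;      // the raw pointer underneath, same role as in C
    assert(*p == 1);
    assert(arr.length == 3); // the slice knows its own bounds
    p++;                     // pointer arithmetic still works when needed
    assert(*p == 2);
}
```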
Re: OT: Minecraft death by GC
On Tuesday, 21 October 2014 at 09:07:04 UTC, ROOAR wrote: I could quote the entire post, but the bottom line is: this issue has nothing to do with the GC. Crappy code is crappy code. So your OP is just a pointless troll. This issue sure does seem to crop up in the GC world, wonder why. Oh well. Hurp, I wonder why GC issues only appear with applications that use a GC. Hurp-a-durp. Also, the issues of memory leaks and core dumps seem to only appear when you use a systems language. How crazy is that?
Re: GCC Undefined Behavior Sanitizer
On Sunday, 19 October 2014 at 09:56:44 UTC, Ola Fosheim Grøstad wrote: In C++ you should default to int and avoid uint unless you do bit manipulation according to the C++ designers. There are three reasons: speed, portability to new hardware and correctness. Speed: How so? Portability: One issue to keep in mind is that C works on *tons* of hardware. C allows hardware to follow either two's complement, or one's complement. This means that, at best, signed overflow can be implementation defined, but not defined by spec. Unfortunately, it appears C decided to outright go the undefined way. Correctness: IMO, I'm not even sure. Yeah, use int for numbers, but stick to size_t for indexing. I've seen too many bugs on x64 software when data becomes larger than 4G...
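A minimal illustration of the indexing issue (this assumes a 64-bit target, where size_t is 64 bits):

```d
void main() {
    // A length above 2^32 is perfectly representable in size_t...
    size_t bigLength = 5_000_000_000;
    // ...but an `int` index silently truncates it:
    int idx = cast(int) bigLength;
    assert(idx != bigLength); // the value no longer fits
}
```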
Re: GCC Undefined Behavior Sanitizer
On Saturday, 18 October 2014 at 23:10:15 UTC, Ola Fosheim Grøstad wrote: On Saturday, 18 October 2014 at 08:22:25 UTC, monarch_dodra wrote: Besides, the code uses x + 1, so the code is already in an undefined state. It's just as wrong as the "horrible code with UB" we were trying to avoid in the first place. So much for convincing me that it's a good idea... Not sure if you are saying that modulo-arithmetic as a default is a bad or good idea? The OP suggested that all overflows should be undefined behavior, and that you could "pre-emptively" check for overflow with the above code. The code provided itself overflowed, so it was also undefined. What I'm pointing out is that working with undefined-behavior overflow is exceptionally difficult; see below. In D (and C++ for uint) it is modulo-arithmetic, so it is defined as a circular type with a discontinuity, which makes reasoning about integers harder. What's interesting is that overflow is only defined for unsigned integers. Signed integer overflow is *undefined*, and GCC *will* optimize away any conditions that rely on it. One thing I am certain of is that making overflow *undefined* is *much* worse than simply having modulo arithmetic. In particular, implementing trivial overflow checks is much easier for the average developer. And worst case scenario, you can still have library-defined checked integers.
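With defined wrap-around semantics, the "trivial overflow check" really is trivial; a sketch:

```d
bool addWouldOverflow(uint x, uint y) {
    // Valid because uint overflow is defined modulo 2^32 in D.
    // The equivalent test on signed ints in C is UB, and an optimizer
    // is allowed to delete it entirely.
    return x + y < x;
}

void main() {
    assert(!addWouldOverflow(1, 2));
    assert(addWouldOverflow(uint.max, 1));
}
```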
Re: C++ Ranges proposal for the Standard Library
On Saturday, 18 October 2014 at 17:31:18 UTC, Walter Bright wrote: I agree. It's like "foreach" in D. It's less powerful and foundational than a "for" loop (in fact, the compiler internally rewrites foreach into for), but that takes nothing away from how darned useful (and far less bug prone) foreach is. Actually, there are quite a few bugs related to modifying values and/or indexes in a foreach loop. In particular, foreach allows "ref" iteration over ranges that don't give ref access...
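The foreach-to-for rewrite mentioned above can be shown directly (the UpTo range here is a hypothetical stand-in):

```d
struct UpTo {
    int n, i;
    @property bool empty() { return i >= n; }
    @property int front() { return i; }
    void popFront() { ++i; }
}

void main() {
    int sum1, sum2;
    foreach (x; UpTo(3))                            // what you write
        sum1 += x;
    for (auto r = UpTo(3); !r.empty; r.popFront())  // roughly what it lowers to
        sum2 += r.front;
    assert(sum1 == sum2);
    assert(sum1 == 3); // 0 + 1 + 2
}
```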
Re: C++ Ranges proposal for the Standard Library
On Saturday, 18 October 2014 at 15:39:36 UTC, Ola Fosheim Grøstad wrote: On Saturday, 18 October 2014 at 15:17:09 UTC, Andrei Alexandrescu wrote: No need to implement it. http://dlang.org/phobos/std_algorithm.html#.sum It isn't accurate. Did you look at the doc? It's specifically designed to be accurate...
Re: Postblit bug
On Saturday, 18 October 2014 at 06:43:28 UTC, Marco Leise wrote: Am Fri, 17 Oct 2014 17:25:46 + schrieb "monarch_dodra" : But maybe this answers your question? import std.stdio; struct S { int* p; this(this) { ++*p; } } void main() { immutable i = 0; auto s1 = immutable(S)(&i); auto s2 = s1; assert(*&i == 0); } Consider that when passing a variable you can always remove top-level const-ness because a copy is made. This holds for returns, parameters, assignments, ... Post-blit is no different. The issue, as I see it, is that D doesn't have strong support for this notion of head-mutable. D only has it for primitives, such as pointers, slices...; otherwise it would work with this type during post-blit: struct S { immutable(int)* p; this(this) { ++*p; } } Unsure how that's relevant? This code looks wrong to me no matter how you look at it?
Re: GCC Undefined Behavior Sanitizer
On Friday, 17 October 2014 at 13:44:24 UTC, ketmar via Digitalmars-d wrote: On Fri, 17 Oct 2014 09:46:48 + via Digitalmars-d wrote: In D (and C++) you would get: if (x < ((x+1)&0x)){…} perfect. nice and straightforward way to do overflow checks. Besides, the code uses x + 1, so the code is already in an undefined state. It's just as wrong as the "horrible code with UB" we were trying to avoid in the first place. So much for convincing me that it's a good idea...
Re: C++ Ranges proposal for the Standard Library
On Friday, 17 October 2014 at 19:11:48 UTC, eles wrote: On Friday, 17 October 2014 at 17:14:24 UTC, Paulo Pinto wrote: Am 17.10.2014 um 17:14 schrieb "Ola Fosheim Grøstad": On Friday, 17 October 2014 at 09:52:26 UTC, Marco Leise wrote: And slrrr And it has tabs! :(( Tabs, in and of themselves, aren't bad. Arguably, they are better than spaces. In a perfect world, everyone would use tabs everywhere, and each individual would set their editor to their preferred indent size (personally, I like 2. It's concise). However, once you start working with people who can't be arsed to keep a consistent style, then you have to "lower your standard to the lowest common denominator". That's mostly the reason people tend to opt for spaces everywhere. I just read this though: "Python 3 disallows mixing the use of tabs and spaces for indentation." Fucking WIN. A compiler that will *refuse* to compile your code because it is too ugly? Mind BLOWN.
Re: Postblit bug
On Friday, 17 October 2014 at 16:19:47 UTC, IgorStepanov wrote: It's just common words =) I meant that the postblit is called when a new object is being created and doesn't yet exist for user code. E.g. const S v1 = v2; Ok, v1 _will_ be const when it is _created_. However the postblit can think that the object is mutable, because it is called before the first access to the object from user code. Thus I ask about the case when a postblit may mutate a const object, which was created before the postblitted object and may have been accessed from user code before this postblitting. That's way too many words for a single sentence for me to understand ;) But maybe this answers your question? import std.stdio; struct S { int* p; this(this) { ++*p; } } void main() { immutable i = 0; auto s1 = immutable(S)(&i); auto s2 = s1; assert(*&i == 0); }
Re: How to iterate over const(RedBlackTree)?
On Friday, 17 October 2014 at 01:09:00 UTC, John McFarlane wrote: On Friday, 3 January 2014 at 07:22:32 UTC, monarch_dodra wrote: On Thursday, 2 January 2014 at 14:59:55 UTC, Adam D. Ruppe wrote: On Thursday, 2 January 2014 at 13:30:06 UTC, monarch_dodra wrote: Currently, this is not possible. Or if it was, it would have a *very* high code cost inside RBT. It is possible in theory; RBT returns a separate Range type with opSlice that is implemented in terms of pointers to node. (This is hidden by an alias RBNode!Elem* Node, which gets in the way, but if those were inout(RBNode)* or const()* it'd work). We can have mutable pointers to const data, which would work with the range. So opSlice returns a mutable range that points back to const data. But in this case, none of the functions in rbtree use const nor inout, and there's some caching (e.g. _left and _right) that I'm not sure can work with it at all anyway. In a const node, the left and right properties won't work. Right. Doable, but not trivially so :/ Array might be able to pull it off more easily. That said, it would only solve the "const container => Range" issue, but the "const range" problem itself would remain :( I'm trying to get to grips with D, coming from a C++ background, and const correctness is a tough thing to get comfortable with. I've got a member variable that's RedBlackTree and I'd like to examine its contents inside an invariants() block. Is this possible at all right now? Obviously, this would be trivial to achieve using std::set::const_iterator, so is it the language or the library that poses the difficulty for me? Thanks. Currently, D containers don't offer "ConstRange opSlice() const" (which would be the equivalent of const_iterator). This could be a good solution to the issue. But to answer your question: Both the language and the library are making your life difficult.
For starters, the D language does not use const the way C++ does, so this usually confuses zealous newcomers who want to be "const correct". The library is also getting in your way, in that it does not provide support for "const(container/range)" nor "container of const".
Re: Postblit bug
On Friday, 17 October 2014 at 00:55:25 UTC, ketmar via Digitalmars-d wrote: On Fri, 17 Oct 2014 00:42:24 + IgorStepanov via Digitalmars-d wrote: Can someone comment on this code? Should I think that it's a bug? it's just an anomaly. const postblit can do a lot of things besides adjusting struct fields, and it's logical that the compiler cannot call non-const methods for const objects. yet it's still one of those "unforeseen consequences" that arises from the conjunction of different features. i don't think that it's a bug, but i think that this must be discussed anyway, and then documented. AFAIK, Kenji has submitted a DIP, and has begun working on "fixing" the const/immutable/inout postblit issue. However, there are some very subtle corner cases, so (afaik) work is slow. To be honest, I think people use "const" way too much in D. It's *not* the C++ head const you can use anywhere. It's really just the "base" attribute between mutable and immutable data. In particular, due to the transitive nature of const, any time you use const it means "you can't modify this, or anything produced or acquired from this, ever". It's usually not what people think they are signing up for... When it makes little sense to have your type as immutable, then I don't think you should bother much with const.
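A small example of how far transitive const reaches (hypothetical Node type):

```d
struct Node {
    int value;
    Node* next;
}

void main() {
    const Node* head = new Node(1, new Node(2, null));
    // const is transitive: everything reachable through `head` is
    // const as well, so neither line below would compile:
    // head.value = 3;      // error
    // head.next.value = 3; // error, even two indirections away
    assert(head.value == 1);
    assert(head.next.value == 2);
}
```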
Re: C++ Ranges proposal for the Standard Library
On Friday, 17 October 2014 at 09:17:52 UTC, ZombineDev wrote: I saw [this][0] proposal for adding ranges to C++'s standard library. The [paper][1] looks at D style ranges, but concludes: Since iterators can implement D ranges, but D ranges cannot be used to implement iterators, we conclude that iterators form a more powerful and foundational basis. What do you guys think? [0]: https://isocpp.org/blog/2014/10/ranges [1]: https://ericniebler.github.io/std/wg21/D4128.html One problem with C++ style iterators is composition, and their exponential growth, due to their "pair" approach. Indeed, more often than not, proper iteration *requires* that the adapter *already* know where the underlying iterator ends. For example, a "stride" adapter iterator would look like this: template <typename It> struct StrideIt { It current; It end; void operator++() { ++current; if (current != end) ++current; } bool operator!=(const StrideIt& other) const { return current != other.current; } }; Then you combine it with: StrideIt<It> sit(it, itend); StrideIt<It> sitend(itend, itend); for (; sit != sitend; ++sit) ... As you can see, it quickly becomes bloated and cumbersome. In particular, it takes *tons* of lines of code, traits and whatnot to compose. C++11's "auto" makes things somewhat simpler, but it is still bloated. Another issue is that iterators model a *pointer* abstraction: iterators *must* have a "reference" type. Ranges are more generic in the sense that they simply model iteration. *THAT SAID*, as convenient as ranges are, they do suffer from the "shrink but can't grow" issue. In particular, you can't really "cut" a range the way you can with iterators: "first, middle, last". If you are using RA ranges with slicing, it doesn't show too much. However, if you are using generic bidir ranges, on containers such as "DList", you really start to feel the pain. My personal feeling (IMO): - Consuming, adapting, producing data: Ranges win hands down. - Managing, shuffling or inserting elements in a container: To be honest, I prefer iterators.
Given how C++'s STL is container-centric, whereas D's phobos is range centric, I can totally understand both sides' position.
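For contrast, the same stride adapter as a D range is one self-contained type, with no separate begin/end pair to keep in sync (a rough sketch, not std.range.stride; UpTo is a hypothetical source range):

```d
struct UpTo {
    int n, i;
    @property bool empty() { return i >= n; }
    @property int front() { return i; }
    void popFront() { ++i; }
}

// One self-contained adapter; the range carries its own end condition.
struct Stride(R) {
    R r;
    size_t step;
    @property bool empty() { return r.empty; }
    @property auto front() { return r.front; }
    void popFront() {
        foreach (_; 0 .. step) {
            if (r.empty) break;
            r.popFront();
        }
    }
}

void main() {
    int[] got;
    for (auto s = Stride!UpTo(UpTo(6), 2); !s.empty; s.popFront())
        got ~= s.front;
    assert(got == [0, 2, 4]);
}
```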
Re: @safety of Array
On Tuesday, 14 October 2014 at 17:59:43 UTC, Brad Roberts via Digitalmars-d wrote: On 10/14/2014 3:49 AM, monarch_dodra via Digitalmars-d wrote: You say I'm focused on impl, but @safe *is* an implementation certification. I'm not derailing the thread or talking about process. If Array can't be certified memory safe, then it can't be marked as @safe. That's really all there is to it. Sorry, the request to not derail was for future posts to the thread, not a reaction to your comment. I was/am worried that the "more and more code being created" comment would spiral the discussion sideways. Your response wasn't making any statements of what should happen but rather why it can't based on the current state. Useful, but still irrelevant to the Should question. Unless you were saying that it shouldn't become usable due to that part of the api. To that I'd respond that your thinking is too narrow in scope or too black and white. As to the rest, once we decide if Array should be usable in the @safe subset of the language, then we can start to make choices about how to achieve that. Some obvious choices: 1) remove the parts that aren't (unlikely to be a good choice) 2) partition the api into parts that are and parts that aren't (only some parts get the @safe attribute, maybe some gets @trusted) 3) improve the implementation of @safe to cover all the parts that can't right now (likely to result in significant delay before any useful progress is made) 4) force the parts that aren't anyway (probably violates the basic precepts of @safety but including for the sake of completeness) 5) ? 6) some combination of the above My personal thinking is that #2 is the way to go in the short term as long as a reasonably large subset of the functionality can be made usable (right now we can't even construct one). With a likely very strategic sprinkling of @trusted where absolutely necessary. As #3 progresses on its own merits, the set that falls into #2 might expand. 
I'm on vacation on a phone, so I'll be brief for now. I replied what I said because I felt some arguments were leading to: let's mark it trusted and worry about the implementation later. I'll ignore the comment that my vision is narrow and discuss improvement possibilities. The issue with the safe/unsafe split is that the functions themselves aren't actually unsafe, but rather their combination: deletion is only unsafe *if* an escape has occurred. The funny thing is that Array used to be sealed (specifically to avoid escapes) and could have been safe. I unsealed it with Andrei because sealed containers come with their own problems. To the topic at hand though, I don't think safety should dictate our implementations. In particular, dmd improvements can and will mean that something unsafe can become safe later. The real question here is: when will we implement the long-promised scope?
Re: @safety of Array
On Tuesday, 14 October 2014 at 01:47:10 UTC, Brad Roberts via Digitalmars-d wrote: On 10/13/2014 1:28 PM, monarch_dodra via Digitalmars-d wrote: On Monday, 13 October 2014 at 17:16:40 UTC, Brad Roberts via Digitalmars-d wrote: On 10/13/2014 7:47 AM, Andrei Alexandrescu via Digitalmars-d wrote: On 10/12/14, 5:41 PM, Brad Roberts via Digitalmars-d wrote: I know it's a tricky implementation, but let's focus on the goal.. should Array be usable in @safe code? Yes. In order for that to be 100% automatically checkable, we need the rules restricting escape of addresses of returns by reference. -- Andrei 100% checkable isn't required right now. For it to be used in an @safe context all that's needed is liberal use of @trusted. That can be refined over time to a more checked version. We shouldn't wait for checkability. Will one of you experts in the impl of Array volunteer to make the appropriate changes? The issue is that it's *not* safe though. You can escape the reference, destroy it, and end up with a dangling pointer. Arbitrarily marking things as trusted seriously undermines what safe means. @trusted should be used with extreme caution. That's why I asked the question I did. The core question isn't about what the current implementation is or does but about where it should end up. Should Array be usable in @safe code. So far: Jakob: focused on impl Andrei: yes Monarch: focused on impl I totally agree that @trusted must be used with lots of caution. But my point in that post was that impl isn't the issue and requiring that everything be fixed and perfect also isn't the issue. If we don't know and understand where we want to be, the chances of accidentally landing there are rather low. More and more code is being created in Phobos all the time, and its use in @safe code is largely an afterthought. Please don't derail this thread and talk about process.. keep this thread focused on Array. 
Thanks, Brad You say I'm focused on impl, but @safe *is* an implementation certification. I'm not derailing the thread or talking about process. If Array can't be certified memory safe, then it can't be marked as @safe. That's really all there is to it.
Re: @safety of Array
Example: ref int getAsRef(int a) @safe //Magic! { RefCounted!int rc = a; return rc.getPayload(); } I wouldn't want to be on the calling end of this "safe" function...
Re: @safety of Array
On Monday, 13 October 2014 at 17:16:40 UTC, Brad Roberts via Digitalmars-d wrote: On 10/13/2014 7:47 AM, Andrei Alexandrescu via Digitalmars-d wrote: On 10/12/14, 5:41 PM, Brad Roberts via Digitalmars-d wrote: I know it's a tricky implementation, but let's focus on the goal.. should Array be usable in @safe code? Yes. In order for that to be 100% automatically checkable, we need the rules restricting escape of addresses of returns by reference. -- Andrei 100% checkable isn't required right now. For it to be used in an @safe context all that's needed is liberal use of @trusted. That can be refined over time to a more checked version. We shouldn't wait for checkability. Will one of you experts in the impl of Array volunteer to make the appropriate changes? The issue is that it's *not* safe though. You can escape the reference, destroy it, and end up with a dangling pointer. Arbitrarily marking things as trusted seriously undermines what safe means. @trusted should be used with extreme caution.
Re: Make const, immutable, inout, and shared illegal as function attributes on the left-hand side of a function
On Saturday, 11 October 2014 at 12:45:40 UTC, Dicebot wrote: On Saturday, 11 October 2014 at 07:36:21 UTC, monarch_dodra wrote: Wait what? Are you saying there is a single case when this: const T var; is not identical to this: const(T) var; No, look at the pointer symbol. module test; const int** a; const(int**) b; The original code you quoted was "const(T)* v;", where the "*" was *outside* of the parens. **a = 42; gives "Error: cannot modify const expression **a", and both symbols come out mangled identically (apart from the name): B _D4test1axPPi B _D4test1bxPPi Can you give an example of the code that actually observes the semantical difference? I cannot. I was trying to prove that there isn't one, after you made the statement "Wait what? Are you saying there is a single case when this is not identical to this".
Re: Make const, immutable, inout, and shared illegal as function attributes on the left-hand side of a function
On Saturday, 11 October 2014 at 04:11:30 UTC, Dicebot wrote: On Friday, 10 October 2014 at 02:38:42 UTC, Walter Bright wrote: This has come up before, and has been debated at length. const is used both as a storage class and as a type constructor, and is distinguished by the grammar: const(T) v; // type constructor, it affects the type T const T v; // storage class, affects the symbol v and the type of v In particular, const T *v; does not mean: const(T)* v; Wait what? Are you saying there is a single case when this: const T var; is not identical to this: const(T) var; No, look at the pointer symbol. Reddit users are not the ones who invest in this language. If this attitude won't change, it is only a matter of time until you start losing existing corporate users deciding to go with another language or a fork instead (likely the latter). I am very serious. Being a D user pretty much by definition implies someone willing to risk and experiment with programming tools to get a business edge. If the costs of maintaining your own fork become lower than the regular losses from maintenance overhead caused by language quirks, it becomes a simple, pragmatic solution. There is nothing personal about it. Consistency and being robust in preventing programmer mistakes is the single most important feature in the long term. @nogc, C++ support, any declared feature - it all means nothing with a simple necessity to not waste money fighting the language. In that sense the proposed change is _very_ beneficial in ROI terms. It forces a trivial code base adjustment that results in preventing a very common mistake rarely obvious to newbies. This means very real money gains in terms of training and daily maintenance overhead. Something I don't care much about in personal projects, but will damn appreciate as one caring for the success of my employer. This endless search for the ideal syntax is consuming our time while we aren't working on issues that matter. (And this change will consume users' time, too, not just ours.) 
Hardly anything matters more than that. Issues like that consume our time continuously for years, accumulating in wasted days and weeks of worker time. Compared with the time needed to adjust even a several-MLOC project, the gain is clear. Agreed with the sentiment. #pleasebreakourcode Lol.
Re: struct and default constructor
On Sunday, 27 November 2011 at 19:50:24 UTC, deadalnix wrote: Hi, I wonder why structs can't have a default constructor. TDPL states that it is required to allow every type to have a constant .init. That is true, however not sufficient. A struct can have a void[constant] as a member and this doesn't have a .init. So this limitation does not ensure that the claimed benefit is guaranteed. Additionally, if it is the only benefit, this is pretty thin compared to the drawback of not having a default constructor. I think the argument is that declaring `T t;` must be a compile-time operation, which kind of implies a T.init state (which may have non-deterministic values in the presence of " = void"). This is mostly for declaring things static, and the whole "constructors are run after the .init blit". But even then: T t; //value is T.init, if not @disable this() T t = T(); //Potentially run-time I've started threads and tried to start discussions about this before, but to no avail. It's a relatively recurrent complaint, especially from "newer" C++ users. The older D users have either thrown in the towel, or implemented "workarounds".
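The "workarounds" mentioned typically combine @disable this() with a named factory function; a sketch (the Handle type and its initialization are hypothetical):

```d
struct Handle {
    int fd;

    @disable this(); // forbid the default-initialized state entirely

    this(int fd) { this.fd = fd; }

    // Common workaround: a named factory standing in for a default
    // constructor that actually runs code.
    static Handle create() { return Handle(42); }
}

void main() {
    // Handle h; // error: default construction is disabled
    auto h = Handle.create();
    assert(h.fd == 42);
}
```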
Re: Make const, immutable, inout, and shared illegal as function attributes on the left-hand side of a function
On Thursday, 9 October 2014 at 08:50:52 UTC, Martin Nowak wrote: Would this affect your code? I've written code before in the style: @property pure nothrow const //<- HERE int foo(); So anybody else using this style might be affected. But even then, I agree. D has always been about "if it's ambiguous, dangerous, and can be avoided, don't make it fucking legal". Do you think it makes your code better or worse? Is this just a pointless style change? Anything else? I'm not really sure about the: "Then at some future point we could apply the left hand side qualifiers to the return type, e.g. `const int foo();` == `const(int) foo();`" I don't think it buys us anything, except maybe silently changing semantics of code that hibernated through the deprecation process. I mean, sure, it might be a little surprising, but it's not the end of the world.
Re: The pull request of the day: 3998
On Tuesday, 7 October 2014 at 17:58:41 UTC, Dicebot wrote: On Tuesday, 7 October 2014 at 17:36:24 UTC, Jonathan wrote: What are some "common uses" for multiple aliasing? I understand the feature, but curious where it would be commonly employed. To me, this allows structs to have something like inheritance. You add a property for another struct that acts like an interface and alias that struct to the current one. Thoughts? Multiple inheritance of implementation for structs + implicit casting in one basket I wish that's what we used it for. More often than not, it's used to simulate implicit casting, sometimes with catastrophic results in generic code...
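The multiple-alias-this feature being discussed builds on single alias this, which already gives structs a form of "inheritance" plus implicit conversion:

```d
struct Inner {
    int x;
}

struct Outer {
    Inner inner;
    alias inner this; // Outer forwards to (and converts to) Inner
}

void takesInner(Inner i) {
    assert(i.x == 5);
}

void main() {
    Outer o;
    o.x = 5;       // forwarded to o.inner.x
    takesInner(o); // implicit "casting", no cast written anywhere
    assert(o.inner.x == 5);
}
```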
Re: scope() statements and return
On Monday, 6 October 2014 at 16:59:35 UTC, Andrei Alexandrescu wrote: Whenever an exception is converted to a string, the chained exceptions should be part of it too. On Monday, 6 October 2014 at 17:12:00 UTC, Jakob Ovrum wrote: However, the whole point is implicit chaining, which is where the language and runtime kicks in. Look through your own scope(exit|failure) blocks and struct destructors - are they all nothrow? If not, you are using exception chaining. Hum... But arguably, that's just exception chaining "happening". Do you have any examples of someone actually "dealing" with all the exceptions in a chain in a catch, or actually using the information in a manner that is more than just printing?
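For what it's worth, the implicit chaining can be observed directly: an exception thrown while another is in flight is appended to the first one's chain rather than replacing it (a sketch):

```d
void main() {
    Throwable caught = null;
    try {
        try {
            throw new Exception("first");
        } finally {
            // Thrown while "first" is unwinding: the runtime appends
            // this to the chain instead of replacing the original.
            throw new Exception("second");
        }
    } catch (Exception e) {
        caught = e;
    }
    assert(caught.msg == "first");
    assert(caught.next !is null);
    assert(caught.next.msg == "second");
}
```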
Re: scope() statements and return
On Monday, 6 October 2014 at 14:54:21 UTC, Andrei Alexandrescu wrote: On 10/6/14, 7:24 AM, monarch_dodra wrote: If your "catch" throws an exception, then the new exception simply "squashes" (replaces) the old one: // catch (Exception e) { thisMightThrow(); //Lose "e" here throw e; } // That's code under library/user control, I'm talking about the responsibility of the language. -- Andrei Right. But if correct usage is so cumbersome no one does it right, then it is 's/the responsibility/a flaw/' of the language. Are we advocating then that code under user control should systematically look like this? catch (Exception e) { try { thisMightThrow(); //Lose "e" here } catch(Exception ee) { findLast(e).next = ee; //because e.next might already be set } throw e; } Honestly, a good middle ground is to ditch chaining, but allow multiple exceptions thrown at once. Subsequent Exceptions/Errors will just be lost (sorry), with the exception that an Error will override an Exception. For the sake of argument, do you have any examples where a program has used chaining?
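For context, a small runnable demo of my own (not from the thread) of the *implicit* chaining the runtime does provide: an exception thrown during unwinding is appended to the in-flight exception's chain via `Throwable.next`, rather than replacing it.

```d
// Demo: the runtime chains a collateral exception onto the one in flight.
void risky()
{
    scope(exit) throw new Exception("cleanup failed"); // thrown while unwinding
    throw new Exception("original failure");
}

void main()
{
    try
        risky();
    catch (Exception e)
    {
        assert(e.msg == "original failure");    // the first exception survives
        assert(e.next !is null);
        assert(e.next.msg == "cleanup failed"); // the collateral one is chained
    }
}
```

The complaint above is that this only happens for *collateral* exceptions; once an exception is caught, nothing chains for you, hence the cumbersome manual pattern.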
Re: scope() statements and return
On Monday, 6 October 2014 at 13:48:07 UTC, Andrei Alexandrescu wrote: On 10/6/14, 12:27 AM, Walter Bright wrote: On 10/5/2014 10:09 AM, Dicebot wrote: On Sunday, 5 October 2014 at 17:03:07 UTC, Andrei Alexandrescu wrote: On 10/5/14, 9:42 AM, Dicebot wrote: On Sunday, 5 October 2014 at 16:30:47 UTC, Ola Fosheim Grøstad wrote: Does D have exception chaining? Yes. http://dlang.org/phobos/object.html#.Throwable.next Though it seems to do more harm than good so far. What harm does it do? -- Andrei Good chunk of issues with pre-allocated exceptions (and possible cycles in reference counted ones) comes from the chaining possibility. At the same time I have yet to see it actively used as a feature. Doesn't mean it is a bad thing, just not used widely enough to compensate for the trouble right now. FWIW, I'm skeptical as well of the value of chaining relative to its cost in complexity. It's one of those designs in which there's little room to turn. We wanted to (a) allow destructors to throw, (b) conserve information. Offering access to all exceptions caught was a natural consequence. -- Andrei Well, then again, even that promise isn't really held. If your "catch" throws an exception, then the new exception simply "squashes" (replaces) the old one: // catch (Exception e) { thisMightThrow(); //Lose "e" here throw e; } // I've seen literally no-one ever chain exceptions once one has been caught. Not even druntime does it. For example, if an exception occurs during static array construction, this triggers the destruction of already constructed items. If one of *those* fails, then the cleanup loop terminates (without cleaning the rest of the items, mind you). And the user is greeted with an "Object destruction failed", even though the array was never constructed to begin with! There's merit in the goal, but I don't think the current design has achieved it.
Re: scope() statements and return
On Monday, 6 October 2014 at 02:35:52 UTC, Shammah Chancellor wrote: It doesn't "catch" the error. Propagation should continue as normal. Right, it only "intercepts, cleans up, and rethrows". But the argument is that even that shouldn't happen, as you aren't sure the cleanup code is still relevant. At least, I don't think it should be doing cleanup unless you go out of your way to say: "do this, *even* in case of errors". I mean, your RAII has already failed anyways. Your memory has leaked, your ref counts are off, your transactions are open, your file handles are open... The only thing you should be doing is trying to die gracefully, and maybe salvage user data. Your program is in *ERROR*. Cleanup really shouldn't be the priority, especially if it can potentially add more corruption to your state.
Re: scope() statements and return
On Sunday, 5 October 2014 at 15:03:08 UTC, ketmar via Digitalmars-d wrote: On Sun, 05 Oct 2014 14:53:37 + monarch_dodra via Digitalmars-d wrote: Promises hold provided the precondition that your program is in a valid state. Having an Error invalidates that precondition, hence voids that promise. so Error should not be catchable and should crash immediately, without any unwinding. Don't put words in my mouth. Also, Errors do only partial stack unwinding, so yes, once an Error has been thrown, your program should terminate. as long as Errors are just another kind of exception, the promise must be kept. Errors aren't Exceptions. They make no promises.
Re: scope() statements and return
On Sunday, 5 October 2014 at 12:36:30 UTC, ketmar via Digitalmars-d wrote: On Sun, 05 Oct 2014 11:28:59 + monarch_dodra via Digitalmars-d wrote: In theory, you should seldom ever catch Errors. I don't understand why "scope(exit)" blocks are catching them. 'cause scope(exit) keeps the promise to execute cleanup code before exiting the code block? Promises hold provided the precondition that your program is in a valid state. Having an Error invalidates that precondition, hence voids that promise. RAII also makes the promise, but you don't see Errors giving much of a fuck about that.
Re: scope() statements and return
On Saturday, 4 October 2014 at 18:42:05 UTC, Shammah Chancellor wrote: Didn't miss anything. I was responding to Andrei such that he might think it's not so straightforward to evaluate that code. I am with you on this. It was my original complaint months ago that resulted in this being disallowed behavior. Specifically because you could stop error propagation by accident even though you did not intend to prevent their propagation. e.g: int main() { scope(exit) return 0; assert(false, "whoops!"); } -S Isn't this the "should scope(exit/failure) catch Error" issue though? In theory, you should seldom ever catch Errors. I don't understand why "scope(exit)" blocks are catching them.
Re: No TypeTuple expansion for assert?
On Friday, 3 October 2014 at 20:28:21 UTC, H. S. Teoh via Digitalmars-d wrote: On Fri, Oct 03, 2014 at 08:02:14PM +, monarch_dodra via Digitalmars-d wrote: On Friday, 3 October 2014 at 19:21:38 UTC, Dmitry Olshansky wrote: 03-Oct-2014 23:08, Ali Çehreli wrote: I know that assert is not a function but it would be nice to have. Indeed. If we make it a function and put it in object.d, would anyone notice the change? I think there are semantics that prevent that. Such as "assert(0)", or removing the evaluation of arg in "assert(arg())" altogether in release. void assert(lazy bool exp) { version(assert) // (!) if (!exp) __fatal_runtime_error(); } Doesn't work for assert(0); but does work for not evaluating the argument in a release build. (Yeah I know, the implementation of 'lazy' leaves a lot to be desired, but hey, that can be argued to be a QOI issue.) T There might still be an issue with regard to linking though: I have some code where assert-only functions are only defined for non-release builds.
Re: No TypeTuple expansion for assert?
On Friday, 3 October 2014 at 19:21:38 UTC, Dmitry Olshansky wrote: 03-Oct-2014 23:08, Ali Çehreli wrote: I know that assert is not a function but it would be nice to have. Indeed. If we make it a function and put it in object.d, would anyone notice the change? I think there are semantics that prevent that. Such as "assert(0)", or removing the evaluation of arg in "assert(arg())" altogether in release.
Re: On exceptions, errors, and contract violations
On Friday, 3 October 2014 at 17:40:43 UTC, Sean Kelly wrote: A contract has preconditions and postconditions to validate different types of errors. Preconditions validate user input (caller error), and postconditions validate resulting state (callee error). Technically, a precondition validates correct argument passing from the "caller", which is not quite the same as "user input". "User" in this context is really the "end user", and is *not* what contracts are made for. Also, I don't think "postconditions" are meant to check "callee" errors. That's what asserts do. Rather, postconditions are verifications that can only occur *after* the call. For example, a function that takes an input range (no length), but says "input range shall have exactly this amount of items..." or "input shall be no bigger than some unknown value, which will cause a result overflow".
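In D terms, the caller/callee split described above looks like this (my own hedged example, not from the thread): the `in` contract checks what the caller passed, and the `out` contract checks the result, which by definition can only be verified *after* the call.

```d
// `in` validates the caller's arguments; `out` validates the produced result.
int mySqrtFloor(int x)
in
{
    assert(x >= 0, "caller error: negative input");
}
out (result)
{
    assert(result * result <= x, "callee error: result too large");
}
do
{
    import std.math : sqrt;
    return cast(int) sqrt(cast(double) x);
}

void main()
{
    assert(mySqrtFloor(16) == 4);
    assert(mySqrtFloor(17) == 4);
}
```

Note that contracts are compiled out with `-release`, consistent with the point that they check program correctness, not end-user input.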
Re: Feedback Wanted on Homegrown @nogc WriteLn Alternative
On Friday, 3 October 2014 at 17:01:46 UTC, H. S. Teoh via Digitalmars-d wrote: For a compile-string that's statically fixed (i.e., writefln!"..."(...)), we can do a lot more than what ctFmt does. For example, we can parse the format at compile-time to extract individual formatting specifiers and intervening string fragments, and thereby transform the entire writefln call into a series of puts() and formattedWrite() calls. The take home point is that ctFmt would *also* parse the string at compile time. The difference is that it creates a run-time object, which still contains enough static information for a powerful write, yet still some run-time info to be able to swap them at runtime.
Re: Feedback Wanted on Homegrown @nogc WriteLn Alternative
On Friday, 3 October 2014 at 17:01:46 UTC, H. S. Teoh via Digitalmars-d wrote: On Fri, Oct 03, 2014 at 11:15:28AM +, monarch_dodra via Digitalmars-d wrote: On Thursday, 2 October 2014 at 23:32:32 UTC, H. S. Teoh via Digitalmars-d wrote: Alright, today I drafted up the following proof of concept: [...] writefln!"Number: %d Tag: %s"(123, "mytag"); I had (amongst others) thought about the possibility of "ct-write". I think an even more powerful concept would rather be having a "ct-fmt" object directly. Indeed writefln!"string" requires the actual format at compile time, and for the write to be done. It can't just validate that *any* arbitrary (but pre-defined) string can be used with a certain set of write arguments. I'm thinking: // //Define several format strings. auto english = ctFmt!"Today is %1$s %2$s"; auto french = ctFmt!"Nous sommes le %2$s %1$s"; //Verify homogeneity. static assert(is(typeof(english) == typeof(french))); //Choose your format. auto myFmt = doEnglish ? english : french; //Benefit. writefln(myFmt, Month.oct, 3); // I think this is particularly relevant in that it is these kinds of cases that are particularly tricky and easy to get wrong. So ctFmt would have to be a static type that contains static information about the number and types of formatting items it expects? Because otherwise, we won't be able to do checks like verifying at compile-time that the passed arguments match the given format. But if we're going to go in this direction, I'd also introduce named parameters instead of positional parameters, which would make translators' jobs easier. For example: ctFmt!"Today is %`day`s %`month`s" is far easier to translate correctly than: ctFmt!"Today is %1$s %2$s" where the translator may have no idea what %1$s and %2$s are supposed to refer to. For all they know, %1$s could be "our" and %2$s could be "anniversary". Right, but that would also require named parameter passing, which we don't really have.
For "basic" usage, you'd just use: writefln(ctFmt!"Number: %d Tag: %s", 123, "mytag"); The hard part is finding the sweet spot in runtime/compile time data, to make those format strings runtime-type compatible. But it should be fairly doable. Personally, I prefer the shorter syntax for the most usual cases where the format string doesn't change: writefln!"Number: %d Tag: %s"(123, "mytag"); But ctFmt could also fit under this scheme when more flexibility is desired: we could pass it as a first parameter and leave the default CT parameter as "" (meaning, read args[0] for format string). So if args[0] is an instance of ctFmt, then we can do (more limited) compile-time checking, and if it's a runtime string, then fallback to the current behaviour. Well, we could also simply have writeln!str(args) => writefln(ctFmt!str, args) For a compile-string that's statically fixed (i.e., writefln!"..."(...)), we can do a lot more than what ctFmt does. For example, we can parse the format at compile-time to extract individual formatting specifiers and intervening string fragments, and thereby transform the entire writefln call into a series of puts() and formattedWrite() calls. With ctFmt, you can't extract the intervening string fragments beforehand, and you'll need runtime binding of formatting specifiers to arguments, because the exact format string chosen may vary at runtime, though they *can* be statically checked to be compatible at compile-time (so "X %1$s Y %2$s Z" is compatible with "P %2$s Q %1$s R", but "%d %d %d" is not compatible with "%f %(%s%)" because they expect a different number of arguments and argument types). So I see ctFmt as an object that encapsulates the expected argument types, but leaves the actual format string details to runtime, whereas passing in a string in the CT argument of writefln will figure out the format string details at compile-time, leaving only the actual formatting to be done at runtime. 
T Well, technically, `ctFmt` could still do some formatting. It can still cut up the format into an alternative series of strings and "to format objects". ctFmt would still know how many string fragments there are, and so would writeln. Writeln would still be able to generate nothing more than "puts"; the only difference is that the actual string token is runtime defined, but I don't think that makes any difference. Eg: ct!"hello %s World" becomes the type: struct { //Actual contents run-time defined, //but possibly pre-calculated during ctfe. string[2] fixedStrings; //Completely statically known. enum stri
Re: Division by zero
On Friday, 3 October 2014 at 13:29:18 UTC, Walter Bright wrote: In any case, a bugzilla issue should be filed for this. https://issues.dlang.org/show_bug.cgi?id=13569
Re: Division by zero
On Friday, 3 October 2014 at 12:55:35 UTC, Adam D. Ruppe wrote: On Friday, 3 October 2014 at 12:31:54 UTC, Marc Schütz wrote: "For integral operands of the / and % operators, [...]. If the divisor is zero, an Exception is thrown." It should probably just say that it is implementation defined. I'm pretty sure it does throw an exception on Windows (at least 32 bit)... Technically, the doc is also wrong for Windows, since it's an *Error* that is thrown: object.Error@(0): Integer Division by 0
Re: Feedback Wanted on Homegrown @nogc WriteLn Alternative
On Thursday, 2 October 2014 at 23:32:32 UTC, H. S. Teoh via Digitalmars-d wrote: Alright, today I drafted up the following proof of concept: [...] writefln!"Number: %d Tag: %s"(123, "mytag"); I had (amongst others) thought about the possibility of "ct-write". I think an even more powerful concept would rather be having a "ct-fmt" object directly. Indeed writefln!"string" requires the actual format at compile time, and for the write to be done. It can't just validate that *any* arbitrary (but pre-defined) string can be used with a certain set of write arguments. I'm thinking: // //Define several format strings. auto english = ctFmt!"Today is %1$s %2$s"; auto french = ctFmt!"Nous sommes le %2$s %1$s"; //Verify homogeneity. static assert(is(typeof(english) == typeof(french))); //Choose your format. auto myFmt = doEnglish ? english : french; //Benefit. writefln(myFmt, Month.oct, 3); // I think this is particularly relevant in that it is these kinds of cases that are particularly tricky and easy to get wrong. For "basic" usage, you'd just use: writefln(ctFmt!"Number: %d Tag: %s", 123, "mytag"); The hard part is finding the sweet spot in runtime/compile time data, to make those format strings runtime-type compatible. But it should be fairly doable.
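A minimal, hypothetical sketch of the ctFmt idea (the names `CtFmt`/`ctFmt` are mine): the expected argument types become part of the static type, while the format string itself stays a runtime value, so two translations share a type and can be swapped at runtime. A real implementation would also validate the string against the argument types at compile time; that check is omitted here.

```d
import std.format : format;

// Hypothetical ctFmt sketch: the type carries the argument signature,
// the value carries the (runtime-swappable) format string.
struct CtFmt(Args...)
{
    string fmt;
}

auto ctFmt(string s, Args...)()
{
    // A real version would statically verify `s` against Args here.
    return CtFmt!Args(s);
}

void main()
{
    auto english = ctFmt!("Today is %1$s %2$s", string, int);
    auto french  = ctFmt!("Nous sommes le %2$s %1$s", string, int);

    // Verify homogeneity: both translations have the same static type.
    static assert(is(typeof(english) == typeof(french)));

    bool doEnglish = true;
    auto myFmt = doEnglish ? english : french;
    assert(format(myFmt.fmt, "October", 3) == "Today is October 3");
}
```

This captures the "runtime-type compatible format strings" goal; the compile-time validation and the fragment pre-splitting discussed below are the hard parts left out.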
Re: @safe pure nothrow compiler inference
On Wednesday, 1 October 2014 at 15:12:41 UTC, Kagamin wrote: On Monday, 29 September 2014 at 14:40:34 UTC, Daniel N wrote: It can be done, Walter wanted to do it, but there was large resistance, mainly because library APIs would become unstable, possibly changing between every release. Huh? Templates are part of library API too, see std.algorithm. So what's the difference if the API consists of templated or non-templated functions? Why for one part of API it's ok to change with every release and for the other not ok? It's not that "it's OK for templates", it's that you *must* have inference. This was not an argument against having inference for normal functions.
Re: std.utf.decode @nogc please
On Wednesday, 1 October 2014 at 10:10:51 UTC, Robert burner Schadek wrote: lately when working on std.string I ran into problems making stuff nogc as std.utf.decode is not nogc. https://issues.dlang.org/show_bug.cgi?id=13458 Also I would like a version of decode that takes the string not as ref. Something like: bool decode2(S,C)(S str, out C ret, out size_t strSliceIdx) if(isSomeString!S && isSomeChar!C) {} where true is returned if the decode worked and false otherwise. Ideas, Suggestions ... ? any takers? Kind of like the "non-throwing std.conv.to": I'm pretty sure that if you wrote your "tryDecode" function, then you could backwards implement the old decode in terms of the new "tryDecode": dchar decode(ref string str) { dchar ret; size_t idx; enforce(tryDecode(str, ret, idx)); str = str[idx .. $]; return ret; } The implementation of tryDecode would be pretty much the old one's implementation, with exceptions replaced in favor of returning false.
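A runnable sketch of that layering (my own; `tryDecode` here is hypothetical and implemented on top of std.utf purely for illustration, whereas the proposal would implement it from scratch without exceptions):

```d
// Hypothetical tryDecode: reports failure via return value instead of throwing.
bool tryDecode(string str, out dchar ret, out size_t idx)
{
    import std.utf : stdDecode = decode, UTFException;
    try
    {
        size_t i = 0;
        ret = stdDecode(str, i);
        idx = i;
        return true;
    }
    catch (UTFException)
        return false;
}

// The old throwing `decode` re-implemented in terms of tryDecode.
dchar decode(ref string str)
{
    import std.exception : enforce;
    dchar ret;
    size_t idx;
    enforce(tryDecode(str, ret, idx), "invalid UTF sequence");
    str = str[idx .. $];
    return ret;
}

void main()
{
    string s = "héllo";
    assert(decode(s) == 'h');
    assert(decode(s) == 'é'); // multi-byte code point, advances by 2
    assert(s == "llo");

    char[1] buf = [cast(char) 0xFF]; // invalid UTF-8
    dchar c; size_t i;
    assert(!tryDecode(cast(string) buf[], c, i));
}
```

The point of the layering is that only the thin wrapper pays the exception (and hence GC) cost; @nogc callers use `tryDecode` directly.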
Re: std.utf.decode @nogc please
On Wednesday, 1 October 2014 at 10:51:25 UTC, Walter Bright wrote: On 10/1/2014 3:10 AM, Robert burner Schadek wrote: Ideas, Suggestions ... ? any takers? You can use .byDchar instead, which is nothrow @nogc. Being forced out of using exceptions just to be able to have the magic "@nogc" tag is the real issue here... The original request was mostly for @nogc, not necessarily for nothrow.
Re: Creeping Bloat in Phobos
On Sunday, 28 September 2014 at 23:06:28 UTC, Walter Bright wrote: It's very hard to disable the autodecode when it is not needed, though the new .byCodeUnit has made that much easier. One issue with this though is that "byCodeUnit" is not actually an array. As such, by using "byCodeUnit", you have just as much chance of improving performance as of *hurting* performance for algorithms that are string-optimized. For example, which would be fastest: "hello world".find(' '); //(1) "hello world".byCodeUnit.find(' '); //(2) Currently, (1) is faster :/ This is a good argument though to instead use ubyte[] or std.encoding.AsciiString. What I think we (maybe) need though is std.encoding.UTF8Array, which explicitly means: This is a range containing UTF8 characters. I don't want decoding. It's an array you may memchr or slice operate on.
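To make the two code paths concrete, here is my own runnable version of that example: `find` on a raw string takes the string-specialized path, while `find` on `byCodeUnit` goes through the generic range machinery. Both produce the same answer; only the performance characteristics differ.

```d
import std.algorithm.comparison : equal;
import std.algorithm.searching : find;
import std.utf : byCodeUnit;

void main()
{
    auto viaString   = "hello world".find(' ');            // (1) string-specialized
    auto viaCodeUnit = "hello world".byCodeUnit.find(' '); // (2) generic range path

    assert(viaString == " world");
    assert(equal(viaCodeUnit, " world".byCodeUnit));
}
```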
Re: Creeping Bloat in Phobos
On Sunday, 28 September 2014 at 23:48:54 UTC, Walter Bright wrote: I think it was you that suggested that instead of throwing on invalid UTF, that the replacement character be used instead? Or maybe not, I'm not quite sure. Regardless, the replacement character method is widely used and accepted practice. There's no reason to throw. This I'm OK to stand behind as an acceptable change (should we decide to go with it). It will kill the "auto-decode throws and uses the GC" argument.
Re: Creeping Bloat in Phobos
On Sunday, 28 September 2014 at 23:21:15 UTC, Walter Bright wrote: It's very simple for an algorithm to decode if it needs to, it just adds in a .byDchar adapter to its input range. Done. No special casing needed. The lines of code written drop in half. And it works with both arrays of chars, arrays of dchars, and input ranges of either. This just misses the *entire* family of algorithms that operate on generic types, such as "map". EG: the totality of std.algorithm. Oops.
Re: Creeping Bloat in Phobos
On Sunday, 28 September 2014 at 23:06:28 UTC, Walter Bright wrote: Note that autodecode does not always happen - it doesn't happen for ranges of chars. It's very hard to look at a piece of code and tell if autodecode is going to happen or not. Arguably, this means we need to unify the behavior of strings, and "string-like" objects. Pointing to an inconsistency doesn't mean the design is flawed and void.
Re: Local imports hide local symbols
On Tuesday, 23 September 2014 at 20:10:35 UTC, H. S. Teoh via Digitalmars-d wrote: Sounds reasonable. How would that be implemented, though? Currently, in the compiler, lookup is implemented via a linked list of Scope objects that contain, among other things, a symbol table for the symbols declared in that scope. A local import achieves locality by adding symbols to the current (i.e., innermost) Scope, since doing otherwise would cause those symbols to "spill" into the outer scopes and they will persist past the lifetime of the current scope. Arguably, that's not my problem... OTOH, it's this importing into the innermost scope that causes this issue to begin with, since by definition, the innermost scope takes precedence over outer scopes, so the imported symbols would shadow symbols declared in outer scopes. I think that's the issue here. Are we actually importing "into" the innermost scope, while shadowing any previous imports? AFAIK, that's a behavior which is reserved for selective imports. As I said, local imports, IMO, should behave in all aspects as a global import. It simply only exists during its scope, but is not actually any more "internal" than the rest. If a local import creates a symbol ambiguity, then it's ambiguous, and compilation ceases. I think that's the behavior we should be going for. Implementing what you suggest would either involve treating imported symbols separately (by having multiple parents per scope, which quickly devolves into a mess, or otherwise having sibling pointers to imported scopes, which also greatly complicates lookup logic), or sticking symbols into outer scopes and keeping track of which symbols were imported where so that they can be removed after we leave the current scope -- which is fragile and would again add tons of complications to the compiler. T Unfortunately, I don't know how the compiler works.
Re: Local imports hide local symbols
On Tuesday, 23 September 2014 at 19:18:08 UTC, H. S. Teoh via Digitalmars-d wrote: But this would cause a compile error: mod.d module mod; string x, y; main.d void main() { int x, y, z; import mod; x++;// Error: ambiguous symbol 'x', could be local // variable 'x' or mod.x } T How do you disambiguate to say "the x I want is the local one" ? IMO, simply make it that local imports work like global ones, but scoped. Global imports don't have this issue, why should local imports have special rules?
Re: What are the worst parts of D?
On Saturday, 20 September 2014 at 16:54:08 UTC, Andrei Alexandrescu wrote: On 9/20/14, 7:42 AM, Tofu Ninja wrote: On Saturday, 20 September 2014 at 12:39:23 UTC, Tofu Ninja wrote: What do you think are the worst parts of D? Oh another bad part of D is the attribute names with some being positive(pure) and some being negative(@nogc) and some of them not having an @ on them. If that's among the worst, yay :o). My pet peeves about D gravitate around the lack of a clear approach to escape analysis and the sometimes confusing interaction of qualifiers with constructors. For escape analysis, I think the limited form present inside constructors (that enforces forwarded this() calls to execute exactly once) is plenty fine and should be applied in other places as well. I think correct escape analysis + @safe + scope == win. BTW, remember all those people that bitch about rvalue to "const ref". D could be a language that provides rvalue to scope ref. 100% safe and practical. How awesome would that be?
Re: Example of the perils of binding rvalues to const ref
On Thursday, 18 September 2014 at 00:53:40 UTC, Andrei Alexandrescu wrote: On 9/17/14, 12:28 PM, IgorStepanov wrote: I want to place Foo(1) to buckets[nn].key without postblit call. Compiler can't help me now, however, I think, It can do it without language change. File an enhancement request with explanations and sample code, the works. This will be good. Thanks! -- Andrei I think it's this one: https://issues.dlang.org/show_bug.cgi?id=12684 Kind of required when you requested emplace from rvalues: https://issues.dlang.org/show_bug.cgi?id=12628
Re: Now, a critic of Stroustrup's choices
On Wednesday, 17 September 2014 at 09:21:13 UTC, eles wrote: But, OTOH, maybe it is a confusion in my head that comes from the fact that "constructing" an object means both allocating and constructing, while "destructing" means both deallocating and destructing. It usually is. I'm not sure what you are talking about. Most containers in C++ (and D) first allocate with an allocator, and then placement construct. I sometimes just feel that construction/destruction shall be separated from allocation/deallocation. Again, it usually is. AFAIK, the only thing is "vanilla new", which conveniently does both for you in a single convenient call. If you want to do *anything* else, then you have to manage both individually.
Re: Library Typedefs are fundamentally broken
On Wednesday, 17 September 2014 at 07:21:13 UTC, Andrej Mitrovic via Digitalmars-d wrote: On 9/17/14, bearophile via Digitalmars-d wrote: Andrei Alexandrescu: Add a sequence number as a uint, defaulted to 0. -- Andrei See discussion: https://d.puremagic.com/issues/show_bug.cgi?id=12100 It's a good thing you found GCC and VC implement this. I think it's another sign that we could use this feature. Technically, they implement it via a macro, and the macro re-expands on every use. It's mostly useless outside of ".cpp" files: The identifiers are unstable across compilation units. And if it appears in a .h, it'll be re-expanded to a different value on every include. If it appears in a macro, it'll be expanded to something different on every macro use too.
Re: Which patches/mods exists for current versions of the DMD parser?
On Tuesday, 16 September 2014 at 17:16:28 UTC, Iain Buclaw via Digitalmars-d wrote: On 8 September 2014 10:37, Daniel Murphy via Digitalmars-d Attempting to fork D's syntax is harmful to D. Please stop. You can't stop people from exercising their Freedom #1 (modify) and #3 (redistribute modified copies) of software under a free license. Right, but I think that fits in nicely with the "You have the right to do it, but I can decide you are an asshole for it". For instance, I can't stop Ketmar from bitching about the "problems" with D, and how his solutions are our godsend, but I can decide that he is also an entitled prick who's not even worth taking the time to write off. Anyway, it's never been of harm to anyone. Take Amber for instance, which is a very obvious fork of syntax, right down to a "What we fixed about D" page. https://bitbucket.org/larsivi/amber/wiki/Diff_D1 Iain. I'd say it's really a matter of how and why you are doing it, and how you are presenting it. The way Ola presented his work looked more like experiment and proof of concept. It's constructive. The changes (mostly) adhered to D's current philosophy. I think he was just trying to find out who was doing the same, and I have no trouble with it. I can see Dicebot's point of view, but I think it totally blew out of proportion after the 1st post. However, gratuitous (and deliberate) forking of the language just to address your own petty design issues I have more problems with. Sure you can do it, but I think that if you do, you should GTFO.
Re: Example of the perils of binding rvalues to const ref
On Tuesday, 16 September 2014 at 15:30:49 UTC, Andrei Alexandrescu wrote: http://www.slideshare.net/yandex/rust-c C++ code: std::string get_url() { return "http://yandex.ru"; } string_view get_scheme_from_url(string_view url) { unsigned colon = url.find(':'); return url.substr(0, colon); } int main() { auto scheme = get_scheme_from_url(get_url()); std::cout << scheme << "\n"; return 0; } string_view has an implicit constructor from const string& (see "basic_string_view(const basic_string<CharT, Traits, Allocator>& str) noexcept;" in https://isocpp.org/files/papers/N3762.html). The function get_url() returns an rvalue, which in turn gets bound to a reference to const and implicitly passed to string_view's constructor. The obtained view refers to a dead string. Andrei Arguably, the issue is not const ref binding to an rvalue itself, but rather taking (and *holding*) the address of a parameter that is passed by const ref. If you want to *hold* that reference, it should be explicitly passed by pointer. That and having the whole thing neatly packaged in an implicit constructor. If you are doing something that dangerous, at the very least, make it explicit. I mean, the example might as well just be: std::string_view get_scheme() { std::string myString = get_url(); return myString; //Boom } Exact same undefined result, without binding to rvalues. I preferred your smoking gun of: const int& a = max(1, 2); But again, part of the issue here is the passing of references. If we made "auto ref" mean "pass either an existing object, or bind to an rvalue (at call site, not via template overload)" and in the implementation, made the passed in argument "considered a local variable as if passed by value; you may not escape it", then I'm pretty sure we can have our cake and eat it. Proper escape analysis would help too.
Re: Setting array length to 0 discards reserved allocation?
On Tuesday, 16 September 2014 at 01:32:55 UTC, deadalnix wrote: We already are using slices. We are creating the confusion by pretending there are 2 different concepts when there is only one. IMO, the term "slice" is not necessarily different, but rather, a *refinement* of the "dynamic array" term. Without changing any formal definition, just use the term slice whenever you can, and 95% of the ambiguity goes away.
Re: Escaping the Tyranny of the GC: std.rcstring, first blood
On Monday, 15 September 2014 at 13:15:28 UTC, Marc Schütz wrote: - Does not provide Forward range iteration that I can find. This makes it unusable for algorithms: find (myRCString, "hello"); //Nope Also, adding "save" to make it forward might not be a good idea, since it would also mean it becomes an RA range (which it isn't). No, RA is not implied by forward. Right, but RCString already has the RA primitives (and hasLength), it's only missing ForwardRange traits to *also* become RandomAccess.
Re: Escaping the Tyranny of the GC: std.rcstring, first blood
On Monday, 15 September 2014 at 02:26:19 UTC, Andrei Alexandrescu wrote: So, please fire away. I'd appreciate it if you used RCString in lieu of string and note the differences. The closer we get to parity in semantics, the better. Thanks, Andrei ***Blocker thoughts*** (unless I'm misunderstood) - Does not provide Forward range iteration that I can find. This makes it unusable for algorithms: find (myRCString, "hello"); //Nope Also, adding "save" to make it forward might not be a good idea, since it would also mean it becomes an RA range (which it isn't). - Does not provide any way to (even "unsafely") extract a raw array. Makes it difficult to interface with existing functions. It would also be important for "RCString aware" functions to be properly optimized (eg memchr for searching etc...) - No way to "GC-dup" the RCString. giving "dup"/"idup" members on RCString, for when you really just need to revert to pure un-collected GC. Did I miss something? It seems actually *doing* something with an RCString is really difficult. ***Random implementation thought:*** "size_t maxSmall = 23" is (IMO) gratuitous: It can only lead to non-optimization and binary bloat. We'd end up having incompatible RCStrings, which is bad. At the very least, I'd say make it a parameter *after* the "realloc" function (as arguably, maxSmall depends on the allocation scheme, and not the other way around). In particular, it seems RCBuffer does not depend on maxSmall, so it might be possible to move that out of RCXString. ***Extra thoughts*** There have been requests for non auto-decoding strings. Maybe this would be a good opportunity for "RCXUString" ?
Re: C++ interop - what to do about long and unsigned long?
On Sunday, 14 September 2014 at 11:52:29 UTC, David Nadlinger wrote: On Sunday, 14 September 2014 at 04:26:30 UTC, Andrei Alexandrescu wrote: On 9/13/14, 8:43 PM, Manu via Digitalmars-d wrote: I agree. It should be the default in all cases, as unanimously agreed by this community, and overruled by Andrei post-implementation. There was no unanimity. -- Andrei Yep, there most definitely wasn't. David Correct me if I'm wrong, but I remember that there was unanimity that it *was* a better design decision. Changing the existing behavior, however, was what we were not unanimous about.
Re: rvalues->ref args
On Friday, 12 September 2014 at 16:48:28 UTC, Dmitry Olshansky wrote: 08-Sep-2014 16:46, Manu via Digitalmars-d wrote: Please can we move on a solution to this problem? It's driving me insane. I can't take any more of this! >_< Walter invented a solution that was very popular at dconf2013. I don't recall any problems emerging in post-NG-discussions. Ideally, we would move forward on a design for 'scope', like the promising (imo) proposal that appeared recently. That would solve this problem, and also many other existing safety problems, and even influence solutions relating to other critical GC/performance problems. IMO just legalize auto ref for normal functions and you are all set. The semantics end up being pretty much the same as C++'s const& (not duplicating the function, like the current template-style auto ref). Yeah, the whole function duplication thing is pretty bad. Auto ref should just create a wrapper that forwards, and the implementation should always operate on references. With this approach in mind, auto ref for functions should be trivial, in the sense that the function is compiled normally as a non-template that takes refs, and the compiler only generates code/templates to capture rvalues.
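A minimal sketch of the wrapper idea (illustrative only, not how the compiler would actually lower it): the real body is compiled once as a non-template taking ref, and a thin auto ref shim merely materializes rvalues into temporaries before forwarding:

```d
// The implementation exists exactly once and always operates on a reference.
private int doubleItImpl(ref int a) { a *= 2; return a; }

// Thin auto ref wrapper: only this shim is templated.
int doubleIt()(auto ref int a)
{
    static if (__traits(isRef, a))
        return doubleItImpl(a);  // lvalue: pass straight through
    else
    {
        auto tmp = a;            // rvalue: materialize a temporary lvalue
        return doubleItImpl(tmp);
    }
}

void main()
{
    int x = 3;
    assert(doubleIt(x) == 6 && x == 6); // lvalue is modified in place
    assert(doubleIt(5) == 10);          // rvalue is accepted too
}
```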
Re: @nogc and exceptions
On Friday, 12 September 2014 at 03:37:10 UTC, Jakob Ovrum wrote: 2) The above really shows how beneficial dynamic memory allocation is for exceptions. A possibility would be to allocate exceptions on a non-GC heap, like the C heap (malloc) or a thread-local heap. Of course, without further amendments the onus is then on the catch-site to explicitly manage memory, which would silently break virtually all exception-handling code really badly. However, if we assume that most catch-sites *don't* escape references to exceptions from the caught chain, we could gracefully work around this with minimal and benevolent breakage: amend the compiler to implicitly insert a cleanup call at the end of each catch-block. The cleanup function would destroy and free the whole chain, but only if a flag indicates that the exception was allocated with this standard heap mechanism. Chains of exceptions with mixed allocation origin would have to be dealt with in some manner. If inside the catch-block, the chain is rethrown or sent in flight by a further exception, the cleanup call would simply not be reached and deferred to the next catch-site, and so on. Escaping references to caught exceptions would be undefined behaviour. To statically enforce this doesn't happen, exception references declared in catch-blocks could be made implicitly `scope`. This depends on `scope` actually working reasonably well. This would be the only breaking change for user code, and the fix is simply making a copy of the escaped exception. Anyway, I'm wondering what thoughts you guys have on this nascent but vitally important issue. What do we do about this? I think option "b)" is the right direction. However, I don't think it is reasonable to have the "catch" code be responsible for the cleanup proper, as that would lead to a closed design (limited allocation possibilities). 
I like the option of having "exception allocators" that can later be explicitly called in a "release all exceptions" style, or plugged into the GC, to be cleaned up automatically like any other GC allocated exception. This would make the exceptions themselves still @nogc, but the GC would have a hook to (potentially) collect them. For those that don't want that, they can make calls to the cleanup at deterministic times. This, combined with the fact that we'd use an (unshared) allocator, means the cleanup itself would be O(1). Finally, if somebody *does* want to keep exceptions around, he would still be free to do so *provided* he re-allocates the exceptions himself using a memory scheme he chooses to use (a simple GC new, for example). ... well, either that, or have each exception carry a callback to its allocator, so that catch can do the cleanup, regardless of who did the allocation, and how. GC exceptions would have no callback, meaning a "catch" would still be @nogc. Existing code that escapes exceptions would not immediately break. Either way, some sort of custom (no-gc) allocator seems in order here.
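A rough sketch of the "callback to its allocator" variant (all names here are hypothetical): the exception carries an optional function that knows how to free it, and the catch site invokes it if present. GC-allocated exceptions would carry null, so nothing changes for them:

```d
import core.lifetime : emplace;
import core.stdc.stdlib : free, malloc;

class CException : Exception
{
    // null for GC-allocated exceptions; set by custom allocators.
    void function(CException) deallocate;
    this(string msg) { super(msg); }
}

// Hypothetical C-heap allocator for exceptions.
CException makeCException(string msg)
{
    enum size = __traits(classInstanceSize, CException);
    auto e = emplace!CException(malloc(size)[0 .. size], msg);
    e.deallocate = function(CException ex) {
        destroy(ex);          // run the finalizer...
        free(cast(void*) ex); // ...then release the C heap block
    };
    return e;
}

void main()
{
    try
        throw makeCException("oops");
    catch (CException e)
    {
        assert(e.msg == "oops");
        // The compiler (or user) would insert this at catch-block end:
        if (e.deallocate !is null)
            e.deallocate(e);
    }
}
```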
Re: C++ interop - what to do about long and unsigned long?
On Wednesday, 10 September 2014 at 20:41:45 UTC, Walter Bright wrote: C++'s long and unsigned long can be accessed with c_long and c_ulong. Unfortunately, these are aliases and mangle to their underlying types. Meaning that there is no way to interface to a C++ function declared as: void foo(unsigned long); So, what to do about this? 1. elevate c_long and c_ulong into full fledged types. 2. create full fledged types __c_long and __c_ulong, and alias c_long and c_ulong to them. Having actual new types makes me a bit nervous in regards to how overloads will be handled, and how templates will be instantiated. I don't know how things happen behind the scenes, but (as I remember), writing "portable" C++ code to handle all possible integral primitives was a real hell, as on some systems, "foo(int)/foo(long)" would collide, whereas on others, you *need* both declared. Will this start happening in D too now...?
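To illustrate the overload concern with option 2 (a sketch only; `CLongish` is a made-up name, not the real `__c_long`): a distinct struct with `alias this` overloads, and would mangle, separately from `long`, while still converting implicitly for existing code:

```d
// Made-up stand-in for a full-fledged __c_long type.
struct CLongish
{
    long value;
    alias value this; // implicit conversion out to long
}

string which(long x)     { return "long"; }
string which(CLongish x) { return "CLongish"; }

void main()
{
    CLongish cl = { 42 };
    assert(which(cl) == "CLongish"); // exact match beats the alias-this conversion
    assert(which(123L) == "long");   // both overloads can coexist
    long plain = cl;                 // still usable wherever a long is expected
    assert(plain == 42);
}
```

This is exactly the "foo(int)/foo(long)" situation from C++: both overloads can now be declared side by side, so the question becomes which one existing calls resolve to.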
Re: @nogc, exceptions, generic containers... Issues.
On Tuesday, 9 September 2014 at 07:05:22 UTC, bearophile wrote: Meta: let ratio = try!(div(x, y)); becomes let ratio = match div(x, y) { Ok(val) => val, Err(msg) => { return Err; } } Maybe we need a similar solution for @nogc. Related: https://d.puremagic.com/issues/show_bug.cgi?id=6840 Bye, bearophile These are interesting, but they revolve more around avoiding the Exception altogether, rather than finding a solution to using exceptions in @nogc. In particular, these are pretty imperative solutions which (afaik) wouldn't work with constructors/destructors.
@nogc, exceptions, generic containers... Issues.
I'm starting this thread related to two issues I'm encountering in regards to avoiding the GC, and the new @nogc attribute. 1) Issue 1) The first issue is in regards to Throwables. The issue here is that they are allocated using the GC, so it is currently almost impossible to throw an exception in a @nogc context. This is (I think) a serious limitation. Do we have any plans, ideas, on how to solve this? A particularly relevant example of this issue is `RefCounted`: This struct uses malloc to ref count an object, give it a deterministic life cycle, and avoid the GC. Yet, since malloc can fail, it does this: _store = cast(Impl*) enforce(malloc(Impl.sizeof)); Can you see the issue? This object, which specifically avoids using the GC, ends up NOT being @nogc. Any idea how to approach this problem? I know there are "workarounds", such as static pre-allocation, but that also comes with its own set of problems. Maybe we could change it to say it's not legal to "hold on" to exceptions for longer than they are being thrown? Then, we could create the exceptions via allocators, which could deterministically delete them at specific points in time (or by the GC, if it is still running)? Just a crazy idea... 2) Issue 2) The second issue is that data which is placed in non-GC memory *may* still need to be scanned, if it holds pointers. You can check for this with hasIndirections!T. This is what RefCounted and Array currently do. This is usually smart. There's a catch though. If the object you are storing happens to hold pointers, but NOT to GC data, it is still scanned. A tell-tale example of this problem is Array!int. You'd think it's @nogc, right? The issue is that Array has a "Payload" object, into which you place malloc'ed memory for your ints. The Payload itself is placed in a RefCounted object. See where this is going? Even though we *know* the Payload is malloc'ed, and references malloc'ed data, it is still added to the GC's ranges of scanned data.
Even *if* we solved issue 1, then Array!int would still *not* be @nogc, even though it absolutely does not use the GC. Just the same, an Array!(RefCounted!int) would also be scanned by the GC, because RefCounted holds pointers... A *possible solution* to this problem would be to add an extra parameter to these templates called "ScanGC", which would be initialized to "hasIndirections!T". EG: struct Array(T, bool ScanGC = hasIndirections!T) Does this seem like a good idea? I don't really see any other way around this if we want generic code with manual memory management that is "GC friendly" yet still usable in a @nogc context.
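A sketch of how the proposed ScanGC parameter could be consumed internally (names and layout hypothetical, not Phobos' actual Payload code): the flag simply gates whether the malloc'ed block is registered as a GC scanning range:

```d
import core.memory : GC;
import core.stdc.stdlib : free, malloc;
import std.traits : hasIndirections;

struct RawArray(T, bool ScanGC = hasIndirections!T)
{
    private T* ptr;
    private size_t len;

    void allocate(size_t n)
    {
        ptr = cast(T*) malloc(n * T.sizeof);
        len = n;
        static if (ScanGC)
            GC.addRange(ptr, n * T.sizeof); // T may point into the GC heap
    }

    void deallocate()
    {
        static if (ScanGC)
            GC.removeRange(ptr);
        free(ptr);
        ptr = null;
        len = 0;
    }
}

void main()
{
    RawArray!int a;           // hasIndirections!int is false: never scanned
    a.allocate(8);
    a.deallocate();

    RawArray!(int*, false) b; // caller vouches the pointers are not GC-owned
    b.allocate(8);
    b.deallocate();
}
```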
Re: What criteria do you take
On Saturday, 6 September 2014 at 02:30:50 UTC, Cassio Butrico wrote: What criteria do you take into consideration for the choice of a programming language. and why? does not mention what language would be, but what criteria led them to choose. In the words of Marshall Cline, "programming language selection is dominated by business considerations, not by technical considerations." http://earth.uni-muenster.de/~joergs/doc/cppfaq/big-picture.html#[6.4] So I'd just choose whatever I *know* that I *could* use to solve a problem. If it works, perfect.
Re: kill the commas! (phobos code cleanup)
On Saturday, 6 September 2014 at 10:20:23 UTC, ketmar via Digitalmars-d wrote: On Sat, 6 Sep 2014 11:48:53 +0200 Marco Leise via Digitalmars-d wrote: You are a relic :) sure i am! ;-) i'm just waiting for 32-bit bytes. and while bytes are 8-bit, i'll use ebcdic^w one of available one-byte encodings. ;-) That sounds so much better than UTF-32. btw: are there fonts that can display all unicode? i doubt it (ok, maybe one). Fonts are encoding agnostic, so your point is irrelevant. so we designed the thing that can't really use. ;-) We can and do: unicode is the only thing that can process text coming from any client on earth, without choking on any character. This is all done without the need for font display, which is the burden of the final client, and their respective local needs.
Re: [OT] If programming languages were weapons
On Saturday, 6 September 2014 at 02:51:53 UTC, Meta wrote: On Tuesday, 2 September 2014 at 21:45:42 UTC, Brian Schott wrote: On Tuesday, 2 September 2014 at 08:29:25 UTC, Iain Buclaw wrote: In normal fashion, it's missing an entry for D. http://bjorn.tipling.com/if-programming-languages-were-weapons I'll let your imaginations do the work. Iain. https://www.youtube.com/watch?feature=player_detailpage&v=4M-0LFBP9AU#t=3670 http://imgur.com/BAiJKUS Not quite accurate. D is a samurai sword with a machine-gun attached that also has one of these bad boys on the other end: http://www.wimp.com/militaryshovel/ The problem is that it's many tools in one, some of which don't work as well as they could together, and make it a complicated tool to master (but extremely versatile when you do). It comes with an instruction manual that tells you the function of each different piece, but not how different pieces can be used together to make things easier. I'd say it's a prototype laser blaster that has been proven to work. However, it isn't quite ready for mass deployment amongst the troops, or acceptance with the brass.
Re: Fixing C-style alias declarations.
On Wednesday, 20 August 2014 at 00:17:06 UTC, ketmar via Digitalmars-d wrote: On Wed, 20 Aug 2014 00:05:04 + Brian Schott via Digitalmars-d wrote: i think that it's time to kill both c-like array declarations and old-style aliases w/o '=' (ok, let 'alias this' live for now). this will solve all problems. Good luck with that. C-style will remain if only to make it easy to port C code. Old style alias stays, because new style is what, 1 year old? Are you seriously suggesting we break *any* D code that is older than that? I'm sure Walter and Andrei will be REAL receptive to your suggestions.
Re: String to binary conversion
On Thursday, 4 September 2014 at 14:43:03 UTC, HeiHon wrote: Can we move digitalmars.D.learn to the top of the forums at forum.dlang.org? I would even put it in its separate area on that page with larger font, etc. :) +1 Given that it seems quite common for newbies to not ask their first questions in D.learn. Not *that* common. Heck, I find it's rare compared to a lot of other forums. I think the current setup is fine.
Re: [OT] GitHub now supports viewing diffs in split mode
On Wednesday, 3 September 2014 at 20:50:45 UTC, David Nadlinger wrote: https://github.com/blog/1884-introducing-split-diffs Now you don't need to fetch the PR or resort to hacks like Octosplit in order to view pull request diffs in side-by-side mode. Greatest news ever!
Re: Encapsulating trust
On Tuesday, 2 September 2014 at 17:20:06 UTC, Daniel Murphy wrote: This is Wrong! Any function that uses these wrappers is abusing @trusted. eg: import stdx.trusted; int* func(int x) @safe { return addrOf(x); } This function is @safe, but happily returns an invalid pointer. This is possible because addrOf violates the requirement that @trusted functions must be completely @safe to call from an @safe function. That's a good point. Having syntax (or a wrapper function) to do the second wrapping automatically would violate @safe. If it was syntax, it would be banned in @safe. If it's a wrapping method like the proposed 'call', then it is a program error for it to be marked @trusted. Good points too. A very logical conclusion.
Re: Encapsulating trust
On Tuesday, 2 September 2014 at 14:33:53 UTC, Dmitry Olshansky wrote: 31-Aug-2014 17:47, Dmitry Olshansky wrote: Quite recently a lot of work has been done to make most of Phobos usable in @safe code. ... What do you guys think? Probably a lot of people missed the point that if we standardize a few idioms (dangerous but at least centralized) we at least can conveniently contain the "abuse" of @trusted to the select standard module. Else it *will* be abused in a multitude of ways anyway. I think it's probably hard to appreciate where you are coming from, until you've reviewed code for things such as Appender and/or emplace. I swear there was a point where roughly 25% of the lines of code in that thing were wrapped in a trusted lambda. One issue I find with your proposal is that (personally), I've seldom had to *call* unsafe functions in a trusted fashion, but rather had to do unsafe *things*: if (capacity > slice.length) slice = @trusted(){return slice.ptr[0 .. slice.length + 1];}(); In such a context, "call!" wouldn't help much. That said, there are also plenty of cases where we call memcpy (just grep "trustedMemcpy" in phobos), where your proposal would help. Also: There's already a helper "addressOf" somewhere in phobos. It's meant mostly to take the address of property return values. Instead of providing "addressOf" in std.trusted, you could simply do a "call!" of the not-trusted generic "addressOf". Just a thought.
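For reference, the "trustedMemcpy" pattern mentioned is roughly this (a sketch, not the exact Phobos helper): the unsafe call is confined to one small function whose length check is what justifies the @trusted label:

```d
import core.stdc.string : memcpy;

// The @trusted surface is tiny, and the precondition makes it safe to call.
void trustedMemcpy(T)(T[] dst, const(T)[] src) @trusted
{
    assert(dst.length >= src.length, "destination too small");
    memcpy(dst.ptr, src.ptr, src.length * T.sizeof);
}

void main() @safe
{
    int[4] a = [1, 2, 3, 4];
    int[4] b;
    trustedMemcpy(b[], a[]);
    assert(b == a);
}
```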
Re: [OT] If programming languages were weapons
On Tuesday, 2 September 2014 at 09:46:54 UTC, Chris wrote: On Tuesday, 2 September 2014 at 08:29:25 UTC, Iain Buclaw wrote: In normal fashion, it's missing an entry for D. http://bjorn.tipling.com/if-programming-languages-were-weapons I'll let your imaginations do the work. Iain. In a way it's good that it's missing an entry for D, because he ain't got nothing good to say about any of the languages listed, except maybe for C. I like the C++ and the JS entries, the thing about Rust is also quite good. The PHP one is cold.
Re: Encapsulating trust
On Monday, 1 September 2014 at 17:59:07 UTC, Dicebot wrote: On Monday, 1 September 2014 at 17:48:59 UTC, monarch_dodra wrote: It feels like you are missing the point of the @trusted lambda construct, in that it is meant to be used in generic code, where you know a *piece* of a function is provably safe (eg: @trusted), but not all of it: The rest of the code depends on the inferred attributes of the parameter-dependent code. If your function is not generic, then just mark it as @trusted, and then that's that. I totally disagree. Marking a whole function @trusted (unless those are extern(C)) is an abomination we should try to get rid of. A trusted lambda must encapsulate the minimal amount of code possible, together with all data validation if necessary. Anything else simply does not scale with maintenance and is likely to introduce holes in @safe. I meant it mostly in that the proposal to mark the entire function as @trusted isn't even *applicable* to template functions. I agree with you.
Re: Encapsulating trust
On Monday, 1 September 2014 at 16:36:04 UTC, Daniel Murphy wrote: I don't think this is a good idea. Each time @trusted is used, it should be on a function that is completely @safe to call. I think this is worth more than the cost in verbosity. Lambdas and nested functions are special in that they can't be called from other code, so they only have to be @safe in the context of the enclosing function. They do still need to make sure they don't violate @safe, otherwise the entire enclosing function will need to be manually checked. eg void fun(int a) @safe { ... p = @trusted () { return &a; } ... } This function is now essentially @trusted, because although the unsafe '&' operation was inside the trusted block, the @safe function now has a pointer it should not have been able to get. It feels like you are missing the point of the @trusted lambda construct, in that it is meant to be used in generic code, where you know a *piece* of a function is provably safe (eg: @trusted), but not all of it: The rest of the code depends on the inferred attributes of the parameter-dependent code. If your function is not generic, then just mark it as @trusted, and then that's that. Another alternative I had proposed was one of being able to simply create blocks with attributes. EG: void foo(T)(T t) { t.doSomething(); //May or may not be safe. nothrow { ... //Do "critical" code that can't throw here. } @trusted { ... //This slice of code is trusted. } @safe @nogc { ... //Have the compiler enforce only @safe and @nogc code goes here. } return t; }
Re: [OT] Microsoft filled patent applications for scoped and immutable types
On Monday, 1 September 2014 at 05:56:33 UTC, Walter Bright wrote: What I don't intend to do is patent D's innovations. What D has done is our gift to the programming community. I'm also glad we're using github, as it is a fine way to document and timestamp the provenance of D's features. Isn't there some way to "open source" a patent? Or at least, make some sort of formal publication that this was invented, and may not be patented by someone else? Just because you don't want to "lock down" your inventions, doesn't mean they are free to take... Then again, it takes a certain kind of corporate greed to try to put a patent on things we'd have never thought of as "inventions". Did we patent UFCS yet? It's an invention. How about CTFE? That seems like a *huge* invention? What about generic tuples? No language I know of uses these. Static if? Let's patent that too while we're at it.
Re: Encapsulating trust
On Monday, 1 September 2014 at 07:13:47 UTC, Dmitry Olshansky wrote: auto ref return FTW I thought you had avoided that on purpose, in the sense that generic auto-ref input *and* output have been proven unsafe, in the sense that there are tons of ways for the compiler to accidentally return a ref to something that is actually local. unaryFun!"a[0]" or unaryFun!"a.field" is a perfect example of that. Or, say: int tube(int a){return a;} ref int tube(ref int a){return a;} call!tube(5); //Here - fails perfect forwarding of lvalues into rvalues. ? void inc(ref int a) { a += 1; } int b; call!inc(b); assert(b == 1); Works fine. Yes, but: call!inc(5) will *also* succeed. But the background on this issue is that certain functions, such as "emplace", could elide postblit entirely when asked to emplace from an rvalue. The issue is that such functions that use auto-ref have a tendency to lose that information. We could use "std.algorithm.forward", but that function is currently too expensive. That said, there are no real functions that would exploit this "perfect forwarding" anyway. I filed: https://issues.dlang.org/show_bug.cgi?id=12683 https://issues.dlang.org/show_bug.cgi?id=12684 After Andrei filed: https://issues.dlang.org/show_bug.cgi?id=12628 Which would be the first steps to really start making more efficient use of rvalues in D.
Re: Encapsulating trust
On Sunday, 31 August 2014 at 13:47:42 UTC, Dmitry Olshansky wrote: What do you guys think? I'd say add "trusted" to those function names: "trustedCall" "trustedAddrOf" Because: - "call" could mean a lot of things. It's not immediately obvious that it is meant to be trusted. - "addrOf" is *also* used as an alternative for "&", especially in generic code, when you need the address of an attribute, as that attribute could actually be a property function. EG: auto p = addrOf(r.front); Nothing here implies trust. Also, implementation wise, wouldn't it be possible to instead make `call` a template that aliases itself away to `fun`, but with different attributes? The `auto fun(Args)(auto ref Args args)` tube approach has the disadvantages that it: - prevents return by ref - fails perfect forwarding of lvalues into rvalues.
Re: code cleanup in druntime and phobos
On Sunday, 31 August 2014 at 15:33:45 UTC, Iain Buclaw via Digitalmars-d wrote: The only change I have noticed as being part of github is a steady stream of monthly emails and phone calls (voice messages, I never answer them), be it universities conducting a study, or recruiters looking to interview me because they came across my profile. Sometimes its annoying, but reluctantly accepted as one of the perks of being on a social site. Iain. I've never gotten calls (I didn't give my number). I have been asked to participate in 1 or 2 studies though. I've also been contacted by a recruiter, but it got me an awesome sweet new job, so that's a perk, arguably. I was also later contacted by another recruiter for the same company. So that was kind of ego boosting.
Re: code cleanup in druntime and phobos
On Saturday, 30 August 2014 at 14:35:52 UTC, Ola Fosheim Grøstad wrote: On Saturday, 30 August 2014 at 14:32:01 UTC, Daniel Murphy wrote: Using github is similar to our requirement to match the code style when submitting patches. It's non-negotiable, because there's no good reason not to do it. You just remove those tabs, then get on with it. Here is a good reason: «I have no interest in learning github, and I personally don't care if you accept this patch, but here you have it in case you want to improve your system». Here is another good reason: «Figuring out the D process is way down on my todo list, maybe sometime next month, next year, next…» I'm fine with people submitting patches in bugzilla, but they need to realize it's not the procedure. So it's "welcome help", but there's still the actual work that needs to be done by someone else: Not only the pull, but the review, sticking with the review, etc... I can also appreciate that filing a bug is work in itself. Doing that is already a step most people don't take. We just need to meet halfway, and not bitch about it: Both sides have or will provide work, and need to realize that about the other.
Re: [OT] Microsoft filled patent applications for scoped and immutable types
On Saturday, 30 August 2014 at 09:00:24 UTC, Jérôme M. Berger wrote: Timon Gehr wrote: On 08/28/2014 11:53 AM, "Jérôme M. Berger" wrote: ... I should have said that in D it is used when declaring an instance (i.e. at the place of the instance declaration) whereas in the patent it is used when declaring the type. For a patent lawyer, this will be enough to say that the patent is new. ... This works as expected: immutable class C{ // ... } Then we should be ok, assuming we can prove it already worked a year and a half ago. Jerome Who said anything about it having to work?
Re: Destroying structs (literally)
On Friday, 29 August 2014 at 18:12:04 UTC, Orvid King wrote: On 8/29/2014 12:41 AM, monarch_dodra wrote: Questions: - Can and will this work for arrays of structs? - When doing manual GC allocations (for whatever reason), how can we later tell the GC what destructor to call? Yes, this does work for arrays of structs. Provided that you've passed in the type info for the struct when doing the manual allocation, it should call the destructor without anything extra needing to be done on the user's part. Hum... by "manual" memory allocation, I meant this: GC.qalloc(newlen * T.sizeof, blockAttribute!T); That's what Appender does. Unless I'm mistaken, the Type info is not passed here? Furthermore, the Type info *can't* be passed...? > These questions combined are really aimed at Appender: I'm curious at if > and how any changes will have to be made to it. Appender already uses the type info's destroy, so it shouldn't have any issues, as I already had to account for that. This is news to me. Appender does not destroy anything. It makes no call to delete/destroy/release or whatnot. It simply just keeps allocating away. > Also question: Will this play nice with existing code that manually > destroys GC allocated structs? As I mentioned elsewhere, as long as the existing code is calling destroy, and not calling the finalizer directly, then yes, it will play nice, and only call the finalizer once. OK. Nice. Thanks. BTW: If Appender is "broken", that's OK (in the sense that it won't be any more "broken" than before). I just want as much information as possible about what I (we) will need to do to update it. In particular, Appender has an optimization that skips postblit to "relocate" when possible. If destructions start happening, then we'll need to make sure we first reset to "T.init", or we'll risk destroying a non-postblitted copy (which could be catastrophic in the case of RC'ed structs).
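The "reset to T.init" concern at the end can be made concrete with a small sketch (a hypothetical RC struct, not Appender's real code): after a raw, postblit-skipping relocation, the stale copy must be wiped bitwise, or its destructor releases the count a second time:

```d
import core.stdc.string : memcpy;

struct RC
{
    int* count;
    this(this) { if (count) ++*count; }
    ~this()    { if (count) --*count; }
}

void main()
{
    int n = 1; // one live reference
    {
        auto a = RC(&n);
        RC b = void;
        memcpy(&b, &a, RC.sizeof); // "relocate" without running postblit
        // Wipe the stale copy bitwise; a normal assignment would run a's dtor.
        RC blank;
        memcpy(&a, &blank, RC.sizeof);
    } // only b's destructor releases the count now
    assert(n == 0); // without the reset, both dtors would run and n would be -1
}
```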
Re: Destroying structs (literally)
On Friday, 29 August 2014 at 09:08:07 UTC, Andrej Mitrovic via Digitalmars-d wrote: On 8/29/14, ponce via Digitalmars-d wrote: On Friday, 29 August 2014 at 02:21:07 UTC, Andrei Alexandrescu wrote: Dear community, are you ready for this? Yes! Whatever needs be done. Yeah destructors are a sore pain when they're unreliable. "May or may not be called" is just an awful semantic. That won't really change though, will it? AFAIK, it'll become: "will eventually be called at some unspecified point in time. The program may terminate before that happens, at which point, the destructor will never be called." ...right?
Re: Destroying structs (literally)
On Friday, 29 August 2014 at 02:38:54 UTC, H. S. Teoh via Digitalmars-d wrote: Maybe a more relevant question might be, is there any existing code that *isn't* broken by structs not being destructed? (D-structed, har har.) Well, this new change *could* greatly increase the amount of "allocation during destruction" errors we are getting. I've seen a fair share of these in learn, where a class destructor allocates. Structs will now also be more vulnerable to this problem too. I wouldn't be surprised if this pull instantaneously introduced a fair amount of breakage in client code.
Re: Destroying structs (literally)
On Friday, 29 August 2014 at 02:21:07 UTC, Andrei Alexandrescu wrote: Dear community, are you ready for this? https://issues.dlang.org/show_bug.cgi?id=2834 https://github.com/D-Programming-Language/druntime/pull/864 We must do it, and the way I see it the earlier the better. Shall we do it in 2.067? This is a significant change of behavior. Should we provide a temporary flag or attribute to disable it? Thanks, Andrei Questions: - Can and will this work for arrays of structs? - When doing manual GC allocations (for whatever reason), how can we later tell the GC what destructor to call? These questions combined are really aimed at Appender: I'm curious at if and how any changes will have to be made to it. Also question: Will this play nice with existing code that manually destroys GC allocated structs?
Re: [OT] Microsoft filled patent applications for scoped and immutable types
On Wednesday, 27 August 2014 at 09:20:49 UTC, Théo Bueno wrote: On Wednesday, 27 August 2014 at 09:08:24 UTC, Chris wrote: Of course, the whole lot of them! I only wonder who they're trying to attack here? It must be some sort of strategy to put someone they deem dangerous off his stride. Probably the open source community and / or a competitor. I don't know the laws in the US and don't know how serious this is. It probably can't just be ignored. Is there some other big company they're trying to get at with this? Maybe they're preparing a counter strike. Yeah, IMO these patents can't be a coincidence. Big companies file patents. All of them do. That's just the way it is. I wouldn't see anything more to it than that. It's not some conspiracy or corporate war. That's the way the game is played. We just need to make sure we don't become the losers here. It would help to have input from Walter here though: It's his language, and, AFAIK, he also happens to be savvy with this kind of stuff.
Re: Relaxing the definition of isSomeString and isNarrowString
On Monday, 25 August 2014 at 22:04:34 UTC, monarch_dodra wrote: I'll create a pull for it. https://github.com/D-Programming-Language/phobos/pull/2464
Re: Relaxing the definition of isSomeString and isNarrowString
On Monday, 25 August 2014 at 21:12:49 UTC, Andrei Alexandrescu wrote: On 8/25/14, 1:35 PM, monarch_dodra wrote: One issue this proposal seems to forget (and it's a problem that transcends D), is that the GC does not finalize structs. It will. Andrei Awesome.
Re: Relaxing the definition of isSomeString and isNarrowString
On Monday, 25 August 2014 at 21:11:52 UTC, Andrei Alexandrescu wrote: That escalated quickly. Chillax. Sorry. It's just that I've been seeing this claim way too frequently, especially in learn, where there have been too many threads about trying to avoid decoding. Most often, for the wrong reasons. The use of that enum array is fine but somewhat fragile - a small refactoring may trigger an allocation. Declaring it as a simple 4-member TypeTuple would eliminate the (potential) issue, as a TT *can't* be runtime indexed. Seems like a good compromise. Trying a static immutable array may be in order. That would actually create an object, and potentially prevent optimizations (I think, maybe). The TT seems cleaner to me anyways. I'll create a pull for it.
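The fragility being discussed can be sketched like this (a minimal illustration; `AliasSeq` is the modern name for `TypeTuple`, and the values are made up):

```d
import std.meta : AliasSeq;

// An enum array literal: every *runtime* use of it materializes a fresh
// copy of the literal, which for a dynamic array means a GC allocation.
// Compile-time indexing is safe, but a small refactoring to runtime
// indexing silently starts allocating.
enum int[] table = [10, 20, 30, 40];

// An AliasSeq (TypeTuple) can only be indexed with a compile-time value,
// so the fragile case is rejected outright by the compiler.
alias ctTable = AliasSeq!(10, 20, 30, 40);

void demo(size_t i)
{
    enum a = table[1];      // CT indexing: folded at compile time, no allocation
    auto b = table[i];      // runtime indexing: allocates a copy of the literal
    enum c = ctTable[1];    // fine
    // auto d = ctTable[i]; // error: a TT *can't* be runtime indexed
}
```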
Re: Relaxing the definition of isSomeString and isNarrowString
On Sunday, 24 August 2014 at 01:06:31 UTC, Andrei Alexandrescu wrote: Currently char[], wchar[], dchar[] and qualified variants fulfill the requirements of isSomeString. Also, char[], wchar[] and qualified variants fulfill the requirements of isNarrowString. Various algorithms in Phobos test for these traits to optimize away UTF decoding where unnecessary. I'm thinking of relaxing the definitions to all types that fulfill the following requirements: * are random access ranges * element type is some character * offer .ptr as a @system property that offers a pointer to the first character This would allow us to generalize the notion of string and offer optimizations for user-defined, not only built-in, strings. Thoughts? Andrei One issue is that strings are "auto decoded", yet a range of "char" is not. I don't see why ".ptr" (or .length, for that matter) is needed. We could very well have a range of chars that has neither .length nor .ptr, but should still be handled like a sequence of decoded characters. I don't see how your proposal caters to that more general problem: that a UD char range is not auto-decoded (or, more generally, that it will be handled *differently* from a string).
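The distinction can be demonstrated with a hypothetical user-defined char range (a sketch; `CharRange` is made up for illustration):

```d
import std.range : isInputRange;
import std.traits : isSomeString;

// A user-defined range of char: its element type is a character, but it
// exposes neither .ptr nor .length, so none of the string traits
// recognize it, and Phobos treats it differently from a built-in string.
struct CharRange
{
    string data;
    @property bool empty() const { return data.length == 0; }
    @property char front() const { return data[0]; }
    void popFront() { data = data[1 .. $]; }
}

static assert( isSomeString!string);    // built-in strings auto-decode
static assert(!isSomeString!CharRange); // UD char range: handled differently
static assert( isInputRange!CharRange); // yet it is a perfectly good range
```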
Re: Relaxing the definition of isSomeString and isNarrowString
On Sunday, 24 August 2014 at 12:24:03 UTC, Andrei Alexandrescu wrote: To that end I'm working on RCString, an industrial-strength string type that's much like string, just reference counted and with configurable allocation. It's safe, too - user code cannot casually extract references to string internals. By default allocation would use GC's primitives; one thing I learned to appreciate about our GC is its ability to free memory explicitly, which means RCString will free memory deterministically most of the time, yet if you leak some (e.g. by having RCString class members) the GC will pick up the litter. I think reference counting backed up by a GC that picks up litter and cycles is a modern, emergent pattern that D could use to great effect. (Speaking of which: Some, but not all, types in std.container use reference counting. One other great area of improvement would be to guarantee that everything in std.container is reference counted. Containers are the perfect candidate for reference counting - they are typically large enough to make the reference counting overhead negligible by comparison with the typical work on them.) One issue this proposal seems to forget (and it's a problem that transcends D), is that the GC does not finalize structs. Your RC proposal is fine and good for strings, because the individual chars don't have destructors. But unless we migrate *everything* to using RC, we'd still be leaking non-memory resources. For example, "File" is reference counted, and I've seen people time and time again get had, because they use a "File[]". Oops. Imo, this is a big issue. Are there any plans to tackle this problem? Another issue we encounter a lot with reference type objects that do RC is one of initial initialization/allocation. Currently, D does not have default constructors for structs. I understand why. But it makes it painfully difficult to implement run-time initialization with no arguments, while avoiding user errors.
This has been a problem time and time again for objects such as Appender, or T[U], in that aliasing only happens *after* the first operation. Has this been discussed again yet? We have "T.init". Why couldn't we have default construction that can be explicitly skipped with "T a = T.init;"? I realize this derails the conversation a bit, but I think it is related enough to warrant mentioning.
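The File[] pitfall mentioned above looks roughly like this (a sketch; the filenames are made up):

```d
import std.stdio : File;

// File is reference counted, but the GC does not finalize the struct
// elements of a GC-allocated array. When the slice becomes garbage, the
// memory is reclaimed without running each File's destructor, so the
// refcounts are never decremented and the OS file handles leak until
// the process exits.
void leakHandles()
{
    File[] files;
    files ~= File("a.txt", "w"); // hypothetical paths, for illustration
    files ~= File("b.txt", "w");
    // `files` goes out of scope here; no destructor runs per element.
}
```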
Re: Relaxing the definition of isSomeString and isNarrowString
On Sunday, 24 August 2014 at 12:24:03 UTC, Andrei Alexandrescu wrote: Look e.g. at https://github.com/D-Programming-Language/phobos/blob/master/std/utf.d#L1074. That's a memory allocation each and every time decodeImpl is called. I'm not kidding. Take a look at http://goo.gl/p5pl3D vs. http://goo.gl/YL2iFN. It's egregious. Dmitry already replied, but I want to stress the reply. That's complete BS. decodeImpl does *not* allocate. That line of code has been commented on, tested and benched. The enum is only ever used for CT indexing, which will *not* allocate. I'm repeatedly seeing people complain that decoding is slow. Maybe. And that it "allocates". But that's BS. A lot of people have been repeating this false information now, and I find it worrisome that both you and Walter have made this claim now.
Re: Coding style on dlang.org
On Saturday, 23 August 2014 at 14:52:45 UTC, H. S. Teoh via Digitalmars-d wrote: I notice that the coding style used for code examples on dlang.org isn't always consistent, and they generally differ from Phobos examples. Should we adopt Phobos style for all code examples on dlang.org? T Do you have any specific examples? I'd assume the code is mostly old. I'd reject any pull that doesn't adhere to Phobos style, even if it's just ddoc'ed examples.
Re: DUB fails with 2.067 MinType
On Saturday, 23 August 2014 at 11:50:47 UTC, Nordlöw wrote: https://github.com/D-Programming-Language/dub/issues/402 Output is very sparse: DMD64 D Compiler v2.067-devel-1a10637 /opt/dmd/include/d2/std/algorithm.d(7168): Error: template instance std.algorithm.MinType!(uint, uint) recursive expansion Clues anyone? I looked at MinType's code. I see nothing in there that could explain this.
Re: C++'s std::rotate
On Monday, 11 August 2014 at 14:45:09 UTC, Andrei Alexandrescu wrote: On 8/11/14, 2:11 AM, "Nordlöw" wrote: On Monday, 11 August 2014 at 06:56:52 UTC, Dragos Carp wrote: bool sliceOf(T)(in T[] whole, in T[] slice) { return whole.ptr <= slice.ptr && slice.ptr + slice.length <= whole.ptr + whole.length; } Shouldn't the function arguments of sliceOf be reversed to give a more intuitive UFCS, as in if (slice.sliceOf(whole)) { ... } isSliceOf -> yum While "sameHead" and "sameTail" *could* have a "good enough" generic implementation for ranges, there is absolutely no way to make "isSliceOf" or "overlap" work for a generic range. That said, sameHead and sameTail are just the iterator equivalent of "first1 == first2" and "last1 == last2", which is used a lot with iterators. You rarely see operator "<" used with iterators, though, so I have doubts about whether those two functions (isSliceOf and overlap) would actually be of any use.
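For built-in arrays, the slice analogue of those iterator comparisons is a one-liner each (a sketch; Phobos' std.array provides sameHead/sameTail along these lines):

```d
// Two slices share the same head iff they start at the same address -
// the slice equivalent of "first1 == first2" for iterators.
bool sameHead(T)(in T[] a, in T[] b)
{
    return a.ptr == b.ptr;
}

// Two slices share the same tail iff they end at the same address -
// the slice equivalent of "last1 == last2" for iterators.
bool sameTail(T)(in T[] a, in T[] b)
{
    return a.ptr + a.length == b.ptr + b.length;
}
```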