Re: User Defined Attributes
On 2012-11-10 05:02, Walter Bright wrote: Meaning a given attribute can have only one, global, meaning. Isn't that true for any symbol? Can I have two std.stdio.writeln symbols in the same application? -- /Jacob Carlborg
Re: User Defined Attributes
On 10/11/2012 05:02, Walter Bright wrote: On 11/9/2012 6:28 PM, deadalnix wrote: On 08/11/2012 11:56, Walter Bright wrote: On 11/7/2012 11:27 PM, Jacob Carlborg wrote: On 2012-11-08 02:49, Walter Bright wrote: Yes, that makes the attribute global. I don't actually know how this works in Java, but if you are forced to use the fully qualified name for the attribute it won't make the attribute global. A plugin would apply globally, wouldn't it? No, it would apply to symbols qualified with a given attribute (provided by the plugin). Meaning a given attribute can have only one, global, meaning. Yes, it has to. What is the point of attaching an attribute if you cannot know what meaning it has? If an attribute can have an ambiguous meaning, then it defeats the whole point of having attributes.
Re: User Defined Attributes
On 11/10/2012 1:59 AM, Jacob Carlborg wrote: On 2012-11-10 05:02, Walter Bright wrote: Meaning a given attribute can have only one, global, meaning. Isn't that true for any symbol? Can I have two std.stdio.writeln symbols in the same application? Think of it this way. If I have myString.String, and use the strings in it as an attribute in one module, and in another module use myString.String as an attribute with a totally different meaning, that will not work if plugins are used.
Re: User Defined Attributes
On 2012-11-10 20:04, Walter Bright wrote: Think of it this way. If I have myString.String, and use the strings in it as an attribute in one module, and in another module use myString.String as an attribute with a totally different meaning, that will not work if plugins are used. I'm not entirely sure what you mean here, but if you have two symbols with the same name in two different modules their fully qualified names won't be the same. If you're referring to using a string literal as an attribute, then that would be a bad thing, like we have tried to explain. It's better to use a type, which will have a unique name. If I have misunderstood what you mean, could you provide a code example? -- /Jacob Carlborg
Re: User Defined Attributes
On 10/11/2012 20:04, Walter Bright wrote: On 11/10/2012 1:59 AM, Jacob Carlborg wrote: On 2012-11-10 05:02, Walter Bright wrote: Meaning a given attribute can have only one, global, meaning. Isn't that true for any symbol? Can I have two std.stdio.writeln symbols in the same application? Think of it this way. If I have myString.String, and use the strings in it as an attribute in one module, and in another module use myString.String as an attribute with a totally different meaning, that will not work if plugins are used. Thinking of it this way doesn't make any sense. The whole point of an attribute is to tell a library how to understand your code. If many libraries use the same attribute, then it defeats the whole point of having attributes. This is why the discussion about disallowing any type as UDA exists in the first place.
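To make the thread's point concrete, here is a minimal sketch (the attribute and struct names are hypothetical) of the type-as-attribute approach being discussed: the attribute's identity is its fully qualified type name, and lookup happens per symbol at compile time, so nothing applies globally:

```d
import std.stdio;

// Hypothetical attribute type. Its identity is its fully qualified name
// (e.g. mymodule.Serializable), so another library's attribute with the
// same short name cannot collide with it.
struct Serializable {}

@Serializable struct Person { string name; }

void main()
{
    // Purely compile-time lookup, per symbol: nothing "leaks" globally.
    static assert(__traits(getAttributes, Person).length == 1);
    static assert(is(__traits(getAttributes, Person)[0] == Serializable));
    writeln("Person is tagged @Serializable");
}
```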
Re: std.net.curl problem
On Fri, 09 Nov 2012 21:27 -0800, Jonathan M Davis jmdavisp...@gmx.com wrote: On Saturday, November 10, 2012 03:25:37 adi wrote: can anyone please let me know what i am doing wrong It looks like you're not linking against libcurl. You need to explicitly pass a flag to dmd to tell it to link. If you were on Linux, it would be -L-lcurl. It's something similar for Windows, but I don't remember the exact syntax. - Jonathan M Davis No, the undefined symbols are D (mangled) symbols. It's probably this bug: http://d.puremagic.com/issues/show_bug.cgi?id=7561 But that should be fixed in dmd 2.060: https://github.com/D-Programming-Language/phobos/pull/613 @adi which dmd/phobos version did you use to compile your code?
Re: Pyd thread
On Saturday, 10 November 2012 at 06:22:57 UTC, Rob T wrote: I'm also gambling that the dll issue will be resolved in a reasonable amount of time, as the apps I'm building in D will require it. I'll need DLLs that are dynamically linked, and DLLs used as plugins that are dynamically loaded. I'm currently working on Linux almost exclusively, which also has the same problem. Unfortunately, I'm very new to D, so I doubt at this stage I can lend a hand to help solve problems like the dll issue. The best I can do for now is report on compiler bugs that I'm finding. Although I've only tested on Windows, I have a working DLL module example in LuaD [1][2]. The problems with DLLs are not as relevant for modules like these, and I don't imagine Python modules having too different needs. [1] https://github.com/JakobOvrum/LuaD/tree/master/example/dmodule [2] https://github.com/JakobOvrum/LuaD/blob/master/example/bin/module.lua
Re: New language name proposal
:p Many problems: D-lang - not-invented-here! (golang, etc.); DPL - has no sound to it; iirc "dee" was also a proposal in the past -- Marco
Re: std.net.curl problem
On Saturday, 10 November 2012 at 05:34:16 UTC, Jonathan M Davis wrote: On Saturday, November 10, 2012 03:25:37 adi wrote: can anyone please let me know what i am doing wrong It looks like you're not linking against libcurl. You need to explicitly pass a flag to dmd to tell it to link. If you were on Linux, it would be -L-lcurl. It's something similar for Windows, but I don't remember the exact syntax. - Jonathan M Davis Thanks Jonathan for the pointer. If you could help me with the complete solution that would be great, as I have no knowledge of C as such. I am using D version 2.060 and VisualD as the IDE.
Re: std.net.curl problem
On Saturday, 10 November 2012 at 08:41:20 UTC, Johannes Pfau wrote: On Fri, 09 Nov 2012 21:27 -0800, Jonathan M Davis jmdavisp...@gmx.com wrote: On Saturday, November 10, 2012 03:25:37 adi wrote: can anyone please let me know what i am doing wrong It looks like you're not linking against libcurl. You need to explicitly pass a flag to dmd to tell it to link. If you were on Linux, it would be -L-lcurl. It's something similar for Windows, but I don't remember the exact syntax. - Jonathan M Davis No, the undefined symbols are D (mangled) symbols. It's probably this bug: http://d.puremagic.com/issues/show_bug.cgi?id=7561 But that should be fixed in dmd 2.060: https://github.com/D-Programming-Language/phobos/pull/613 @adi which dmd/phobos version did you use to compile your code? Hi Johannes Pfau, I am using dmd version 2.060.
Re: Getting rid of dynamic polymorphism and classes
On Thu, 08 Nov 2012 23:38:53 +0100, Tommi tommitiss...@hotmail.com wrote: On Thursday, 8 November 2012 at 21:43:32 UTC, Max Klyga wrote: Dynamic polymorphism isn't gone anywhere, it was just shifted to delegates. But there's no restrictive type hierarchy that causes unnecessary coupling. Also, compared to virtual functions, there's no overhead from the vtable lookup. Shape doesn't need to search for the correct member function pointer, it already has it. It's either that, or else I've misunderstood how virtual functions work. They work like this: Each object has a pointer to a table of method pointers. When you extend a class, the new method pointers are appended to the list and existing entries are replaced with overrides where you have them. So a virtual method 'draw()' may get slot 3 in that table and at runtime it is not much more than: obj.vftable[3](); These are three pointer dereferences (object, vftable entry 3, method), but no search. -- Marco
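The mechanism described above can be modelled by hand in a few lines of D (a simplified sketch of what the compiler effectively generates for class method calls; real vtables also carry TypeInfo and interface data):

```d
import std.stdio;

// Hand-rolled model of virtual dispatch. The compiler generates the
// equivalent of this for every `class` with virtual methods.
struct VTable { void function(Obj*)[4] slots; }
struct Obj { immutable(VTable)* vtbl; int radius; }

void noop(Obj*) {}
void drawCircle(Obj* self) { writeln("circle of radius ", self.radius); }

// Slot 3 holds draw(), matching the example in the post.
immutable VTable circleVtbl = VTable([&noop, &noop, &noop, &drawCircle]);

void main()
{
    auto c = Obj(&circleVtbl, 5);
    // "obj.vftable[3]()": object -> vtable -> slot -> call. No search,
    // just three pointer dereferences.
    c.vtbl.slots[3](&c);
}
```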
Re: Const ref and rvalues again...
On 9 November 2012 21:39, Jonathan M Davis jmdavisp...@gmx.com wrote: On Friday, November 09, 2012 15:55:12 Manu wrote: Does that actually make sense? Surely a function that receives a scope argument can return that argument, since it's only passing it back to the same function that already owns it... it knows it can trust that function, since it was received from that function. It can't. That would mean that the reference escaped. That would be particularly deadly for delegates. Think about what happens if the scoped delegate is put into a struct which is returned. struct Result { void delegate() del; ... } Result foo(scope delegate... bar) { .. return Result(bar); } auto baz() { Result r; { int n = 5; r = foo((){writeln(n);}); } r.del(); } baz has no idea where the delegate in r came from. It has no idea that it wasn't allocated as a closure. So, it's not going to allocate one, which means that the delegate refers to a part of the stack which won't exist anymore when the delegate actually gets called. If scope wasn't used, that wouldn't have been a problem, because a closure would have been allocated as soon as the delegate had been passed to foo, but because scope was used, it knows that the delegate won't escape, so it doesn't allocate the closure (since it's not necessary). But that only works because scope prevents escaping - including by the return value. So, the above code _must_ be invalid. Okay, makes sense. Any struct holding any reference types would be in the same boat, as would any class or AA. I don't follow the problem with reference args. Surely they can be evaluated just fine? Just that nothing can escape the function... It's the fact that they can't escape the function which is the problem. If scope is working correctly, then any and all reference types which are passed to the function via a scope parameter (be it directly or within another object) cannot escape the function in any way, shape, or form.
So, you can't assign any of them to any static or module-level variables (not generally a big deal) and you can't use any of them in the return value (often a big deal). Sometimes, that's exactly what you want, but in the general case, you don't want to prevent anything you pass into a function from being returned from it, so scope quickly becomes overly restrictive. I'm still not buying this. Here's a common struct I will pass by ref (perhaps the most common struct in my industry): struct Vector { float x, y, z, w; } struct Matrix { Vector xRow, yRow, zRow, wRow; } Vector mul( scope const ref Matrix m, scope const Vector v) { Vector result; // perform a matrix multiply against the vector... // this work uses every single field of the inputs given, but the result it produces has no references to the sources. // everything is operated on and copied to the output struct, which is returned. return result; } Why should this be a problem? The majority of my work-horse structs apply to this pattern. This is what I imagine 'scope' to be for... The main advantage I expect is that I can have confidence that passing rvalues (temporaries) is safe, and that external code won't take references to memory that I may not own/control. Is that not the point? Surely the problem that scope should be protecting against is a pointer to any part of the argument escaping. *Copies* of values contained in the argument/s are fine.
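A compilable version of the pattern described above (a sketch: the matrix is taken by `ref const` since the exact `scope` semantics are what's under discussion; every field of the inputs is read, but the returned Vector is a fresh copy, so no reference to the arguments can escape):

```d
import std.stdio;

struct Vector { float x, y, z, w; }
struct Matrix { Vector xRow, yRow, zRow, wRow; }

// Row-vector times matrix (conventions vary). The result only holds
// copied values; nothing points back into m or v.
Vector mul(ref const Matrix m, const Vector v)
{
    Vector r;
    r.x = v.x * m.xRow.x + v.y * m.yRow.x + v.z * m.zRow.x + v.w * m.wRow.x;
    r.y = v.x * m.xRow.y + v.y * m.yRow.y + v.z * m.zRow.y + v.w * m.wRow.y;
    r.z = v.x * m.xRow.z + v.y * m.yRow.z + v.z * m.zRow.z + v.w * m.wRow.z;
    r.w = v.x * m.xRow.w + v.y * m.yRow.w + v.z * m.zRow.w + v.w * m.wRow.w;
    return r;
}

void main()
{
    // Identity matrix: mul must return the input vector unchanged.
    auto id = Matrix(Vector(1,0,0,0), Vector(0,1,0,0),
                     Vector(0,0,1,0), Vector(0,0,0,1));
    auto v = Vector(1, 2, 3, 4);
    writeln(mul(id, v));
}
```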
Re: Getting rid of dynamic polymorphism and classes
On Saturday, 10 November 2012 at 09:23:40 UTC, Marco Leise wrote: They work like this: Each object has as a pointer to a table of method pointers. When you extend a class, the new method pointers are appended to the list and existing entries are replaced with overrides where you have them. So a virtual method 'draw()' may get slot 3 in that table and at runtime it is not much more than: obj.vftable[3](); Is vftable essentially an array? So, it's just a matter of offsetting a pointer to get access to any particular slot in the table? If virtual method calls are really that fast to do, then I think the idiom in the code snippet of my first post is useless, and the idiom they represent in that video I linked to is actually pretty great. Note: In order to make that video's sound bearable, you have to cut out the highest frequencies of the sound and lower some of the middle ones. I happened to have this Realtek HD Audio Manager which made it simple. Using the city filter helped a bit too. Don't know what it did.
Re: Getting rid of dynamic polymorphism and classes
P.S.: A) The more time is spent inside the virtual method, the less noticeable the impact of the lookup. B) If you call one virtual method often inside a single function, compilers typically cache the method pointer in some register. -- Marco
[OT] Ubuntu 12.10 guest in VirtualBox completely broken
I just wanted you all to know that running Ubuntu 12.10 as a guest in VirtualBox is completely broken. I updated my guest system from 12.04 to 12.10 and it's so slow it's not usable. This is a known issue: https://www.virtualbox.org/ticket/11107 That issue contains a pre-release of VirtualBox; I tried that on Mac OS X and it broke the Ubuntu guest even more. No title bar on the windows, or any window frame actually. -- /Jacob Carlborg
Re: [ ArgumentList ] vs. @( ArgumentList )
On Fri, 09 Nov 2012 15:28:40 +0100, deadalnix deadal...@gmail.com wrote: nothrow is already a keyword (which is really inconsistent). I'm not sure what it buys us, and both safe and nothrow are good candidates for lib implementation rather than compiler support. That requires that the compiler exposes all sorts of analysis data for every statement. E.g. to check if nothrow is violated you have to find statements that throw something and then check if there is a catch block around them that catches it. If a statement is a function call, you would ask the compiler if that function throws. (In particular if it is a templated function with deduced 'nothrow' and '@safe'.) And there you are at the point that you have just duplicated the compiler code in the library. -- Marco
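For what it's worth, a limited form of such a library-side check can be sketched today using `__traits(compiles)` (names here are made up for illustration; note this reuses the compiler's own nothrow analysis rather than duplicating it, so Marco's underlying point still stands):

```d
import std.stdio;

// Sketch: a library-side "can calling f be typed as nothrow?" check.
// It asks whether the call compiles inside a nothrow function literal.
void safeFn() nothrow {}
void throwingFn() { throw new Exception("boom"); }

enum callableAsNothrow(alias f) = __traits(compiles, () nothrow { f(); });

static assert(callableAsNothrow!safeFn);
static assert(!callableAsNothrow!throwingFn);

void main() { writeln("nothrow checks passed"); }
```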
Re: Immutable and unique in C#
On 09.11.2012 18:45, Kagamin wrote: :) do they implement `inout`? Haven't seen it yet ;) But seriously, they go a lot further with their `isolated` type qualifier. That, together with their functions, which are basically weakly pure, opens up a lot of new opportunities for statically verified code without casting around all the time or bending the code structure so much that it hurts. The article is quick to read at its core and _really_ anyone interested in D's development should read it. This is pretty much what we have, plus the bits that are currently missing (especially concerning objects in the immutable/shared world). It fits perfectly with what's already there, it's proven to be sound and practical, and IMO it's definitely what should be implemented in D.
Re: [OT] Ubuntu 12.10 guest in VirtualBox completely broken
On 10/11/2012 10:14, Jacob Carlborg wrote: I just wanted you all to know that running Ubuntu 12.10 as a guest in VirtualBox is completely broken. I updated my guest system from 12.04 to 12.10 and it's so slow it's not usable. This is a known issue: https://www.virtualbox.org/ticket/11107 That issue contains a pre-release of VirtualBox; I tried that on Mac OS X and it broke the Ubuntu guest even more. No title bar on the windows, or any window frame actually. What a coincidence, I just installed VirtualBox and was looking for advice on what distro of Linux to install... All I want to be able to do with my VM is be able to build GDC for my RasPi (via all the hoops that are necessary to get cross compilation working), not bothered about any GUI stuff or bells/whistles, suggestions for a good distro to go with are welcome! A...
Re: [ ArgumentList ] vs. @( ArgumentList )
On 10.11.2012 11:21, Marco Leise wrote: On Fri, 09 Nov 2012 15:28:40 +0100, deadalnix deadal...@gmail.com wrote: nothrow is already a keyword (which is really inconsistent). I'm not sure what it buys us, and both safe and nothrow are good candidates for lib implementation rather than compiler support. That requires that the compiler exposes all sorts of analysis data for every statement. E.g. to check if nothrow is violated you have to find statements that throw something and then check if there is a catch block around them that catches it. If a statement is a function call, you would ask the compiler if that function throws. (In particular if it is a templated function with deduced 'nothrow' and '@safe'.) And there you are at the point that you have just duplicated the compiler code in the library. Not duplicated, but moved - which results in a simpler compiler implementation, definitely a good thing. Of course, that AST analysis/macro functionality has to be added in return, but since it has a much broader scope and would make other features (string mixins) obsolete, even that is not quite clear in terms of weight as a counter argument. Anyway, I surely wouldn't expect this to happen anytime soon, but keeping this path open seems like a wise decision - it's a great opportunity to remove features from (or not add them to) the language without trading away functionality or even syntax.
Re: Immutable and unique in C#
Sönke Ludwig: It fits perfectly with what's already there, it's proven to be sound and practical, and IMO it's definitely what should be implemented in D. Seems fit to be added to Remus then :-) Bye, bearophile
Re: Immutable and unique in C#
Enhancement request: http://d.puremagic.com/issues/show_bug.cgi?id=8993
Re: Immutable and unique in C#
On 11/10/2012 12:23 PM, Sönke Ludwig wrote: On 09.11.2012 18:45, Kagamin wrote: :) do they implement `inout`? Haven't seen it yet ;) But seriously, they go a lot further with their `isolated` type qualifier. That, together with their functions, which are basically weakly pure, opens up a lot of new opportunities for statically verified code without casting around all the time or bending the code structure so much that it hurts. The article is quick to read at its core and _really_ anyone interested in D's development should read it. This is pretty much what we have, plus the bits that are currently missing (especially concerning objects in the immutable/shared world). It fits perfectly with what's already there, it's proven to be sound and practical, and IMO it's definitely what should be implemented in D. Agreed. Did you file an enhancement request?
Re: [OT] Ubuntu 12.10 guest in VirtualBox completely broken
On 2012-11-10 12:30, Alix Pexton wrote: What a coincidence, I just installed VirtualBox and was looking for advice on what distro of Linux to install... All I want to be able to do with my VM is be able to build GDC for my RasPi (via all the hoops that are necessary to get cross compilation working), not bothered about any GUI stuff or bells/whistles, suggestions for a good distro to go with are welcome! Ubuntu 12.04 was working perfectly fine for me. Another alternative could be Linux Mint. I haven't used it myself, but I heard it's basically Ubuntu with fewer bells and whistles, i.e. they're not using the Unity GUI. http://linuxmint.com/ -- /Jacob Carlborg
Re: UDAs - Restrict to User Defined Types?
On 08/11/2012 11:56, simendsjo wrote: On Thursday, 8 November 2012 at 09:05:31 UTC, Jacob Carlborg wrote: I think we should only allow user defined types marked with @attribute, i.e. @attribute struct foo {} @attribute class foo {} @attribute interface foo {} @attribute enum foo {} And so on. Or struct @foo {} interface @foo {} enum @foo {} etc I love it! Shorter and achieves the same result, very readable; that is THE solution.
Re: UDAs - Restrict to User Defined Types?
On 08/11/2012 19:55, simendsjo wrote: On Thursday, 8 November 2012 at 17:20:39 UTC, Jacob Carlborg wrote: On 2012-11-08 17:53, simendsjo wrote: I guess it depends. I find it easier to see that it's an attribute, especially when you annotate it. But it's harder to grep for. Is foo an attribute or not? @serializable @xmlRoot @attribute @displayName(foo) struct foo {} Is foo an attribute or not? @serializable @xmlRoot @displayName(foo) struct @foo {} I don't know really. In that bottom example, the struct declaration almost disappears among all the attributes. Yeah.. But at least you'll always know where to look. @[serializable, xmlRoot, attribute, displayName(foo)] struct foo {} @[serializable, xmlRoot, displayName(foo)] struct @foo {} but attribute could be required as the last type, and on a line of its own, giving: @[serializable, xmlRoot, displayName(foo)] @attribute struct foo {} More special cases aren't a good idea. We already have way too many of them.
Re: UDAs - Restrict to User Defined Types?
On 08/11/2012 01:11, Timon Gehr wrote: On 11/08/2012 12:18 AM, Walter Bright wrote: Started a new thread on this. On 11/7/2012 3:05 AM, Leandro Lucarella wrote: OK, that's another thing. And maybe a reason for listening to people having more experience with UDAs than you. For me the analogy with exceptions is pretty good. The issues and conveniences of throwing anything, or annotating a symbol with anything instead of just a type, are pretty much the same. I only see functions making sense to be accepted as annotations too (that's what Python does with annotations; @annotation symbol is the same as symbol = annotation(symbol), but it is quite a different language). There's another aspect to this. D's UDAs are a purely compile-time system, attaching arbitrary metadata to specific symbols. The other UDA systems I'm aware of appear to be runtime systems. This implies the use cases will be different - how, I don't really know. But I don't know of any other compile-time UDA system. Experience with runtime systems may not be as applicable. Another interesting data point is CTFE. C++11 has CTFE, but it was deliberately crippled and burdened with constexpr. From what I read, this was out of fear that it would turn out to be an overused and overabused feature. Of course, this turned out to be a large error. One last thing. Sure, string attributes can (and surely would) be used for different purposes in different libraries. The presumption is that this would cause a conflict. But would it? There are two aspects to a UDA - the attribute itself, and the symbol it is attached to. In order to get the UDA for a symbol, one has to look up the symbol. There isn't a global repository of symbols in D. You'd have to say I want to look in module X for symbols. Why would you look in module X for an attribute that you have no reason to believe applies to symbols from X? How would an attribute for module X's symbols leak out of X on its own?
It's not quite analogous to exceptions, because arbitrary exceptions thrown from module X can flow through your code even though you have no idea module X even exists. This is a valid point, and I think it does not really make sense to only exclude built-in types. Any type not intended for use as an attribute, and that is exported to sufficiently many places, can have the same behaviour. I'd vote for no restrictions at all, or for requiring @attribute annotations on the user-defined type and banning user-defined types that do not have that from being annotations. I'd vote for requiring @attribute annotations on the user-defined type and banning user-defined types that do not have that from being annotations. That is really important to allow libs not to step on each other's toes.
Binary compatibility on Linux
What's the best way to achieve binary compatibility on Linux? For example, if I compile an application on, say, Ubuntu 12.04, it will most likely not run on any older versions of Ubuntu, but it will run on future versions. My current approach to solve this is to compile the application in the oldest version of Ubuntu I can find, in this case 6.x. This is starting to get a bit problematic: * The integration with VirtualBox (I'm running Ubuntu as a guest) is pretty bad * DMD won't run out of the box, I need to compile it. This is also making DVM basically useless * I can't clone the dlang repositories due to having a very old version of git installed * I can't compile git; I haven't investigated why, but probably because the system is too old Are there some compiler/linker flags I can use when building to make the executable compatible with older versions of Linux? Or is there a better way to solve this? -- /Jacob Carlborg
Re: [OT] Ubuntu 12.10 guest in VirtualBox completely broken
On 11/10/2012 04:11 AM, Jacob Carlborg wrote: On 2012-11-10 12:30, Alix Pexton wrote: What a coincidence, I just installed VirtualBox and was looking for advice on what distro of linux to install... All I want to be able to do with my VM is be able to build GDC for my RasPi (via all the hoops that are necessary to get cross compilation working), not bothered about any GUI stuff or bells/whistles, suggestions for a good distro to go with are welcome! Ubuntu 12.04 was working perfectly fine for me. Another alternative could be Linux Mint. I haven't used it myself but I heard it's basically Ubuntu with less bells and whistles, i.e they're not using the Unity GUI. http://linuxmint.com/ Ubuntu 12.10/gnome classic works well enough; just turn off compiz. I tried mint and ran into trouble while compiling llvm. make gobbled memory for a while, and then the desktop restarted itself (I guess?). All my windows: gone.
Re: UDAs - Restrict to User Defined Types?
On 09/11/2012 07:53, Nick Sabalausky wrote: On Thu, 08 Nov 2012 21:24:49 -0800 Jonathan M Davis jmdavisp...@gmx.com wrote: On Thursday, November 08, 2012 21:10:55 Walter Bright wrote: Many algorithms (at least the ones in Phobos do) already do a check to ensure the inputs are the correct kind of range. I don't think you'll get very far trying to use a range that isn't a range. Of course, you can always still have bugs in your range implementation. Given that a range requires a very specific set of functions, I find it highly unlikely that anything which isn't a range will qualify as one. It's far more likely that you screw up and a range isn't the right kind of range because one of the functions wasn't quite right. There is some danger in a type being incorrectly used with a function when that function requires and tests for only one function, or maybe when it requires two functions. But I would expect that as more is required by a template constraint, it very quickly becomes the case that there's no way that any type would ever pass it with similarly named functions that didn't do the same thing as what they were expected to do. It's just too unlikely that the exact same set of function names would be used for different things, especially as that list grows. And given that ranges are a core part of D's standard library, I don't think that there's much excuse for having a type that has the range functions but isn't supposed to be a range. So, I really don't see this as a problem. Looking at one set of interfaces in isolation, sure the chances might be low. (Just like the chances of name collisions when hygiene is lacking, and yet we thankfully have a real module system, instead of C's clumsy Well, it should usually work ok! garbage.) But it's a terrible precedent. Scale things up, use ducks as common practice, and all of a sudden you're right back into the same old land of no-hygiene. Bad, sloppy, lazy precedent.
AND the presumed benefit of the duckness is minimal at best. Just not a good design; it makes all the wrong tradeoffs. A UDA is a really good way to express that intent.
Re: [OT] Ubuntu 12.10 guest in VirtualBox completely broken
On 2012-11-10 17:14, Ellery Newcomer wrote: Ubuntu 12.10/gnome classic works well enough; just turn off compiz. I tried mint and ran into trouble while compiling llvm. make gobbled memory for a while, and then the desktop restarted itself (I guess?). All my windows: gone. I don't know if I can do that when basically the only thing that works is the desktop icons. Neither the top nor the left bar is displayed. I should perhaps go back to an older version of VirtualBox. -- /Jacob Carlborg
Re: [OT] Ubuntu 12.10 guest in VirtualBox completely broken
On Saturday, 10 November 2012 at 11:30:31 UTC, Alix Pexton wrote: All I want to be able to do with my VM is be able to build GDC for my RasPi (via all the hoops that are necessary to get cross compilation working), not bothered about any GUI stuff or bells/whistles, suggestions for a good distro to go with are welcome! I use Arch Linux for this kind of stuff, and just ssh into my VMs instead of running X on them. I especially like Arch for this because it doesn't come with loads of bloat (for this setting) installed by default, yet is comfortable to use – at least if you are somewhat familiar with Linux already. David
Re: Binary compatibility on Linux
On 10.11.2012 16:40, Jacob Carlborg wrote: What's the best way to achieve binary compatibility on Linux? For example, if I compile an application on, say, Ubuntu 12.04, it will most likely not run on any older versions of Ubuntu, but it will run on future versions. My current approach to solve this is to compile the application in the oldest version of Ubuntu I can find, in this case 6.x. This is starting to get a bit problematic: * The integration with VirtualBox (I'm running Ubuntu as a guest) is pretty bad * DMD won't run out of the box, I need to compile it. This is also making DVM basically useless * I can't clone the dlang repositories due to having a very old version of git installed * I can't compile git; I haven't investigated why, but probably because the system is too old Are there some compiler/linker flags I can use when building to make the executable compatible with older versions of Linux? Or is there a better way to solve this? I guess the right answer is to have everything compiled statically, especially if you need compatibility across distributions. -- Paulo
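As a sketch of what "compiled statically" could look like on the stock dmd/gcc toolchain (flag spellings assumed; dmd's -L prefix forwards the rest of the flag to the link step, and Phobos/druntime are already linked statically by default, so glibc is the remaining moving part):

```shell
# Build-command sketch, not a definitive recipe. Fully static linking of
# glibc has known caveats (NSS lookups, dlopen), so building against the
# oldest glibc you intend to support is often the more robust route.
dmd -O -release app.d -L-static
```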
Re: Settling rvalue to (const) ref parameter binding once and for all
Hear hear! I have dreams at night that look exactly like this proposal! :) I think I had one just last night, and woke up with a big grin on my face... 2) rvalues: prefer pass-by-value (moving: argument allocated directly on callee's stack (parameter) vs. pointer/reference indirection implied by pass-by-ref) Is this actually possible? Does the C/C++ ABI support such an action? GDC and LDC use the C ABI verbatim, so can this work, or will they have to, like usual, allocate on the caller's stack and pass the ref through? I don't really see a significant disadvantage to that regardless. On 9 November 2012 20:05, martin ki...@libero.it wrote: Hi guys, I hope you don't mind that I'm starting yet another thread about this tedious issue, but I think the other threads are too clogged. Let me summarize my (final, I guess) proposal. I think it makes sense to compare it to C++ in order to anticipate and hopefully invalidate (mainly Andrei's) objections.

parameter type              |    lvalue     |    rvalue
                            |  C++  |  D    |  C++  |  D
----------------------------|-------|-------|-------|-------
T                           | copy  | copy  | copy  | move
T& / ref T                  | ref   | ref   | n/a   | n/a
out T (D only)              |       | ref   |       | n/a
T&& (C++ only)              | n/a   |       | move  |
auto ref T (D only) (*)     |       | ref   |       | ref
----------------------------|-------|-------|-------|-------
const T                     | copy  | copy  | copy  | move
const T& / const ref T      | ref   | ref   | ref   | ref (*)
const T&& (C++ only)        | n/a   |       | move  |

(*): proposed additions

For lvalues in both C++ and D, there are 2 options: either copy the argument (pass-by-value) or pass it by ref. There's no real difference between both languages except for D's additional 'out' keyword and, with the proposed 'auto ref' syntax, an (imo negligible) ambiguity between 'ref T' and 'auto ref T' in D. Rvalues are a different topic though. There are 3 possibilities in general: copy, move and pass by ref. Copying rvalue arguments does not make sense - the argument won't be used by the caller after the invocation, so a copy is redundant and hurts performance.
D corrects this design flaw of C++ (which had to introduce rvalue refs to add move semantics on top of the default copy semantics) and therefore only supports moving instead. C++ additionally supports pass-by-ref of rvalues to const refs, but not to mutable refs. I propose to allow pass-by-ref to both const (identical syntax as C++, it's perfectly safe and logical) and mutable refs (new syntax with 'auto ref' to emphasize that the parameter may be an rvalue reference, with related consequences such as potentially missing side effects). Regarding the required overloading priorities for the proposed additions to work properly, I propose: 1) lvalues: prefer pass-by-ref, so: ref/out T -> auto ref T (*) -> const ref T -> (const) T; for const lvalues: const ref T -> (const) T; for mutable lvalues: ref/out T -> auto ref T (*) -> const ref T -> (const) T. 2) rvalues: prefer pass-by-value (moving: argument allocated directly on callee's stack (parameter) vs. pointer/reference indirection implied by pass-by-ref), so: (const) T -> auto ref T (*) -> const ref T (*). Finally, regarding templates, I'm in favor of dropping the current 'auto ref' semantics and propose to simply adopt the proposed semantics for consistency and simplicity, and to avoid excessive code bloat. That shouldn't break existing code I hope (unless parameters have been denoted with 'const auto ref T', which would need to be changed to 'const ref T').
--- Before posting concerns about a perceived unsafety of binding rvalues to 'const ref' parameters, please try to find a plausible argument as to why the following is currently allowed:

void foo(const ref T x);
if (condition) { T tmp; foo(tmp); } // destruction of tmp

but the following shortcut, eliminating 3 lines (depending on code formatting preferences ;)) and avoiding the pollution of the local namespace with a 'tmp' variable, shouldn't be allowed:

if (condition) foo(T()); // rvalue destructed immediately after the call

--- Let me also illustrate a deterministic allocation/destruction scheme for the compiler implementation/language specification:

void foo(auto/const ref T a, auto/const ref T b);
foo(T(), T());
/* order:
   1) allocate argument a on caller's stack
   2) allocate argument b on caller's stack
   3) invoke foo() and pass the argument addresses (refs)
   4) destruct b
   5) destruct a */

I guess something like that is covered by the C++ specification for binding rvalues to const refs. --- Now please go ahead and shoot. :)
Re: Binary compatibility on Linux
On 10 November 2012 17:39, Paulo Pinto pj...@progtools.org wrote: On 10.11.2012 16:40, Jacob Carlborg wrote: What's the best way to achieve binary compatibility on Linux? For example, if I compile an application on, say, Ubuntu 12.04, it will most likely not run on any older versions of Ubuntu, but it will run on future versions. My current approach to solve this is to compile the application on the oldest version of Ubuntu I can find, in this case 6.x. This is starting to get a bit problematic: * The integration with VirtualBox (I'm running Ubuntu as a guest) is pretty bad * DMD won't run out of the box, I need to compile it. This also makes DVM basically useless * I can't clone the dlang repositories due to having a very old version of git installed * I can't compile git; I haven't investigated why, but probably because the system is too old Are there some compiler/linker flags I can use when building to make the executable compatible with older versions of Linux? Or is there a better way to solve this? I guess the right answer is to have everything compiled statically, especially if you need compatibility across distributions. -- Paulo Or ship the binary with its dependencies all together as one package. -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Re: Getting rid of dynamic polymorphism and classes
Tommi wrote: If virtual method calls are really that fast to do, then I think the idiom in the code snippet of my first post is useless, and the idiom they represent in that video I linked to is actually pretty great. Virtual functions have other performance limitations, namely that they can't be inlined. So small virtual calls do have a big impact if used often, especially (so I hear) on ARM processors, which don't have as advanced branch-prediction machinery as x86 (again, I'm just repeating what I've heard before).
Re: Immutable and unique in C#
On Saturday, 10 November 2012 at 12:51:15 UTC, bearophile wrote: Sönke Ludwig: It fits perfectly with what's already there, it's proven to be sound and practical, and IMO it's definitely what should be implemented in D. Seems fit to be added to Remus then :-) Bye, bearophile Gladly, if you would help me to write the necessary structures to emulate the behaviour. ;)
Re: [OT] Ubuntu 12.10 guest in VirtualBox completely broken
On 10 November 2012 11:30, Alix Pexton alix.dot.pex...@gmail.dot.com wrote: On 10/11/2012 10:14, Jacob Carlborg wrote: I just wanted you all to know that running Ubuntu 12.10 as a guest in VirtualBox is completely broken. I updated my guest system from 12.04 to 12.10 and it's so slow it's not usable. This is a known issue: https://www.virtualbox.org/ticket/11107 That issue contains a pre-release of VirtualBox; I tried that on Mac OS X and it broke the Ubuntu guest even more. No title bar on the windows, or any window frame actually. What a coincidence, I just installed VirtualBox and was looking for advice on what distro of linux to install... All I want to be able to do with my VM is build GDC for my RasPi (via all the hoops that are necessary to get cross compilation working), not bothered about any GUI stuff or bells/whistles, suggestions for a good distro to go with are welcome! A... You don't necessarily need a cross compiler to do the job. Set up a raspbian chroot instead! There are some rough instructions here: http://superpiadventures.wordpress.com/2012/07/16/development-environment/ You are also able to debug programs through qemu, though there's a hurdle you have to jump through. http://tinkering-is-fun.blogspot.co.uk/2009/12/debugging-non-native-programs-with-qemu.html Regards, Iain. -- Iain Buclaw *(p < e ? p++ : p) = (c & 0x0f) + '0';
Re: Binary compatibility on Linux
On Sat, 10 Nov 2012 16:40:37 +0100, Jacob Carlborg d...@me.com wrote: What's the best way to achieve binary compatibility on Linux? For example, if I compile an application on, say, Ubuntu 12.04, it will most likely not run on any older versions of Ubuntu, but it will run on future versions. My current approach to solve this is to compile the application on the oldest version of Ubuntu I can find, in this case 6.x. This is starting to get a bit problematic: * The integration with VirtualBox (I'm running Ubuntu as a guest) is pretty bad * DMD won't run out of the box, I need to compile it. This also makes DVM basically useless * I can't clone the dlang repositories due to having a very old version of git installed * I can't compile git; I haven't investigated why, but probably because the system is too old Are there some compiler/linker flags I can use when building to make the executable compatible with older versions of Linux? Or is there a better way to solve this? crosstool-NG has a nice option, "Oldest supported ABI", where you can enter an old GLIBC version and the compiler will generate executables compatible with this version (although it still uses a recent glibc). I have no idea how this works, but that's the best solution I have seen so far. (crosstool also has an option, "Disable symbols versioning", which completely disables versioning. Quite cool.) Here are some links which might help: http://sourceware.org/ml/libc-help/2011-04/msg00032.html http://www.trevorpounds.com/blog/?tag=symbol-versioning
Re: Binary compatibility on Linux
On 2012-11-10 18:39, Paulo Pinto wrote: I guess the right answer is to have everything compiled statically, especially if you need compatibility across distributions. I just read somewhere that compiling it statically will make it _less_ compatible than compiling it dynamically. http://stackoverflow.com/questions/8657908/deploying-yesod-to-heroku-cant-build-statically/8658468#8658468 -- /Jacob Carlborg
Re: Binary compatibility on Linux
On 10/11/12 16:40, Jacob Carlborg wrote: What's the best way to achieve binary compatibility on Linux? For example, if I compile an application on, say, Ubuntu 12.04, it will most likely not run on any older versions of Ubuntu, but it will run on future versions. My current approach to solve this is to compile the application on the oldest version of Ubuntu I can find, in this case 6.x. This is starting to get a bit problematic: * The integration with VirtualBox (I'm running Ubuntu as a guest) is pretty bad * DMD won't run out of the box, I need to compile it. This also makes DVM basically useless * I can't clone the dlang repositories due to having a very old version of git installed * I can't compile git; I haven't investigated why, but probably because the system is too old Are there some compiler/linker flags I can use when building to make the executable compatible with older versions of Linux? Or is there a better way to solve this? Ubuntu 10.04.4 LTS is old enough? You can install/run dmd out of the box on it just by installing the appropriate deb package. -- Jordi Sayol
Re: Binary compatibility on Linux
On 2012-11-10 19:54, Johannes Pfau wrote: crosstool-NG has a nice option Oldest supported ABI where you can enter an old GLIBC version and the compiler will generate executables compatible with this version (although it still uses a recent glibc). I have no idea how this works, but that's the best solution I have seen so far. (crosstool also has an option Disable symbols versioning which completely disables versioning. quite cool). Here are some links which might help: http://sourceware.org/ml/libc-help/2011-04/msg00032.html http://www.trevorpounds.com/blog/?tag=symbol-versioning I got the last link from someone in the D IRC channel. I just don't know how to do that with D. -- /Jacob Carlborg
Re: UDAs - Restrict to User Defined Types?
On 11/09/12 07:41, Nick Sabalausky wrote: Of course, it'd be even nicer still to have all this wrapped up in some language sugar (D3? ;) ) and just do something like:

struct interface InputRange { // ... *declare* .empty, .front, .popFront here }
struct interface ForwardRange : InputRange { // ... *declare* .save here }
struct MyForwardRange : ForwardRange {
    // ... define .empty, .front, .popFront, .save here
    // Actually validated by the compiler
}

Which would then amount to what we're doing by hand up above. So kinda like Go, except not error-prone and ducky and all shitty. This would actually be backwards compatible and also relatively forward compatible - it does not need to wait for a D3. However, while it's a step in the right direction, doing it like that would be too limiting. Compare with:

interface template InputRange(T) {
    enum InputRange = __traits(compiles, {T r; /* check the i/f here */});
}
struct MyInputRange : InputRange {
    enum empty = false;
    enum front = 42;
    void popFront() {}
}

I.e. requiring a certain interface to be present is fine; requiring a certain set of function signatures is too restrictive (and could still be done inside the interface-template when necessary). I used interface-templates because this feature should not clash with /normal/ struct inheritance, once that one emerges. All the boilerplate in the InputRange template above could be made implicit, and it could then look like this:

interface template InputRange(T) {
    T r;
    bool e = r.empty;
    r.popFront();
    auto f = r.front;
} else throw("Not an input range"); // static throw, assert, whatever.

These i/f templates would also be useful in places where the current 'isInputRange' hack is used; they would significantly improve both readability and error reporting (that's the reason for the else clause). artur
Re: Binary compatibility on Linux
On 2012-11-10 19:49, Jordi Sayol wrote: Ubuntu 10.04.4 LTS is old enough? I have no idea. I don't know how often people update their Linux systems and how compatible different distributions are. Since I'm not using Linux as my primary platform, I was hoping someone else could answer this. What is the oldest system I need to reasonably support? I'm mostly talking about tools and libraries for the D community here. -- /Jacob Carlborg
Re: Const ref and rvalues again...
On Saturday, November 10, 2012 13:21:42 Manu wrote: I'm still not buying this. Here's a common struct I will pass by ref (perhaps the most common struct in my industry):

struct Vector { float x, y, z, w; }
struct Matrix { Vector xRow, yRow, zRow, wRow; }

Vector mul(scope const ref Matrix m, scope const Vector v)
{
    Vector result;
    // perform a matrix multiply against the vector...
    // this work uses every single field of the inputs given, but the result
    // it produces has no references to the sources.
    // everything is operated on and copied to the output struct, which is returned.
    return result;
}

Why should this be a problem? The majority of my work-horse structs apply to this pattern. This is what I imagine 'scope' to be for... The main advantage I expect is that I can have confidence that passing rvalues (temporaries) is safe, and that external code won't take references to memory that I may not own/control. Is that not the point? Surely the problem that scope should be protecting against is a pointer to any part of the argument escaping. *Copies* of values contained in the argument/s are fine. Hmmm. scope on value types is pointless, because there are no references to escape, but if you pass by ref, then it does become possible for a pointer to the argument to escape, but I don't know that that's actually covered by scope. The description for scope in the docs is that references in the parameter cannot be escaped (e.g. assigned to a global variable). And taking the address of a local variable (which is the only way that any sort of reference to the data could escape) is never @safe anyway. If you passed in a pointer, and scope were fully working, then you'd be protected against the pointer escaping, but passing by ref isn't really the same thing. 
I'd have thought that taking the address of a variable passed by ref would fall into pretty much the same camp as taking the address of any other local variable, which is completely unsafe to escape - to the point that I'm not sure that there's any point in protecting against it. It's just completely stupid to do anyway and is definitely @system. Outside of taking the address of a ref parameter, taking the address of a local variable and escaping it is _always_ going to result in garbage, and ref parameters aren't really references in the normal sense, so I don't know. You bring up a good point, but I don't know if it's applicable. Certainly, without the ref there (like is the case with the Vector that you're passing in), scope would never do anything, because it doesn't even theoretically have anything to do. It's purely a value type that's not even being passed by ref. In general though, putting scope on struct parameters would cause a lot of problems, because of arrays that they might hold and whatnot. Slices wouldn't be able to escape (and so copies of the struct wouldn't be able to escape without deep copying, let alone the array itself). So, while scope may be very useful in some such cases (assuming that it worked), it's not necessarily something that you'd want as a matter of course. Part of it probably depends on your programming style though. If you have a lot of functions that take arguments and don't return anything that was in them ever, then scope is less of a big deal, but that's the sort of thing that happens a _lot_ in my experience, so scope would very quickly become extremely annoying. And actually, to make matters worse, I'm not sure that scope on delegates is working correctly. 
I thought that it was, but this code compiles:

import std.stdio;

void delegate() global;

void foo(scope void delegate() del)
{
    global = del;
}

void main()
{
    {
        char[5] bar = "hello";
        foo((){writeln(bar);});
    }
    char[7] baz = "goodbye";
    global();
}

It also prints out "hello", and if a closure had not been allocated, I would have at least half-expected it to print out "goodb", because I'd have thought that baz would have been taking up the same memory that bar had been. So, it looks like scope may be completely and utterly broken at this point. I don't know. - Jonathan M Davis
Lenses-like in D
In my opinion it's interesting to look at other languages. Often in functional languages you have immutable records, that sometimes contain other inner immutable records. If you need to change fields, you usually create a copy of the record with just one modified field. To do this with a handy syntax they use lenses, in Haskell, in Scala and in other languages. See, regarding Scala: http://blog.stackmob.com/2012/02/an-introduction-to-lenses-in-scalaz/ https://github.com/gseitz/Lensed So you have get and set methods, where set returns a different record and leaves the original record unchanged. An example usage in Scala:

case class Address(city: String)
case class Person(name: String, address: Address)
val yankee = Person("John", Address("NYC"))
val mounty = Person.address.city.set(yankee, "Montreal")
Person.address.city.get(mounty) // == "Montreal"
val cityLens: scalaz.Lens[Person, String] = Person.address.city

As immutable structs/tuples become more common in D code, I think it's handy to have something similar in Phobos. Maybe just the set is enough for now. This is a start of a D implementation:

import std.stdio, std.string, std.traits, std.array, std.typetuple;

immutable struct Address { string city; }
immutable struct Person { string name; Address address; }

private bool withFieldVerify(string path, Data, Field)() {
    enum pathParts = path.replace(".", " ").split();
    if (path.length < 1) return false;
    mixin("alias TField = " ~ Data.stringof ~ "." ~ pathParts.join(".") ~ ";");
    return is(Unqual!(typeof(TField)) == Unqual!Field);
}

private string genReplacer(string path, Data, Field)() {
    enum pathParts = path.replace(".", " ").split();
    string[] replacer;
    foreach (name; __traits(allMembers, Data))
        replacer ~= (name == pathParts[0]) ? "newField" : ("p." ~ name);
    return replacer.join(", ");
}

Data withField(string path, Data, Field)(Data p, Field newField)
if (is(Data == struct) && !__traits(hasMember, Data, "__ctor") && withFieldVerify!(path, Data, Field)()) {
    return mixin("Data(" ~ genReplacer!(path, Data, Field)() ~ ")");
}

void main() {
    auto yankee = Person("John", Address("NYC"));
    auto foo = yankee.withField!q{name}("Foo");
    writeln(foo);
    //auto mounty = yankee.withField!q{address.city}("Montreal");
    //writeln(mounty);
    // To be improved: this gives errors inside withFieldVerify:
    //auto mounty = yankee.withField!q{address foo}("Montreal");
    //assert(mounty.address.city == "Montreal");
    //alias withCity = withField!q{address.city}; // shortcut
    //auto mounty2 = yankee.withCity("NYC");
}

Notes:
- Lenses are meant to update only one field, at arbitrary nesting level.
- This code is meant to work only on struct/tuple instances; the struct can't have explicit constructors.
- The code should be improved so it avoids generating error messages inside withFieldVerify.
- withField/genReplacer probably have to become recursive, so withField becomes able to update nested fields like yankee.address.city.
- withField() is probably meant to be usable on mutable struct instances too, but it's much more useful on immutable ones.
- I think in D you can't enforce a class to have a dumb constructor (dumb means it just copies its input arguments into instance fields with the same type), so withField() can't be used on classes.
- Only withField is public; the other names are module-private. I think this makes its usage simple. The usage syntax of withField is not wonderful, but I think it's acceptable. 
- All this is far from being the nice composable lenses of Haskell: http://www.haskellforall.com/2012/01/haskell-for-mainstream-programmers_28.html
- Adding a related higher-order function that behaves like this 'alter' is possible: it takes another function as input and returns the record with the given function applied to the desired field: http://hackage.haskell.org/packages/archive/lenses/0.1.2/doc/html/Data-Lenses.html#v%3Aalter

Bye, bearophile
Re: Binary compatibility on Linux
I would say supporting distributions which are no longer supported by the distributions themselves is of very little value. So for Ubuntu, the last still-supported LTS version should be old enough. I think virtually nobody is using anything older, especially not 06.XX! And if they do, then they will have a whole bunch of other problems besides not being able to use your program. Best regards, Robert On Sat, 2012-11-10 at 20:01 +0100, Jacob Carlborg wrote: On 2012-11-10 19:49, Jordi Sayol wrote: Ubuntu 10.04.4 LTS is old enough? I have no idea. I don't know how often people update their Linux systems and how compatible different distributions are. Since I'm not using Linux as my primary platform I was hoping someone else could answer this. What is the oldest system I need to reasonably support? I'm mostly talking about tools and libraries for the D community here.
Re: Binary compatibility on Linux
On Sat, 10 Nov 2012 13:01:27 -0600, Jacob Carlborg d...@me.com wrote: On 2012-11-10 19:49, Jordi Sayol wrote: Ubuntu 10.04.4 LTS is old enough? I have no idea. I don't know how often people update their Linux systems and how compatible different distributions are. Since I'm not using Linux as my primary platform I was hoping someone else could answer this. What is the oldest system I need to reasonably support? I'm mostly talking about tools and libraries for the D community here. Oldest system to reasonably support? I would say Debian Stable. It is used on a lot of server systems and isn't *too* far behind/old. If you need newer versions of packages, debian has its testing and experimental branches. I don't remember if they still do, but Ubuntu used to take a snapshot of Debian Sid to base its packages on. Linux Mint Debian gets its packages from debian testing, I believe. The point I'm making is that Debian is pretty much the upstream repo. You can go as far as to test versions that haven't made it into Ubuntu or Mint yet. If people are using older versions than Debian Stable, then you should probably forget about them. Either they will cherry-pick the versions they need, or they are not interested in anything new and untested. Just my two cents as an ex server admin.
Re: deprecate deprecated?
On 09-11-2012 19:06, Kagamin wrote: On Friday, 9 November 2012 at 08:49:28 UTC, Walter Bright wrote: On 11/8/2012 12:13 AM, Don Clugston wrote: That *cannot* fix the problem. The problem is not with the deprecated attribute at all, it's with the command line switches. Of interest: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3394.html Looks like another D feature moving into C++! I'd say, gcc's attribute system is used better than in D. http://gcc.gnu.org/onlinedocs/gcc-4.7.2/gcc/Function-Attributes.html#Function-Attributes They use attributes for const, pure, nothrow, dllimport, while D uses keywords. And they are tedious as hell to type. -- Alex Rønne Petersen a...@lycus.org http://lycus.org
Re: Binary compatibility on Linux
On Saturday, 10 November 2012 at 19:15:28 UTC, 1100110 wrote: If people are using older versions than Debian Stable, then you should probably forget about them. Either they will cherry-pick the versions they need, or they are not interested in anything new and untested. Just my two cents as an ex server admin. This seems like a reasonable policy to me as well. David
Re: std.signals2 proposal
On Fri, 2012-11-09 at 19:28 +0100, Kagamin wrote: Huh? I don't get it. Didn't you want weak ref semantics for signals? Why do you want strong ref semantics now? There is a distinction between the context of a delegate, which is used for parameter transformation or other advanced stuff, and the final destination object. For the first one it is very likely that only the signal has a reference to it (think of lambdas), and thus the signal holds a strong ref to it. For the object whose method gets eventually invoked, the signal does not hold a strong ref; instead it simply drops the slot when the object gets deleted. In your example, to make it work with weak ref semantics with the new signal implementation:

_tab.closed.connect(this, (obj, sender, args) => obj.Dispose());

instead of:

_tab.closed.connect((sender, args) => this.Dispose());

(obj, sender, args) => obj.Dispose() is in this case just a function or a delegate with a null ptr as context. But if there were a context, the signal would keep it in memory. The object which gets explicitly passed to the delegate via obj is only weakly referenced from the signal. The whole purpose is to make indirect connections to an object's method possible, for parameter transformations, parameter omissions, for providing additional parameters, ... If you want a direct connection you would use the simpler:

signal.connect!Dispose(this);

as explained in my initial post.
Re: Binary compatibility on Linux
On 2012-11-10 20:17, Robert wrote: I would say supporting distributions which are no longer supported by the distributions itself is of very little value. So for Ubuntu the last still supported LTS version should be old enough. I think virtually nobody is using anything older, especially not 06.XX! And if they do, then they will have a whole bunch of other problems than not being able to use your program. I just picked the 6.x version to be sure it was compatible with everything else. You say the latest LTS, but the LTS are supported for five years. Don't they release new LTS more often than that? According to this https://wiki.ubuntu.com/LTS They release a new LTS every two years and they're supported for five years. If I pick Ubuntu 12.04, which is the latest LTS, they still support 10.04 until 2013. -- /Jacob Carlborg
Re: Binary compatibility on Linux
On 10/11/12 21:18, Jacob Carlborg wrote: On 2012-11-10 20:17, Robert wrote: I would say supporting distributions which are no longer supported by the distributions themselves is of very little value. So for Ubuntu the last still supported LTS version should be old enough. I think virtually nobody is using anything older, especially not 06.XX! And if they do, then they will have a whole bunch of other problems than not being able to use your program. I just picked the 6.x version to be sure it was compatible with everything else. You say the latest LTS, but the LTS are supported for five years. Don't they release new LTS more often than that? According to this https://wiki.ubuntu.com/LTS They release a new LTS every two years and they're supported for five years. If I pick Ubuntu 12.04, which is the latest LTS, they still support 10.04 until 2013. From Ubuntu 12.04 (April 2012), LTS has 5 years of support for Desktop and server versions. Before this, LTS for Desktop had 3 years of support, so the last Ubuntu Desktop still supported is 10.04 (April 2010), and its support will finish in April 2013. -- Jordi Sayol
Re: Binary compatibility on Linux
You say the latest LTS, but the LTS are supported for five years. Don't they release new LTS more often than that? According to this https://wiki.ubuntu.com/LTS They release a new LTS every two years and they're supported for five years. I am sorry. I haven't quite said what I meant. I meant: The oldest still supported LTS of course :-) Also there are very few desktop users using such an old version, so if you don't target server systems, you might even relax your requirements further. But with the oldest still supported LTS release you are pretty much on the safe side. Best regards, Robert
Re: Binary compatibility on Linux
From Ubuntu 12.04 (April 2012), LTS has 5 years of support for Desktop and server versions. Before this, LTS for Desktop has 3 years support, so the last Ubuntu Desktop still supported is 10.04 (April 2010) and will finish in April 2013. s/last/oldest/ -- Jordi Sayol
Re: Binary compatibility on Linux
On 10.11.2012 19:54, Jacob Carlborg wrote: On 2012-11-10 18:39, Paulo Pinto wrote: I guess the right answer is to have everything compiled statically, especially if you need compatibility across distributions. I just read somewhere that compiling it statically will make it _less_ compatible than compiling it dynamically. http://stackoverflow.com/questions/8657908/deploying-yesod-to-heroku-cant-build-statically/8658468#8658468 Oh, the wonder of Linux based systems. :(
Re: Pyd thread
On 11/09/2012 11:33 PM, Russel Winder wrote: What is guaranteed now is that the ctypes package works in CPython and PyPy and would seem to be the right API for people interested in using Python with D. If the ctypes overhead is significant to the execution then writing a native code extension is probably the wrong solution to the problem? Never used ctypes. How difficult would it be to get python objects/functions to the extension side?
precise gc?
Hey party people! What is the current state? Is it enough to store a pointer in a ptrdiff_t variable instead of a pointer for the GC to ignore it or is my current trick of simply inverting its value required? If the trick is required, is it safe? Are there memory models in use where the inverted pointer value might also be in GC memory? Thanks! Robert
Re: precise gc?
On Sat, 10 Nov 2012 23:17:41 +0100 eskimo jfanati...@gmx.at wrote: Hey party people! What is the current state? Is it enough to store a pointer in a ptrdiff_t variable instead of a pointer for the GC to ignore it or is my current trick of simply inverting its value required? I'm not sure I understand why you would hide a pointer from the GC. Are there memory models in use where the inverted pointer value might also be in GC memory? Yes, that can happen in 32-bit.
Re: precise gc?
On Saturday, 10 November 2012 at 22:52:56 UTC, Nick Sabalausky wrote: On Sat, 10 Nov 2012 23:17:41 +0100 eskimo jfanati...@gmx.at wrote: Hey party people! What is the current state? Is it enough to store a pointer in a ptrdiff_t variable instead of a pointer for the GC to ignore it or is my current trick of simply inverting its value required? I'm not sure I understand why you would hide a pointer from the GC. It might happen by accident, like when you're doing something 'clever' like using them good old fashioned xor-linked lists...
Re: Settling rvalue to (const) ref parameter binding once and for all
On Saturday, 10 November 2012 at 18:20:56 UTC, Manu wrote: Hear hear! I have dreams at night that look exactly like this proposal! :) I think I had one just last night, and woke up with a big grin on my face... I'm glad I'm not alone on this. :) 2) rvalues: prefer pass-by-value (moving: argument allocated directly on callee's stack (parameter) vs. pointer/reference indirection implied by pass-by-ref) Is this actually possible? It surely is and would only require the compiler to map the location of the parameter in the callee's future stack frame to the caller's stack frame. I think this is how DMD implements it. Does the C/C++ ABI support such an action? GDC and LDC use the C ABI verbatim, so can this work, or will they have to, like usual, allocate on the caller's stack, and pass the ref through. I don't really see a significant disadvantage to that regardless. I'm not sure how C++ does it, I was wondering about that too actually after posting this. It quite possibly uses real references for '(const) T&&' (otherwise how would you be able to transform a passed lvalue reference to an rvalue reference via std::move() without copying it?), so my table may be incorrect for these 2 cells, although that wouldn't change anything really for the proposal, unless someone showed a use case for 'const T&&' where 'const ref T' doesn't fit (I certainly can't think of any).
Re: Immutable and unique in C#
On Sat, 10 Nov 2012 15:41:21 +0100, Sönke Ludwig slud...@outerproduct.org wrote: Enhancement request: http://d.puremagic.com/issues/show_bug.cgi?id=8993 It's true that we avoid shared because it isn't finalized and in its current state is a more or less broken feature. It also highlights the bluntness of the casts again, that are needed to use it. *sigh* If it takes 5 full-time researchers to come up with that type system extension then so be it. I hope it can be applied to D and resolves the 'shared' situation. -- Marco
Re: precise gc?
On Saturday, 10 November 2012 at 22:52:56 UTC, Nick Sabalausky wrote: On Sat, 10 Nov 2012 23:17:41 +0100 eskimo jfanati...@gmx.at wrote: Hey party people! What is the current state? Is it enough to store a pointer in a ptrdiff_t variable instead of a pointer for the GC to ignore it or is my current trick of simply inverting its value required? I'm not sure I understand why you would hide a pointer from the GC. For weak references mainly. For example, caching, weak event subscribers, and a fair few other things.
Re: Performance of hashes and associative arrays
Hello, Thanks for this complete answer. I will take a look at your code. Additionally, Ali gave me a really interesting link about hashing, good practices, what is efficient, etc. If you didn't read it, it might interest you. Here it is: http://eternallyconfuzzled.com/tuts/algorithms/jsw_tut_hashing.aspx On 07/11/2012 13:10, Dan wrote: On Wednesday, 7 November 2012 at 06:38:32 UTC, Raphaël Jakse wrote: [...] Ali agreed that concatenating strings each time would indeed be inefficient. He thought we might cache the value (third solution): Interesting about caching the hashcode; on large classes it could save you. But isn't the signature shown const? Would that compile? Indeed, I didn't test this code, I just wanted to make explicit what I wanted to say. Thank you for the notice. Also, what if you change the strings - you get the wrong hash? I suppose you could limit writes to properties and null the hashcode on any write. Yes, good question and good idea, I think I would do it this way. I didn't think about it. Questions are: - what is the most efficient solution, and in which case? No string concatenation is good. I think a single pass on all important data (in most cases all the data) is the goal. I'm not sure I understood well. You wanted to say that string concatenations are good, right? I was thinking about a hash function that would take several arguments and hash them together. That would let us take into account more than one string in the hash while avoiding concatenation. [...] Thanks Dan
Re: Compilable Recursive Data Structure ( was: Recursive data structure using template won't compile)
Timon Gehr wrote: In theory yes, but [...] What a pity. Because in the code given only the types Elem!0 and Elem!1 must actually be initialized. The fact that the specification of the template describes a family of types with an infinite number of members should not force the front end to check whether all those members are initializable. If the executable is not allowed to build new types, which seems to be the canonical case, it is sufficient for the front end to check the initializability of those members for which an initialization is imperative. This polishes my claim in digitalmars.D.learn:40939. To me it now appears to be a bug when the front end assumes that the executable is allowed to build new types. -manfred
Re: attribute bug?
On 2012-11-10 06:28, Jonathan M Davis wrote: package restricts access to the same package. D has no concept like C#'s internal, because it doesn't have assemblies. I'm not entirely sure how assemblies work in C# but couldn't one say that everything in D is internal unless explicitly marked as export? -- /Jacob Carlborg
Re: Recursive data structure using template won't compile
Rob T wrote: I want to create a simple recursive data structure as follows: struct R { int value; d_list!R Rlist; } I do not see any usage for the member `d_list!R Rlist'. Please explain. -manfred
Re: Compilable Recursive Data Structure ( was: Recursive data structure using template won't compile)
Nick Sabalausky wrote: I really don't see the relevance Please look at the definition of R: struct R { int value; d_list!R Rlist; } If no recursion was wanted the OP should have written: d_list!(R*) Rlist; In digitalmars.D.learn:40990 I already asked for an explanation. -manfred
Re: Compilable Recursive Data Structure ( was: Recursive data structure using template won't compile)
Rob T wrote: and the problem I'm experiencing is definitely a compiler bug I do not see that. Please work on my messages digitalmars.D.learn:40990 and digitalmars.D.learn:40991. -manfred
Re: Compilable Recursive Data Structure ( was: Recursive data structure using template won't compile)
Nick Sabalausky wrote: But the OP was never trying to do anything like that. See digitalmars.D.learn:40991 -manfred
Casting an array form an associative array
The following example: import std.stdio; void main() { void[][size_t] aa; aa[1] = [1, 2, 3]; if (auto a = 1 in aa) { writeln(*(cast(int[]*) a)); writeln(cast(int[]) *a); } } Will print: [1, 2, 3, 201359280, 0, 0, 0, 0, 0, 0, 0, 0] [1, 2, 3] The first value seems to contain some kind of garbage. Why don't these two cases result in the same value? -- /Jacob Carlborg
Re: Compilable Recursive Data Structure ( was: Recursive data structure using template won't compile)
On Sat, 10 Nov 2012 10:33:39 +0000 (UTC) Manfred Nowak svv1...@hotmail.com wrote: Nick Sabalausky wrote: I really don't see the relevance Please look at the definition of R: struct R { int value; d_list!R Rlist; } If no recursion was wanted the OP should have written: d_list!(R*) Rlist; Ok, I see what you're saying, but you're mistaken: That line d_list!R Rlist; is not a problematic recursion. Imagine if d_list had been defined like this: struct d_list(T) { int i; } Then would this still be problematic recursion?: struct R { d_list!R Rlist; } No, because R is never actually used anywhere in that d_list (only int is used). In this case, R is nothing more than part of the *name* of a particular instantiation of the d_list template. And indeed, just like the above example, the OP's definition of d_list also does *not* use R: struct d_list( T ) { node* head; node* tail; } Now, yes, that node type does use R (instead of R*), *but* head and tail are merely pointers to node, so it's ok. In digitalmars.D.learn:40990 I already asked for an explanation. Actually, my newsreader is kinda shitty, and (AFAIK) doesn't give me any way to look up a message by ID, so I'm not really sure which message you're referring to :/
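To make that distinction concrete, here is a minimal compilable version of the shape being discussed (field names are illustrative; the OP's actual d_list has more members):

```d
struct d_list(T)
{
    struct node
    {
        T data;
        node* next;  // pointer to node: fixed size, so no infinite nesting
    }
    node* head;  // d_list!T itself only stores two pointers...
    node* tail;
}

struct R
{
    int value;
    d_list!R Rlist;  // ...so R's size is finite: an int plus two pointers
}
```

The recursion would only become problematic if d_list stored a `T` (or a `node`) by value directly, because then R would have to contain itself.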
Re: Performance of hashes and associative arrays
On Saturday, 10 November 2012 at 07:55:18 UTC, Raphaël Jakse wrote: Hello, Thanks for this complete answer. I will take a look to your code. Ok - good. I've been using 2.061 which I just realized allows dup on an associative array, a feature which was not available in 2.060. So mixin(Dup) and the unit tests require 2.061. If you didn't read it, it might interest you. Here it is: I had not seen it and will read - thanks. Questions are : - what is the most efficient solution, and in which case ? No string concatenation is good. I think a single pass on all important data (in most cases is all the data) is the goal. I'm not sure I understood well. You wanted to say that string concatenations are good, right ? I just meant string concatenation is likely unnecessary. Imagine two one-megabyte strings. It is easy to concat them and call the string hash function on the result, but you have to create a 2 MB string first. Alternatively, you could hash each and combine the hash codes in some consistent way. I was thinking about a hash function that would take several arguments and hash them together. That would let take in account more than one string in the hash while avoiding concatenation. Yes. Thanks Dan
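A sketch of the no-concatenation approach Dan describes: hash each field separately and fold the results together. The combining function here (multiply by a prime, then add) is just a common convention, not what druntime itself uses:

```d
// Fold a new field hash into an accumulated seed.
size_t combine(size_t seed, size_t h)
{
    return seed * 31 + h;
}

// Hash two strings without building a temporary concatenated string:
// one pass over each string, no 2 MB intermediate allocation.
size_t hashFields(string firstName, string lastName)
{
    size_t h = 0;
    h = combine(h, typeid(string).getHash(&firstName));
    h = combine(h, typeid(string).getHash(&lastName));
    return h;
}
```

Because the seed is multiplied before each addition, the field order matters, so ("ab", "c") and ("a", "bc") do not trivially collide the way they would if the strings were concatenated first and a separator omitted.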
A refinement for pure implementation
Do you remember if Hara has implemented a patch to allow a2 to be immutable? int[] foo1(int x) pure { return null; } int[] foo2(string s) pure { return null; } void main() { immutable a1 = foo1(10); // OK immutable a2 = foo2("hello"); // currently error } The idea behind this is that the type of s is different from the return type of foo2, so foo2 is a strongly pure function. Bye, bearophile
Re: Is there a way to initialize a non-assigned structure declaration (or is it a definition)?
On Sat, 10 Nov 2012 00:35:05 +0100 Too Embarrassed To Say khea...@eapl.org wrote: auto p3 = Parameterized!(int, double, bool, char)(57, 7.303, false, 'Z'); // compiles // but not // Parameterized!(int, double, bool, char)(93, 5.694, true, 'K') p4; That's as expected. Variable declarations are of the form: Type varName; // or Type varName = initialValue; (In the second form, auto is optionally allowed to stand in for the type.) And struct literals (ie the actual values of a struct type) are of the form: Type(params) So: - Parameterized is a template - Parameterized!(int, double, bool, char) is a type. - Parameterized!(int, double, bool, char)(93, 5.694, true, 'K') is a *value* of the above type, it's *not* a type. So when you say: Parameterized!(int, double, bool, char)(93, 5.694, true, 'K') p4; That's a value, not a type. So that's just like saying: 5 myInt; // or "Hello" myStr; Which doesn't make sense. What you wanted to say was: int myInt = 5; // or auto myInt = 5; // or string myStr = "hello"; // or auto myStr = "hello"; Therefore, you have to say: auto p4 = Parameterized!(int, double, bool, char)(93, 5.694, true, 'K'); Because *that* is of the form: Type varName = initialValue; If you want an easier way to do it, you can do this: alias Parameterized!(int, double, bool, char) MyType; auto p4 = MyType(93, 5.694, true, 'K'); Or, like Ali said, you can make a convenience function.
Re: A refinement for pure implementation
On 11/10/2012 03:32 PM, bearophile wrote: Do you remember if Hara has implemented a patch to allow a2 to be immutable? int[] foo1(int x) pure { return null; } int[] foo2(string s) pure { return null; } void main() { immutable a1 = foo1(10); // OK immutable a2 = foo2("hello"); // currently error } The idea behind this is that the type of s is different from the return type of foo2, so foo2 is a strongly pure function. Bye, bearophile It is strongly pure regardless of potential aliasing in the return value. This is a bug.
Re: Compilable Recursive Data Structure ( was: Recursive data structure using template won't compile)
On 11/10/2012 10:12 AM, Manfred Nowak wrote: Timon Gehr wrote: In theory yes, but [...] What a pity. Because in the code given only the types Elem!0 and Elem!1 must be indeed initialized. ... In this specific case, yes. But as this is an undecidable property in general, detecting and exploiting it would merely be an optimization that cannot generally be relied upon. Depending on how powerful it is, it would slow down analysis most of the time. Furthermore, I do not see use cases. I'll look into it when the front end is finished though.
Re: A refinement for pure implementation
Timon Gehr: It is strongly pure regardless of potential aliasing in the return value. This is a bug. This can't be strongly pure: int[] foo2(int[] a) pure { a[0]++; return a; } Bye, bearophile
Re: Casting an array form an associative array
On 11/10/2012 01:20 PM, Jacob Carlborg wrote: The following example: void main() { void[][size_t] aa; aa[1] = [1, 2, 3]; if (auto a = 1 in aa) { writeln(*(cast(int[]*) a)); writeln(cast(int[]) *a); } } Will print: [1, 2, 3, 201359280, 0, 0, 0, 0, 0, 0, 0, 0] [1, 2, 3] The first value seems to contain some kind of garbage. Why don't these two cases result in the same value? The length of an array is the number of elements. sizeof(void)==1 and sizeof(int)==4. The first example reinterprets the ptr and length pair of the void[] as a ptr and length pair of an int[]. The second example adjusts the length so that the resulting array corresponds to the same memory region.
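Timon's explanation can be checked directly. In D, `int.sizeof` is 4, so the same memory region is 12 elements long as `void[]` but 3 elements long as `int[]`:

```d
void main()
{
    int[] ints = [1, 2, 3];
    void[] raw = ints;             // any array converts implicitly to void[]
    assert(raw.length == 12);      // length now counts bytes: 3 * int.sizeof

    int[] back = cast(int[]) raw;  // the cast rescales length: 12 / 4 == 3
    assert(back.length == 3);
    assert(back == [1, 2, 3]);

    // Reinterpreting the (ptr, length) pair instead keeps length == 12,
    // which is the garbage-printing case from the original post:
    int[] bad = *cast(int[]*) &raw;
    assert(bad.length == 12);      // claims 12 ints, reads past the real data
}
```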
Re: Compilable Recursive Data Structure ( was: Recursive data structure using template won't compile)
Timon Gehr wrote: But as this is an undecidable property in general I do not see that the compiler has to solve the general case, at least when compiling monolithic code and the executable is only allowed to use types which are initialized at compile time. When several modules are used, the modules might have to follow some restrictions, and I am currently not able to specify those restrictions. -manfred
Re: A refinement for pure implementation
On 11/10/2012 05:21 PM, bearophile wrote: Timon Gehr: It is strongly pure regardless of potential aliasing in the return value. This is a bug. This can't be strongly pure: int[] foo2(int[] a) pure { a[0]++; return a; } Bye, bearophile The point was that the code you gave should work even without your proposed enhancement.
Re: Casting an array form an associative array
On 2012-11-10 17:48, Timon Gehr wrote: The length of an array is the number of elements. sizeof(void)==1 and sizeof(int)==4. The first example reinterprets the ptr and length pair of the void[] as a ptr and length pair of an int[]. The second example adjusts the length so that the resulting array corresponds to the same memory region. Ok, thanks for the explanation. -- /Jacob Carlborg
Re: A refinement for pure implementation
Timon Gehr: The point was that the code you gave should work even without your proposed enhancement. So my original question was: do you remember if Hara has already written a patch to fix that bug? :-) Bye, bearophile
Re: Is there a way to initialize a non-assigned structure declaration (or is it a definition)?
I appreciate all the helpful replies, but I've simplified things to what I believe is the core issue. In C++ (at the risk of becoming a heretic) the language allows me to do the following: struct SnonParameterized { public: int t; float u; SnonParameterized(int tparam, float uparam); }; SnonParameterized::SnonParameterized(int tparam, float uparam) { t = tparam; u = uparam; } SnonParameterized snp(5, 3.303); // this compiles with Visual C++ 2010 === Now with D, I try (what I think is identical semantics) the following: struct SnonParameterized { int t; float u; this(int t, float u) { this.t = t; this.u = u; } } SnonParameterized cnp(5, 3.303); // fails compile with Error: found 'cnp' when expecting ';' following statement auto hi = SnonParameterized(5, 3.303); // compiles of course. I'm just trying to understand why D disallows the non-assignment syntax. Probably for a very good (and obvious) reason.
Re: Performance of hashes and associative arrays
On 11/07/2012 07:38 AM, Raphaël.Jakse raphael.ja...@gmail.com@puremagic.com wrote: We want to be able to get the hash of s. Therefore, we re-implement the toHash method of the Student class : OK, now I'm curious. Assuming I don't write a custom re-implementation, how would a custom struct or class be hashed? (What about a tuple?) I ask because I'm considering associative arrays where the key is a custom class or tuple as part of a current coding project.
Re: attribute bug?
On Saturday, November 10, 2012 08:28:00 goofwin wrote: I think that it is unsuccessful decision by the language designers, because object oriented code use virtual functions not much in most cases, thence it is useless and bad for performance or it causes developer to set public and protected functions as final explicitly very much. Object-oriented code not use virtual functions much? If you don't need virtual functions, then use a struct, not a class. Classes are polymorphic. Structs are not. In general, it doesn't make a lot of sense to use a class in D if you don't need polymorphism. And if you do need to but don't want the functions to be virtual (e.g. you want to use a class, because you want a reference type without going to the trouble of making a struct a reference type), then you can just make all of the class' public functions final. But that's not the normal case at all. - Jonathan M Davis
Re: Compilable Recursive Data Structure ( was: Recursive data structure using template won't compile)
On 11/09/12 23:45, Timon Gehr wrote: On 11/09/2012 10:24 PM, Philippe Sigaud wrote: Timon: The D front end I am developing can already handle it. Developed in D, I suppose? Yes. Public? License? URL? artur
Re: attribute bug?
On Saturday, November 10, 2012 11:03:31 Jacob Carlborg wrote: On 2012-11-10 06:28, Jonathan M Davis wrote: package restricts access to the same package. D has no concept like C#'s internal, because it doesn't have assemblies. I'm not entirely sure how assemblies work in C# but couldn't one say that everything in D is internal unless explicitly marked as export? I guess, but that's purely a Windows dll thing. Linux isn't stupid enough to be affected by export at all (export is one of my biggest pet peeves with Windows - I absolutely hate it; it's just caused me way too much trouble). - Jonathan M Davis
Re: Performance of hashes and associative arrays
On Saturday, 10 November 2012 at 18:18:07 UTC, Joseph Rushton Wakeling wrote: On 11/07/2012 07:38 AM, Raphaël.Jakse raphael.ja...@gmail.com@puremagic.com wrote: We want to be able to get the hash of s. Therefore, we re-implement the toHash method of the Student class : OK, now I'm curious. Assuming I don't write a custom re-implementation, how would a custom struct or class be hashed? (What about a tuple?) I ask because I'm considering associative arrays where the key is a custom class or tuple as part of a current coding project. Not sure I understand the question. But here is how I'm doing it. No guarantees, but output looks promising. Code following output. Thanks Dan - true false false true 7F053B2DCFC0 20883845 vs 20883845 - import std.stdio; import std.traits; import std.typecons; import opmix.mix; struct S { mixin(HashSupport); alias Tuple!(int, char, string) X; X x; char[] mutable; } void main() { S s1 = { tuple(3, 'a', "foo".idup), ['a','b'].dup }; S s2 = { tuple(3, 'a', "foo".idup), ['a','b'].dup }; writeln(s1 == s2); s1.x[0]++; writeln(s1 == s2); writeln(s1 < s2); writeln(s2 < s1); writeln(s1 in [ s1: 3 ]); s2.x[0]++; writeln(s1.toHash(), " vs ", s2.toHash()); }
Serialization library with support for circular references?
I've been using msgpack for a while, unfortunately I've just discovered it doesn't support serializing circular references (http://jira.msgpack.org/browse/MSGPACK-81), e.g.: import msgpack; class Foo { int x; Bar obj; } class Bar { int x; Foo obj; } void main() { auto foo = new Foo(); auto bar = new Bar(); foo.obj = bar; bar.obj = foo; ubyte[] data = msgpack.pack(foo); } Orange doesn't work with circular references either. Is there any other serialization library that supports this scenario? I'm looking for something fast, and binary format is ok, I don't need a user-readable format.
Re: Serialization library with support for circular references?
On 2012-11-10 13:42, Andrej Mitrovic wrote: Orange doesn't work with circular references either. Is there any other serialization library that supports this scenario? I'm looking for something fast, and binary format is ok, I don't need a user-readable format. If Orange doesn't work with circular references it's a bug, please file an issue: https://github.com/jacob-carlborg/orange Although Orange is probably not very fast and it currently only has an XML archive. -- /Jacob Carlborg
Re: Serialization library with support for circular references?
On 11/10/12, Jacob Carlborg d...@me.com wrote: Although Orange is probably not very fast and it currently only has an XML archive. Heh yeah I'm actually reading XML into classes and then want to serialize this for faster access when re-running the app, so it probably wouldn't be a good idea to serialize back to XML.
Re: Serialization library with support for circular references?
11/10/12, Andrej Mitrovic andrej.mitrov...@gmail.com wrote: I've been using msgpack for a while, unfortunately I've just discovered it doesn't support serializing circular references (http://jira.msgpack.org/browse/MSGPACK-81) Anyway I think I'll be able to hack-in circular reference support to msgpack, but it's not going to be compatible with the protocol (well a new Header type will have to be introduced). It's not a big issue for me since I only use it offline. I'll post a link to the fork when I'm done.
Re: Serialization library with support for circular references?
You can try vibe.d bson serialization. http://vibed.org/api/vibe.data.bson/serializeToBson On Saturday, 10 November 2012 at 21:23:03 UTC, Andrej Mitrovic wrote: 11/10/12, Andrej Mitrovic andrej.mitrov...@gmail.com wrote: I've been using msgpack for a while, unfortunately I've just discovered it doesn't support serializing circular references (http://jira.msgpack.org/browse/MSGPACK-81) Anyway I think I'll be able to hack-in circular reference support to msgpack, but it's not going to be compatible with the protocol (well a new Header type will have to be introduced). It's not a big issue for me since I only use it offline. I'll post a link to the fork when I'm done.
Re: Serialization library with support for circular references?
On 11/10/12, nixda b...@or.de wrote: You can try vibe.d bson serialization. http://vibed.org/api/vibe.data.bson/serializeToBson It doesn't handle them either. Anyway I've implemented it for msgpack (took a whole of 30 minutes, it's a great and readable codebase), I just have to write some more extensive unittests to make sure everything works ok.
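For the record, the standard way to add cycle support to a serializer (whatever msgpack's actual encoding ends up looking like) is to track visited objects and emit back-references. A toy sketch with a made-up textual format, just to show the mechanism:

```d
import std.conv : to;

class Node
{
    int value;
    Node next;
}

// `seen` maps each already-visited object to an id, so a cycle becomes
// a "ref:<id>" token instead of infinite recursion.
string serialize(Node n, ref size_t[Node] seen)
{
    if (n is null)
        return "null";
    if (auto p = n in seen)
        return "ref:" ~ to!string(*p);   // back-reference, cycle broken
    seen[n] = seen.length;               // assign the next id on first visit
    return "obj:" ~ to!string(n.value) ~ ",next=" ~ serialize(n.next, seen);
}
```

Deserialization does the inverse: it records each decoded object under its id and patches "ref" tokens into references to the already-built objects.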
Re: Performance of hashes and associative arrays
On 11/10/2012 05:37 AM, Joseph Rushton Wakeling wrote: On 11/07/2012 07:38 AM, Raphaël.Jakse raphael.ja...@gmail.com@puremagic.com wrote: We want to be able to get the hash of s. Therefore, we re-implement the toHash method of the Student class : OK, now I'm curious. Assuming I don't write a custom re-implementation, how would a custom struct or class be hashed? (What about a tuple?) I ask because I'm considering associative arrays where the key is a custom class or tuple as part of a current coding project. For classes, because they are reference types, it is the object identity that determines the hash value (probably the pointer of the actual object). As a result, even when the values of the members are the same, two objects have different hash values. For structs, because they are value types, it is the bytes that make up the object that determine the hash value. But beware: The associative array members of structs are not hashed by the values of their elements. (There was a long discussion recently on the main newsgroup on this topic.) Ali
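Ali's two cases can be checked directly, assuming the default toHash implementations (no user-defined overrides):

```d
class C { int x; this(int x) { this.x = x; } }
struct S { int x; }

void main()
{
    auto c1 = new C(1);
    auto c2 = new C(1);
    // Default Object.toHash is identity-based, so distinct objects
    // hash differently even with equal fields:
    assert(typeid(C).getHash(&c1) != typeid(C).getHash(&c2));

    auto s1 = S(1);
    auto s2 = S(1);
    // Default struct hashing uses the bytes of the value, so equal
    // fields give equal hashes:
    assert(typeid(S).getHash(&s1) == typeid(S).getHash(&s2));
}
```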
no size for type nothrow extern (Windows) int()
I want to call Windows api function VirtualAlloc. I include std.c.windows.windows but get the following error from dmd 2.061 (and 2.060): Error: no size for type nothrow extern (Windows) int() What does this message mean? If I provide my own prototype of the function, same error.
Re: no size for type nothrow extern (Windows) int()
On 11-11-2012 02:49, cal wrote: I want to call Windows api function VirtualAlloc. I include std.c.windows.windows but get the following error from dmd 2.061 (and 2.060): Error: no size for type nothrow extern (Windows) int() What does this message mean? If I provide my own prototype of the function, same error. Can you give a self-contained repro that illustrates the problem? Also, please use core.sys.windows.windows instead. The std.c.* package is going to be deprecated soon-ish. -- Alex Rønne Petersen a...@lycus.org http://lycus.org
Re: no size for type nothrow extern (Windows) int()
On Sunday, 11 November 2012 at 02:55:09 UTC, Alex Rønne Petersen wrote: Can you give a self-contained repro that illustrates the problem? Also, please use core.sys.windows.windows instead. The std.c.* package is going to be deprecated soon-ish. import core.sys.windows.windows; void main() { FARPROC addr1, addr2; auto ptr = VirtualAlloc(null, addr2-addr1, 0x1000, 0x40); } I just discovered while reducing it what the error message means: FARPROC is not void* like I thought, but (int function()), so it can't be used like I tried to do above. The compiler message was a little cryptic, but obvious in hindsight. Thanks!
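For completeness, a sketch of what the original call presumably intended: since FARPROC is a function-pointer type with no element size, do the arithmetic through size_t (Windows-only; MEM_COMMIT and PAGE_EXECUTE_READWRITE are the named forms of the 0x1000 and 0x40 literals):

```d
import core.sys.windows.windows;

void main()
{
    FARPROC addr1, addr2;
    // Subtracting FARPROCs directly fails ("no size for type ..."),
    // so convert to integers first:
    auto len = cast(size_t) addr2 - cast(size_t) addr1;
    auto ptr = VirtualAlloc(null, len, MEM_COMMIT, PAGE_EXECUTE_READWRITE);
}
```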
Re: attribute bug?
On Saturday, 10 November 2012 at 18:35:57 UTC, Jonathan M Davis wrote: Object-oriented code not use virtual functions much? If you don't need virtual functions, then use a struct, not a class. Classes are polymorphic. Structs are not. In general, it doesn't make a lot of sense to use a class in D if you don't need polymorphism. And if you do need to but don't want the functions to be virtual (e.g. you want to use a class, because you want a reference type without going to the trouble of making a struct a reference type), then you can just make all of the class' public functions final. But that's not the normal case at all. - Jonathan M Davis You didn't understand me. I mean that fewer than 30% of the public functions in a typical class must be virtual in most cases. And the fact that I need to use 'final' for the other 70% of public functions is not comfortable. It is my humble opinion. Sorry for my bad English.