Re: DIP11: Automatic downloading of libraries
On Mon, 20 Jun 2011 17:32:32 -0500, Andrei Alexandrescu wrote: > On 6/20/11 4:28 PM, Jacob Carlborg wrote: >> BTW has std.benchmark gone through the regular review process? > > I was sure someone would ask that at some point :o). The planned change > was to add a couple of functions, but then it got separated into its own > module. If several people think it's worth putting std.benchmark through > the review queue, let's do so. I'm sure the quality of the module will > benefit. I think we should. Also, now that TempAlloc isn't up for review anymore, and both std.log and std.path have to be postponed a few weeks, the queue is open. :) -Lars
Re: NG traffic overloaded?
On 2011-06-20 23:36, Jesse Phillips wrote: Is it just me or have others not been able to access the NG. Most recently it had a message say the load was 22.24. If it is getting too much traffic, that is pretty cool, but might not be good to continue. I had some problems accessing the NG. -- /Jacob Carlborg
Re: Should protection attributes disambiguate?
Nick Sabalausky Wrote: > And since visibility without access is useless One possibility: the visibility of private symbols can reveal a wrong access modifier, though it's difficult to come up with a concrete use case.
Re: Should protection attributes disambiguate?
Andrej Mitrovic Wrote: > Maybe the naming of the issue is wrong, but it has to be a bug: > http://d.puremagic.com/issues/show_bug.cgi?id=6180 > > If private symbols are not usable, then creating an object of a > private class type is a pretty major bug, methinks. Currently access modifiers are not implemented for user-defined types, lol.
Re: TempAlloc: an unusual request
On 6/20/11 8:57 PM, dsimcha wrote: On 6/20/2011 11:04 AM, Andrei Alexandrescu wrote: On 6/20/11 10:02 AM, dsimcha wrote: == Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article On 6/19/11 6:20 PM, dsimcha wrote: My other concern is that giving RegionAllocator reference semantics would, IIUC, require an allocation to allocate a RegionAllocator. Since TempAlloc is designed to avoid global GC locks/world stopping like the plague, this is obviously bad. I am hoping we can arrange things such that a RegionAllocator created from scratch initializes a new frame, whereas subsequent copies of it use that same frame. Would that work? Andrei No. I don't want every creation of a new frame to require a GC heap allocation. I don't understand why such would be necessary. Andrei Maybe I'm missing something, but how else would a RegionAllocator have reference semantics? I'm thinking along the lines of an Algebraic containing the actual state and a pointer to the original object. Andrei
Re: TempAlloc: an unusual request
On Sun, 19 Jun 2011 12:07:20 -0400, Andrei Alexandrescu wrote: On 6/19/11 5:03 AM, Lars T. Kyllingstad wrote: On Sat, 18 Jun 2011 09:00:13 -0500, Andrei Alexandrescu wrote: I'd like to kindly request the author, the review manager, and the community that TempAlloc skip this round of reviews without inclusion in Phobos, to be reviewed again later. As review manager, I don't mind postponing the review until some later time. As a community member and Phobos user, I think it would of course be preferable if TempAlloc fit into a more general allocator interface. As an active TempAlloc user, I hope it doesn't take too long before said interface is decided upon. ;) -Lars I'm thinking of defining a general interface that catches malloc, the GC, scoped allocators a la TempAlloc, and possibly compound allocators a la reaps. I'll start with an example:

struct Mallocator {
    /** Allocates a chunk of size s. */
    static void* allocate(size_t s) { return enforce(malloc(s), new OutOfMemory); }

    /** Optional: frees a chunk allocated with allocate. */
    static void free(void* p) { .free(p); }

    /** Optional: frees all chunks allocated with allocate. */
    // static void freeAll();

    /** Resizes a chunk allocated with allocate without moving it.
        Required, but may be implemented to always return false. */
    static bool resize(void* p, size_t newSize) {
        // Can't use realloc here
        return false;
    }
}

I think resize should behave like, and be named after, GC.extend: static size_t extend(void* p, size_t mx, size_t sz);
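A minimal sketch of what such an extend member might look like on Mallocator. The signature is the one suggested above; the semantics are assumed from druntime's GC.extend (try to grow the block in place, returning the block's new total size, or 0 on failure):

```d
/** Assumed GC.extend-style semantics: attempt to grow the block at p
 *  in place by at least mx and at most sz additional bytes. Returns
 *  the new total size of the block on success, or 0 on failure.
 *  Plain malloc has no portable way to grow a block in place, so
 *  this implementation always reports failure. */
static size_t extend(void* p, size_t mx, size_t sz)
{
    return 0; // caller falls back to allocate-and-copy
}
```

The nice property of this shape over a bool resize is that allocators which round up allocation sizes (regions, pools) can hand the caller the slack they already own.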
Re: Should protection attributes disambiguate?
On 2011-06-20 19:45, Nick Sabalausky wrote: > "Jonathan M Davis" wrote in message > news:mailman.1064.1308621500.14074.digitalmar...@puremagic.com... > > > On 2011-06-20 17:36, Andrej Mitrovic wrote: > >> On 6/21/11, Jonathan M Davis wrote: > >> > That's not necessarily a bug > >> > >> Maybe the naming of the issue is wrong, but it has to be a bug: > >> http://d.puremagic.com/issues/show_bug.cgi?id=6180 > >> > >> If private symbols are not usable, then creating an object of a > >> private class type is a pretty major bug, methinks. > > > > Oh, yes. That's a bug. But having two symbols which clash where one is > > private > > and the other public isn't a bug. It may be a design decision which > > merits revisiting, but as I understand it, it's a natural consequence of > > the fact that access modifiers modify access, not visibility. > > Isn't visibility a form of access? > > Regardless, I think it's clear that the whole point of a private "access" > modifier is to make things *private* and safely encapsulated. The current > situation clearly breaks this. And since visibility without access is > useless, I don't see any reason to even get into the subtle semantics of > "visibility" vs "access" at all. It matters for stuff like NVI (non-virtual interface). In that particular case, you override a private function but you can't call it. You couldn't override it if you couldn't see it. So, there _are_ cases where it matters, and it _is_ an important distinction. It's just that it matters infrequently enough that most people don't realize that there is such a distinction. But distinction or not, I don't see why we couldn't just make it so that any attempt to use a symbol where the clashing symbol can't be used anyway just doesn't clash by ignoring the private symbol in such cases. - Jonathan M Davis
Re: what to do with postblit on the heap?
On 2011-06-20 18:59, Michel Fortin wrote: > On 2011-06-20 18:12:11 -0400, "Steven Schveighoffer" > > said: > > On Mon, 20 Jun 2011 16:45:44 -0400, Michel Fortin > > > > wrote: > >> My feeling is that array appending and array assignment should be > >> considered a compiler issue first and foremost. The compiler needs to > >> be fixed, and once that's done the runtime will need to be updated > >> anyway to match the changes in the compiler. Your proposed fix for > >> array assignment is a good start for when the compiler will provide > >> the necessary info to the runtime, but applying it at this time will > >> just fix some cases by breaking a few others: net improvement zero. > > > > BTW, I now feel that your request to make a distinction between move > > and copy is not required. The compiler currently calls the destructor > > of temporaries, so it should also call postblit. I don't think it can > > make the distinction between array appending and simply calling some > > other function. > > Well, if > > a ~= S(); > > does result in a temporary which get copied and then destroyed, why > have move semantics at all? Move semantics are not just an > optimization, they actually change the semantics. If you have a struct > with a @disabled postblit, should it still be appendable? I would expect that to have move semantics. There's no need to create and destroy a temporary. It's completely wasteful. A copy should only be happening when a copy _needs_ to happen. It doesn't need to happen here. Now, depending on what ~= did internally (assuming that it were an overloaded operator), then a copy may end up occurring inside of the function, but that shouldn't happen for the built-in ~= operator, and a well-written overloaded ~= should avoid the need to copy as well. - Jonathan M Davis
Re: imports in functions
Aaaargh, I've used function imports, completely disregarding that they're not present in 2.053 and now I can't deploy code to github because people won't be able to compile it. Ahh the joys.. Well I'm glad I caught this now before embarrassing myself.
Re: Should protection attributes disambiguate?
"Jonathan M Davis" wrote in message news:mailman.1064.1308621500.14074.digitalmar...@puremagic.com... > On 2011-06-20 17:36, Andrej Mitrovic wrote: >> On 6/21/11, Jonathan M Davis wrote: >> > That's not necessarily a bug >> >> Maybe the naming of the issue is wrong, but it has to be a bug: >> http://d.puremagic.com/issues/show_bug.cgi?id=6180 >> >> If private symbols are not usable, then creating an object of a >> private class type is a pretty major bug, methinks. > > Oh, yes. That's a bug. But having two symbols which clash where one is > private > and the other public isn't a bug. It may be a design decision which merits > revisiting, but as I understand it, it's a natural consequence of the fact > that access modifiers modify access, not visibility. > Isn't visibility a form of access? Regardless, I think it's clear that the whole point of a private "access" modifier is to make things *private* and safely encapsulated. The current situation clearly breaks this. And since visibility without access is useless, I don't see any reason to even get into the subtle semantics of "visibility" vs "access" at all.
Re: TempAlloc: an unusual request
On 6/20/2011 11:10 AM, Andrei Alexandrescu wrote: On 6/20/11 10:01 AM, dsimcha wrote: == Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article I do agree with your choice of scan flags because your analysis of costs, benefits, and onus put on the user is compelling. In this case, however, it seems to me that functions that implicitly return stuff on the TempAlloc stack are paving the way towards messed-up modules that can't be reasoned about modularly. Thanks, Andrei There are two use cases for implicitly returning TempAlloc-allocated memory: 1. In a private API. If we provide an artifact good for private APIs but dangerous for true modular code, I think this is a weak argument. Hmm, I'd be fine with leaving out any blatantly obvious way to get implicit TempAlloc allocation as long as there's a backdoor in place so that it can be done if you really want to. For example, when allocating from RegionAllocator, you'd probably want to check that you're allocating from the last created RegionAllocator anyhow, at least in debug mode. Therefore, you'd probably want every RegionAllocator to have a pointer to the previous RegionAllocator. Then you could have a thread-local RegionAllocator.lastCreated. You could then allocate from RegionAllocator.lastCreated if you really, really wanted to. 2. I have a few data structures that I may clean up and submit as a proposal later (hash table, hash set, AVL tree) whose implementations are specifically optimized for TempAlloc. For example, the hash table is provisionally called StackHash. I'd really rather write: auto table = StackHash!(uint, double)(10); table[666] = 8675309; rather than: auto table = StackHash!(uint, double)(10); table.addElement(666, 8675309, someLongVerboseAllocator); Couldn't StackHash's constructor accept an allocator as an argument? Good idea, actually.
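As a sketch of the constructor-injection idea being agreed on here (all names hypothetical — neither StackHash's internals nor RegionAllocator's final API are shown in the thread):

```d
// Hypothetical sketch: pass the allocator once, at construction,
// so insertion keeps the clean table[key] = value syntax.
struct StackHash(K, V, Allocator)
{
    private Allocator* alloc; // borrowed; must outlive the table

    this(size_t nBuckets, Allocator* a)
    {
        alloc = a;
        // ... allocate the bucket array via alloc.allocate(...) ...
    }

    void opIndexAssign(V value, K key)
    {
        // ... allocate a node via alloc.allocate(...), then insert ...
    }
}

// Usage (hypothetical):
// auto region = RegionAllocator();
// auto table = StackHash!(uint, double, RegionAllocator)(10, &region);
// table[666] = 8675309;
```

This keeps the verbose allocator argument at exactly one call site, which seems to be what both sides wanted.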
Re: what to do with postblit on the heap?
On 2011-06-20 16:07, Steven Schveighoffer wrote: > On Mon, 20 Jun 2011 18:43:30 -0400, Jonathan M Davis > > wrote: > > On 2011-06-20 15:12, Steven Schveighoffer wrote: > >> BTW, I now feel that your request to make a distinction between move and > >> copy is not required. The compiler currently calls the destructor of > >> temporaries, so it should also call postblit. I don't think it can make > >> the distinction between array appending and simply calling some other > >> function. > > > > If an object is moved, neither the postblit nor the destructor should be > > called. The object is moved, not copied and destroyed. I believe that > > TDPL is > > very specific on that. > > Well, I think in this case it is being copied. It's put on the stack, and > then copied to the heap inside the runtime function. The runtime could be > passed a flag indicating the append is really a move, but I'm not sure > it's a good choice. To me, not calling the postblit and dtor on a moved > struct is an optimization, no? And you can't re-implement these semantics > for a normal function. The one case I can think of is when an rvalue is > allowed to be passed by reference (which is exactly what's happening here). Well, going from the stack to the heap probably is a copy. But moves shouldn't be calling the postblit or the destructor, and you seemed to be saying that they should. The main place that a move would occur that I can think would be when returning a value from a function, which is very different. And I don't think that avoiding the postblit is necessarily just an optimization. If the postblit really is skipped, then it's probably possible to return an object which cannot legally be copied (presumably due to some combination of reference or pointer member variables and const or immutable), though that wouldn't exactly be a typical situation, even if it actually is possible. 
It _is_ primarily an optimization to move rather than copy and destroy, but I'm not sure that it's _just_ an optimization. > Is there anything a postblit is allowed to do that would break a struct if > you disabled the postblit in this case? I'm pretty sure internal pointers > are not supported, especially if move semantics do not call the postblit. If the struct had a pointer to a local member variable which the postblit would have deep-copied, then sure, not calling the postblit would screw with the struct. But that would screw with a struct which was returned from a function as well, and that's the prime place for the move semantics. That sort of struct is just plain badly designed, so I don't think that it's really something to worry about. I can't think of any other cases where it would be a problem though. Structs don't usually care where they live (aside from the issue of structs being designed to live on the stack and then not getting their destructor called because they're on the heap). - Jonathan M Davis
Re: what to do with postblit on the heap?
On 2011-06-20 18:12:11 -0400, "Steven Schveighoffer" said: On Mon, 20 Jun 2011 16:45:44 -0400, Michel Fortin wrote: My feeling is that array appending and array assignment should be considered a compiler issue first and foremost. The compiler needs to be fixed, and once that's done the runtime will need to be updated anyway to match the changes in the compiler. Your proposed fix for array assignment is a good start for when the compiler will provide the necessary info to the runtime, but applying it at this time will just fix some cases by breaking a few others: net improvement zero. BTW, I now feel that your request to make a distinction between move and copy is not required. The compiler currently calls the destructor of temporaries, so it should also call postblit. I don't think it can make the distinction between array appending and simply calling some other function. Well, if a ~= S(); does result in a temporary which gets copied and then destroyed, why have move semantics at all? Move semantics are not just an optimization, they actually change the semantics. If you have a struct with a @disabled postblit, should it still be appendable? If the issue of array assignment is fixed, do you think it's worth putting the change in, and then filing a bug against the GC? I still think the current cases that "work" are fundamentally broken anyways. That depends. I'm not too sure currently whether the S destructor is called for this code: a ~= S(); If the compiler currently calls the destructor on the temporary S struct, then your patch is actually a fix because it balances constructors and destructors correctly for the appending part (the bug is then that the compiler should use move semantics but is using copy instead). If it doesn't call the destructor then your patch does introduce a bug for this case. All in all, I don't think it's important enough to justify we waste hours debating in what order we should fix those bugs. Do what you think is right. 
If it becomes a problem or it introduces a bug here or there, we'll adjust; at worst that means a revert of your commit. As for the issue that destructors aren't called for arrays on the heap, it's a serious problem. But it's also a separate problem that concerns purely the runtime, as far as I am aware. Is there someone working on it? I think we need precise scanning to get a complete solution. Another option is to increase the information the array runtime stores in the memory block (currently it only stores the "used" length) and then hook the GC to call the dtors. This might be a quick fix that doesn't require precise scanning, but it also fixes the most common case of allocating a single struct or an array of structs on the heap. The GC calling the destructor doesn't require precise scanning. Although it's true that both problems require adding type information to memory blocks, beyond that requirement they're both independent. It'd be really nice if struct destructors were called correctly. -- Michel Fortin michel.for...@michelf.com http://michelf.com/
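The uncertainty above — whether appending a temporary balances postblits and destructors — can be checked directly. A small D sketch (not from the thread) that makes the counts observable on a given compiler:

```d
import std.stdio;

struct S
{
    static int postblits; // incremented on every copy
    static int dtors;     // incremented on every destruction

    this(this) { ++postblits; }
    ~this()    { ++dtors; }
}

void main()
{
    S[] a;
    a ~= S(); // temporary: moved, or copied and then destroyed?

    // If the temporary is truly moved into the array, the append
    // contributes nothing to either count (the element's dtor has
    // not run yet). If it is copied and the temporary destroyed,
    // the append alone contributes one postblit and one dtor call.
    writefln("postblits=%s, dtors=%s", S.postblits, S.dtors);
}
```

Note that heap-allocated array elements may never have their destructors run at all (the separate runtime problem discussed above), which is exactly why the counts are worth printing rather than assuming.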
Re: TempAlloc: an unusual request
On 6/20/2011 11:04 AM, Andrei Alexandrescu wrote: On 6/20/11 10:02 AM, dsimcha wrote: == Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article On 6/19/11 6:20 PM, dsimcha wrote: My other concern is that giving RegionAllocator reference semantics would, IIUC, require an allocation to allocate a RegionAllocator. Since TempAlloc is designed to avoid global GC locks/world stopping like the plague, this is obviously bad. I am hoping we can arrange things such that a RegionAllocator created from scratch initializes a new frame, whereas subsequent copies of it use that same frame. Would that work? Andrei No. I don't want every creation of a new frame to require a GC heap allocation. I don't understand why such would be necessary. Andrei Maybe I'm missing something, but how else would a RegionAllocator have reference semantics?
Re: Should protection attributes disambiguate?
On 2011-06-20 17:36, Andrej Mitrovic wrote: > On 6/21/11, Jonathan M Davis wrote: > > That's not necessarily a bug > > Maybe the naming of the issue is wrong, but it has to be a bug: > http://d.puremagic.com/issues/show_bug.cgi?id=6180 > > If private symbols are not usable, then creating an object of a > private class type is a pretty major bug, methinks. Oh, yes. That's a bug. But having two symbols which clash where one is private and the other public isn't a bug. It may be a design decision which merits revisiting, but as I understand it, it's a natural consequence of the fact that access modifiers modify access, not visibility. - Jonathan M Davis
Re: Curl wrapper round two
On Mon, Jun 20, 2011 at 8:33 PM, jdrewsen wrote: > Den 18-06-2011 22:36, jdrewsen skrev: >> >> Hi, >> >> I've finally got through all the very constructive comments from the >> last review of the curl wrapper and performed the needed changes. >> >> Here is the github branch: >> https://github.com/jcd/phobos/tree/curl-wrapper >> >> And the generated docs: >> http://freeze.steamwinter.com/D/web/phobos/etc_curl.html > > I've made the changes as suggested from your comments and pushed to the > github branch above. > > Changes: > > * Change and delete individual headers when using static convenience methods > * Make keep-alive work when using static convenience methods > * Add as extra modifiable parameters on follow requests (keep-alive): > headers, method, url, postData > * Add verbose property to Protocol > * No dummy bool in constructors > > Comments are welcome > > /Jonas > Hi Jonas, Was reading your implementation but I had to context switch. Only got to line 145 :(. I see that you are refcounting by sharing a uint* but what about all the other private fields? What happens if you pass the Curl object around functions and those values are modified? Thanks, -Jose
Re: NG traffic overloaded?
"Andrej Mitrovic" wrote in message news:mailman.1062.1308616752.14074.digitalmar...@puremagic.com... > On 6/21/11, Nick Sabalausky wrote: >> maybe our resident alias-hopping troll's been doing more than just >> writing occasional posts? > > And I bet they wrote the DDOS script in D. Those bastages!
Re: NG traffic overloaded?
On 6/21/11, Nick Sabalausky wrote: > maybe our resident alias-hopping troll's been doing more than just > writing occasional posts? And I bet they wrote the DDOS script in D.
Re: Should protection attributes disambiguate?
On 6/21/11, Jonathan M Davis wrote: > That's not necessarily a bug Maybe the naming of the issue is wrong, but it has to be a bug: http://d.puremagic.com/issues/show_bug.cgi?id=6180 If private symbols are not usable, then creating an object of a private class type is a pretty major bug, methinks.
Re: Should protection attributes disambiguate?
"Jonathan M Davis" wrote in message news:mailman.1058.1308610718.14074.digitalmar...@puremagic.com... > On 2011-06-20 15:43, Nick Sabalausky wrote: >> "Nick Sabalausky" wrote in message >> news:itoiji$2hsk$1...@digitalmars.com... >> >> > "Peter Alexander" wrote in message >> > news:itog87$2ed9$1...@digitalmars.com... >> > >> >> I'm working on a fix to >> >> http://d.puremagic.com/issues/show_bug.cgi?id=6180 >> >> >> >> Essentially the problem boils down to: >> >> >> >> - Module A has a private symbol named x >> >> - Module B has a public symbol named x >> >> - Module C imports A and B and tries to use x unqualified >> >> >> >> Should the fact that B.x is public and A.x is private disambiguate the >> >> usage in module C to use B.x, or is that still an ambiguous >> >> unqualified >> >> usage of variable x that requires manual qualification/disambiguation? >> > >> > If something's private, it's supposed to be an "internal-only" sort of >> > thing. Private. Outside its own module, it shouldn't even be visible >> > and >> > its existence shouldn't have any effect. So I'd say unqualified use of >> > x >> > inside C should definitely be allowed and resolve to B.x. >> >> I'd add that IMO, to do otherwise would break encapsulation (or at least >> put a big ugly dent in it). > > Well, except that access modifiers are _access_ modifiers. They indicate > whether you can _use_ a particular symbol, not whether you can see it. So, > the > fact that a symbol is private has _zero_ effect on whether other modules > can > see it. > Technically, maybe, but there's not much point in seeing it if you can't access it. 
> That being said, the only situation that I can think of where it would > cause a > problem for private to be used as part of the disambiguation process > would be if you're trying to use a symbol which is private (not realizing > that > it's private) and end up using a symbol with the same name which you > actually > can access, and it manages to compile, and you end up using a symbol other > than the one that you intended. Now, I wouldn't expect that to be a big > problem - particularly since in most cases, the symbols likely wouldn't be > able to be used the same and compile - but it is at least theoretically a > concern. > Yea, certainly true. But you can have problems the other way too: A module introduces a new private class/function/variable solely for its own personal use, and it just happens to have the same name as a public member of some other module. So all the code that uses that public member ends up broken, just because the internal details of something else got changed. So like you, I think the benefits of not letting private stuff participate in overload resolution are well worth the risk of that rare scenario you describe. > Personally, I think that the gain in making it so that the compiler can > assume > that a private symbol doesn't enter into overload sets or whatnot would > far > outweigh that one, particular problem. >
Re: Curl wrapper round two
Den 18-06-2011 22:36, jdrewsen skrev: Hi, I've finally got through all the very constructive comments from the last review of the curl wrapper and performed the needed changes. Here is the github branch: https://github.com/jcd/phobos/tree/curl-wrapper And the generated docs: http://freeze.steamwinter.com/D/web/phobos/etc_curl.html I've made the changes as suggested from your comments and pushed to the github branch above. Changes: * Change and delete individual headers when using static convenience methods * Make keep-alive work when using static convenience methods * Add as extra modifiable parameters on follow requests (keep-alive): headers, method, url, postData * Add verbose property to Protocol * No dummy bool in constructors Comments are welcome /Jonas
Re: what to do with postblit on the heap?
On Mon, 20 Jun 2011 18:43:30 -0400, Jonathan M Davis wrote: On 2011-06-20 15:12, Steven Schveighoffer wrote: BTW, I now feel that your request to make a distinction between move and copy is not required. The compiler currently calls the destructor of temporaries, so it should also call postblit. I don't think it can make the distinction between array appending and simply calling some other function. If an object is moved, neither the postblit nor the destructor should be called. The object is moved, not copied and destroyed. I believe that TDPL is very specific on that. Well, I think in this case it is being copied. It's put on the stack, and then copied to the heap inside the runtime function. The runtime could be passed a flag indicating the append is really a move, but I'm not sure it's a good choice. To me, not calling the postblit and dtor on a moved struct is an optimization, no? And you can't re-implement these semantics for a normal function. The one case I can think of is when an rvalue is allowed to be passed by reference (which is exactly what's happening here). Is there anything a postblit is allowed to do that would break a struct if you disabled the postblit in this case? I'm pretty sure internal pointers are not supported, especially if move semantics do not call the postblit. -Steve
Re: Should protection attributes disambiguate?
On 2011-06-20 15:58, Andrej Mitrovic wrote: > Note that there is a bug with private symbols right now. If you have a > private class in module A, a public function with the same name in > module B and a module C that imports them both, the two names will > clash. I've reported this recently. That's not necessarily a bug, though we may want to change D's behavior in this regard. Making the symbols private makes it so that you can't access them, not so that you can't see them. What Peter is asking for is essentially that the compiler ignore private symbols when it disambiguates symbols, which would fix your bug. But I believe that the behavior that you're seeing is in fact what is currently supposed to happen. - Jonathan M Davis
Re: NG traffic overloaded?
"Steven Schveighoffer" wrote in message news:op.vxecghw0eav7ka@localhost.localdomain... > On Mon, 20 Jun 2011 17:36:22 -0400, Jesse Phillips > wrote: > >> Is it just me or have others not been able to access the NG. Most >> recently it had a message say the load was 22.24. If it is getting too >> much traffic, that is pretty cool, but might not be good to continue. > > I'm getting the following error from Opera regularly: > > 400 loadav [innwatch:hiload] 2912 gt 2000 > > Seems like it has something to do with the load. I wonder if it's a DOS > going on... > Ever since around the time TDPL came out I've had occasional times when I'll get that. Then it always comes back after not too long. Mildly annoying sometimes, but I figure I probably spend too much time here anyway ;) Not saying it is or isn't because of TDPL, but that was around when D got a surge of increased interest ('course, that surge of interest probably had a lot to do with TDPL, though). Of course, you could be right about the DOS, too - maybe our resident alias-hopping troll's been doing more than just writing occasional posts?
Re: Should protection attributes disambiguate?
On 2011-06-20 15:43, Nick Sabalausky wrote: > "Nick Sabalausky" wrote in message > news:itoiji$2hsk$1...@digitalmars.com... > > > "Peter Alexander" wrote in message > > news:itog87$2ed9$1...@digitalmars.com... > > > >> I'm working on a fix to > >> http://d.puremagic.com/issues/show_bug.cgi?id=6180 > >> > >> Essentially the problem boils down to: > >> > >> - Module A has a private symbol named x > >> - Module B has a public symbol named x > >> - Module C imports A and B and tries to use x unqualified > >> > >> Should the fact that B.x is public and A.x is private disambiguate the > >> usage in module C to use B.x, or is that still an ambiguous unqualified > >> usage of variable x that requires manual qualification/disambiguation? > > > > If something's private, it's supposed to be an "internal-only" sort of > > thing. Private. Outside its own module, it shouldn't even be visible and > > its existence shouldn't have any effect. So I'd say unqualified use of x > > inside C should definitely be allowed and resolve to B.x. > > I'd add that IMO, to do otherwise would break encapsulation (or at least > put a big ugly dent in it). Well, except that access modifiers are _access_ modifiers. They indicate whether you can _use_ a particular symbol, not whether you can see it. So, the fact that a symbol is private has _zero_ effect on whether other modules can see it. That being said, the only situation that I can think of where it would cause a problem for private to be used as part of the disambiguation process would be if you're trying to use a symbol which is private (not realizing that it's private) and end up using a symbol with the same name which you actually can access, and it manages to compile, and you end up using a symbol other than the one that you intended. Now, I wouldn't expect that to be a big problem - particularly since in most cases, the symbols likely wouldn't be able to be used the same and compile - but it is at least theoretically a concern. 
Personally, I think that the gain in making it so that the compiler can assume that a private symbol doesn't enter into overload sets or whatnot would far outweigh that one, particular problem. - Jonathan M Davis
Re: Should protection attributes disambiguate?
Note that there is a bug with private symbols right now. If you have a private class in module A, a public function with the same name in module B and a module C that imports them both, the two names will clash. I've reported this recently.
Re: imports in functions
On 6/21/11, Nick Sabalausky wrote: > UFCS barely works at all in D2. Another example: Couple that with the classically buggy 'with' statement and you've got yourself a party! \o/
Re: NG traffic overloaded?
On Mon, 20 Jun 2011 17:36:22 -0400, Jesse Phillips wrote: Is it just me or have others not been able to access the NG. Most recently it had a message say the load was 22.24. If it is getting too much traffic, that is pretty cool, but might not be good to continue. I'm getting the following error from Opera regularly: 400 loadav [innwatch:hiload] 2912 gt 2000 Seems like it has something to do with the load. I wonder if it's a DOS going on... -Steve
Re: DIP 11: trial partial implementation
"Jacob Carlborg" wrote in message news:itoeia$2bs3$1...@digitalmars.com... > On 2011-06-20 21:32, Nick Sabalausky wrote: >> "Jacob Carlborg" wrote in message >> news:itljh6$d4l$1...@digitalmars.com... >>> On 2011-06-19 20:31, Nick Sabalausky wrote: "Jacob Carlborg" wrote in message news:itkp2l$1ru0$1...@digitalmars.com... > On 2011-06-19 02:10, Adam D. Ruppe wrote: >> http://arsdnet.net/dcode/build2.d >> >> * Be fast. It loops dmd like my old build.d. (I can't find a better >> way to do it. Even rdmd always runs dmd at least twice - check >> its source!) > > That shouldn't be necessary. > > First run: > > * Run the compiler once with the -deps flag to collect the > dependencies > * Run the compiler again to compile everything > * Cache dependencies > > Later runs: > > * Run the compiler once with the -deps flag and compile everything Using the -deps flag to *just* get the deps is very fast. Much faster than a full compile. >>> >>> I understand that that would be faster when the dependencies have >>> changed >>> but if they haven't then you just have to run the compiler once. Don't >>> know what would be best to do though. >>> >>> BTW, to just get the dependencies, would that be with the -deps and -c >>> flags? Is there a better way? I mean if you just specify the -deps flag it >>> will do a full compilation. Seems to me that skipping linking (-c flag) >>> is >>> a little too much as well for what's actually necessary. Would be good >>> to >>> have a flag that does only what's absolutely necessary for tracking >>> dependencies. >>> >> >> What I meant was that doing a deps-only run is fast enough that doing it >> every time shouldn't be a problem. >> >> However, I am starting to wonder if RDMD's functionality should be built >> into >> DMD (ideally in a way that LDC/GDC wouldn't have to re-implement it >> themselves). DDMD does take about a minute or so to compile, and while >> the >> deps-only run is the faster part, it's not insignificant. 
But then, maybe >> that's just due to some fixable inefficiency in DMD? There's very little >> templates/ctfe involved. > > I guess one could add the "-o-" (do not write object file) flag as well. > Don't know how much that would help though. I guess DMD does need to do > most of the process it normally does on the files due to static if, mixins > and other meta programming features. > It would have to do most/all of the front-end work, but I wouldn't think it should have to do any of the backend work or any optimizations (not sure where those lie, a little in each?).
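For reference, the two-pass scheme discussed above (a deps-only run first, then a full compile) can be sketched in D along these lines; the file names and the caching step are illustrative, not from any actual tool:

```d
// Sketch of a two-pass build driver: pass 1 collects dependencies
// cheaply, pass 2 does the real compile. Assumes dmd is on PATH.
import std.process : execute;
import std.stdio : writeln;

void main()
{
    // Pass 1: -deps writes the dependency list; -o- skips writing
    // the object file, so this run is front-end work only.
    auto deps = execute(["dmd", "-deps=deps.txt", "-o-", "main.d"]);
    if (deps.status != 0) { writeln(deps.output); return; }

    // ...here one would parse deps.txt and compare it against a
    // cached copy to decide whether a rebuild is needed at all...

    // Pass 2: full build, passing along the discovered modules.
    auto build = execute(["dmd", "main.d" /*, ...dependencies... */]);
    writeln(build.status == 0 ? "built" : build.output);
}
```

As the thread notes, pass 1 still has to do most of the front-end work (static if, mixins, etc.), so it is faster than a full build but not free.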
Re: imports in functions
"Andrej Mitrovic" wrote in message news:mailman.1030.1308543325.14074.digitalmar...@puremagic.com... >I think I've just found the first bug: > > module test; > void main() > { > } > > void foo() > { >import std.utf; >"bla".toUTF16z; > } > > Error: undefined identifier module test.toUTF16z > > UFCS doesn't work with function imports. This is a bug, right? UFCS barely works at all in D2. Another example: Regression(2.020) Array member call syntax can't find matches in current class http://d.puremagic.com/issues/show_bug.cgi?id=4525
Re: Should protection attributes disambiguate?
"Peter Alexander" wrote in message news:itog87$2ed9$1...@digitalmars.com... > I'm working on a fix to http://d.puremagic.com/issues/show_bug.cgi?id=6180 > > Essentially the problem boils down to: > > - Module A has a private symbol named x > - Module B has a public symbol named x > - Module C imports A and B and tries to use x unqualified > > Should the fact that B.x is public and A.x is private disambiguate the > usage in module C to use B.x, or is that still an ambiguous unqualified > usage of variable x that requires manual qualification/disambiguation? If something's private, it's supposed to be an "internal-only" sort of thing. Private. Outside its own module, it shouldn't even be visibile and it's existence shouldn't have any effect. So I'd say unqualfied use of x inside C should definitely be allowed and resolve to B.x.
Re: Should protection attributes disambiguate?
"Nick Sabalausky" wrote in message news:itoiji$2hsk$1...@digitalmars.com... > "Peter Alexander" wrote in message > news:itog87$2ed9$1...@digitalmars.com... >> I'm working on a fix to >> http://d.puremagic.com/issues/show_bug.cgi?id=6180 >> >> Essentially the problem boils down to: >> >> - Module A has a private symbol named x >> - Module B has a public symbol named x >> - Module C imports A and B and tries to use x unqualified >> >> Should the fact that B.x is public and A.x is private disambiguate the >> usage in module C to use B.x, or is that still an ambiguous unqualified >> usage of variable x that requires manual qualification/disambiguation? > > If something's private, it's supposed to be an "internal-only" sort of > thing. Private. Outside its own module, it shouldn't even be visibile and > it's existence shouldn't have any effect. So I'd say unqualfied use of x > inside C should definitely be allowed and resolve to B.x. > I'd add that IMO, to do otherwise would break encapsulation (or at least put a big ugly dent in it).
Re: what to do with postblit on the heap?
On 2011-06-20 15:12, Steven Schveighoffer wrote: > On Mon, 20 Jun 2011 16:45:44 -0400, Michel Fortin > > wrote: > > On 2011-06-20 10:34:14 -0400, "Steven Schveighoffer" > > > > said: > >> I have submitted a fix for bug 5272, > >> http://d.puremagic.com/issues/show_bug.cgi?id=5272 "Postblit not called > >> on copying due to array append" > >> > >> However, I am starting to realize that one of the major reasons for > >> > >> postblit is to match it with an equivalent dtor. > >> > >> This works well when the struct is on the stack -- the posblit for > >> > >> instance increments a reference counter, then the dtor decrements the > >> ref counter. > >> > >> But when the data is on the heap, the destructor is *not* called. So > >> > >> what happens to any ref-counted data that is on the heap? It's never > >> decremented. Currently though, it might still work, because postblit > >> isn't called when the data is on the heap! So no increment, no > >> decrement. > >> > >> I think this is an artificial "success". However, if the pull request > >> > >> I initiated is accepted, then postblit *will* be called on heap > >> allocation, for instance if you append data. This will further > >> highlight the fact that the destructor is not being called. > >> > >> So is it worth adding calls to postblit, knowing that the complement > >> > >> destructor is not going to be called? I can see in some cases where it > >> > >> would be expected, and I can see other cases where it will be > >> > >> difficult to deal with. IMO, the difficult cases are already broken > >> anyways, but it just seems like they are not. > >> > >> The other part of this puzzle that is missing is array assignment, > >> > >> for example a[] = b[] does not call postblits. I cannot fix this > >> because _d_arraycopy does not give me the typeinfo. > >> > >> Anyone else have any thoughts? I'm mixed as to whether this patch > >> > >> should be accepted without more comprehensive GC/compiler reform. 
I > >> feel its a step in the right direction, but that it will upset the > >> balance in a few places (particularly ref-counting). > > > > My feeling is that array appending and array assignment should be > > considered a compiler issue first and foremost. The compiler needs to be > > fixed, and once that's done the runtime will need to be updated anyway > > to match the changes in the compiler. Your proposed fix for array > > assignment is a good start for when the compiler will provide the > > necessary info to the runtime, but applying it at this time will just > > fix some cases by breaking a few others: net improvement zero. > > BTW, I now feel that your request to make a distinction between move and > copy is not required. The compiler currently calls the destructor of > temporaries, so it should also call postblit. I don't think it can make > the distinction between array appending and simply calling some other > function. If an object is moved, neither the postblit nor the destructor should be called. The object is moved, not copied and destroyed. I believe that TDPL is very specific on that. - Jonathan M Davis
Re: DIP11: Automatic downloading of libraries
On 6/20/11 4:28 PM, Jacob Carlborg wrote: See my reply to Dmitry. I see this as a dogfood issue. If there are things that should be in Phobos and aren't, it would gain everybody to add them to Phobos. Anyhow, it all depends on what you want to do with the tool. If it's written in D1, we won't be able to put it on the github D-programming-language/tools (which doesn't mean it won't become widespread). BTW has std.benchmark gone through the regular review process? I was sure someone will ask that at some point :o). The planned change was to add a couple of functions, but then it got separated into its own module. If several people think it's worth putting std.benchmark through the review queue, let's do so. I'm sure the quality of the module will be gained. Andrei
Re: DIP11: Automatic downloading of libraries
On 21.06.2011 1:36, Jacob Carlborg wrote: On 2011-06-20 22:45, Dmitry Olshansky wrote: On 20.06.2011 23:39, Nick Sabalausky wrote: "Dmitry Olshansky" wrote in message news:itn2el$2t2v$1...@digitalmars.com... On 20.06.2011 12:25, Jacob Carlborg wrote: On 2011-06-19 22:28, Dmitry Olshansky wrote: Why having name as run-time parameter? I'd expect more like (given there is Target struct or class): //somewhere at top Target cool_lib, ...; then: with(cool_lib) { flags = "-L-lz"; } I'd even expect special types like Executable, Library and so on. The user shouldn't have to create the necessary object. If it does, how would the tool get it then? If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of separate module, though errors in script would be harder to report (but at least static constructors would be controlled!). More adequatly would be, of course, to pump it to dmd from stdin... Target would be part of Orb. Why not just make Target's ctor register itself with the rest of Orb? Nice thinking, but default constructors for structs? Of course, it could be a class... Then probably there could be usefull derived things like these Executable, Library, etc. I really don't like that the users needs to create the targets. The good thing about Ruby is that the user can just call a function and pass a block to the function. Then the tool can evaluate the block in the context of an instance. The user would never have to care about instances. I'm not getting what's wrong with it. 
Your magical block is still getting some _name_ as a string, right? I suspect it's even an advantage if you can't pass arbitrary strings to a block, only proper instances; e.g. it's harder to mistype a name thanks to type checking. What's so good about having to type all these names over and over again without keeping track of how many you inadvertently referenced? Taking your example, what if I typed name2 instead of name here, what would the tool's actions be: target "name" do |t| t.flags = "-L-lz" end Create a new target and set its flags? I can't see reasonable error checking to disambiguate it at all. More than that, now I'm not sure what it was supposed to do in the first place - update the flags of an existing Target instance with name "name"? Right now I think it would be much better to initialize them in the first place. IMHO every time I create a build script I usually care about the number of targets and their names. P.S. Also about D as a config language: take into account version statements, here they make a lot of sense. -- Dmitry Olshansky
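For comparison, here is the mixin(import(...)) scheme quoted earlier in the thread, cleaned up into a compilable sketch. The member check via __traits(getMember, ...) is my assumption about how the pseudo-code's `typeof(member) == Target` test was meant to be completed (allMembers yields strings, not symbols):

```d
// Sketch: evaluate an orbspec as ordinary D declarations, then walk
// the module's members looking for Target instances.
module orb_orange;

struct Target { string name; string flags; }

// The orbspec file is plain D, mixed in at compile time.
// Requires compiling with -J<dir> so import() can find the file.
mixin(import("orange.orbspec"));

void main()
{
    foreach (memberName; __traits(allMembers, orb_orange))
    {
        static if (is(typeof(__traits(getMember, orb_orange, memberName)) == Target))
        {
            // collect this target into a worklist, sort out
            // priority, and so on -- all the work goes here
        }
    }
}
```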
Re: what to do with postblit on the heap?
On Mon, Jun 20, 2011 at 7:12 PM, Steven Schveighoffer wrote: > On Mon, 20 Jun 2011 16:45:44 -0400, Michel Fortin > wrote: >> On 2011-06-20 10:34:14 -0400, "Steven Schveighoffer" >> said: >> >>> I have submitted a fix for bug 5272, >>>  http://d.puremagic.com/issues/show_bug.cgi?id=5272 "Postblit not called on >>>  copying due to array append" >>>  However, I am starting to realize that one of the major reasons for >>>  postblit is to match it with an equivalent dtor. >>>  This works well when the struct is on the stack -- the posblit for >>>  instance increments a reference counter, then the dtor decrements the ref >>>  counter. >>>  But when the data is on the heap, the destructor is *not* called.  So >>> what  happens to any ref-counted data that is on the heap?  It's never >>>  decremented.  Currently though, it might still work, because postblit >>>  isn't called when the data is on the heap!  So no increment, no decrement. >>>  I think this is an artificial "success".  However, if the pull request I >>>  initiated is accepted, then postblit *will* be called on heap allocation, >>>  for instance if you append data.  This will further highlight the fact >>>  that the destructor is not being called. >>>  So is it worth adding calls to postblit, knowing that the complement >>>  destructor is not going to be called?  I can see in some cases where it >>>  would be expected, and I can see other cases where it will be difficult to >>>  deal with.  IMO, the difficult cases are already broken anyways, but it >>>  just seems like they are not. >>>  The other part of this puzzle that is missing is array assignment, for >>>  example a[] = b[] does not call postblits.  I cannot fix this because >>>  _d_arraycopy does not give me the typeinfo. >>>  Anyone else have any thoughts?  I'm mixed as to whether this patch >>> should  be accepted without more comprehensive GC/compiler reform.  
I feel >>> its a  step in the right direction, but that it will upset the balance in a >>> few  places (particularly ref-counting). >> >> My feeling is that array appending and array assignment should be >> considered a compiler issue first and foremost. The compiler needs to be >> fixed, and once that's done the runtime will need to be updated anyway to >> match the changes in the compiler. Your proposed fix for array assignment is >> a good start for when the compiler will provide the necessary info to the >> runtime, but applying it at this time will just fix some cases by breaking a >> few others: net improvement zero. > > BTW, I now feel that your request to make a distinction between move and > copy is not required.  The compiler currently calls the destructor of > temporaries, so it should also call postblit.  I don't think it can make the > distinction between array appending and simply calling some other function. > > If the issue of array assignment is fixed, do you think it's worth putting > the change in, and then filing a bug against the GC?  I still think the > current cases that "work" are fundamentally broken anyways. > > For instance, in a ref-counted struct, if you appended it to an array, then > removed all the stack-based references, the ref count goes to zero, even > though the array still has a reference (I think someone filed a bug against > std.stdio.File for this). > >> As for the issue that destructors aren't called for arrays on the heap, >> it's a serious problem. But it's also a separate problem that concerns >> purely the runtime, as far as I am aware of. Is there someone working on it? > > I think we need precise scanning to get a complete solution.  Another option > is to increase the information the array runtime stores in the memory block > (currently it only stores the "used" length) and then hook the GC to call > the dtors.  
This might be a quick fix that doesn't require precise scanning, > but it also fixes the most common case of allocating a single struct or an > array of structs on the heap. > > -Steve > Also, I don't think this problem is specific to array. I think that AAs are also not calling postblit and dtor. In one of my project I have an AA to what essentially is a RefCounted and it doesn't increase the ref count.
Re: what to do with postblit on the heap?
On Mon, 20 Jun 2011 16:45:44 -0400, Michel Fortin wrote: On 2011-06-20 10:34:14 -0400, "Steven Schveighoffer" said: I have submitted a fix for bug 5272, http://d.puremagic.com/issues/show_bug.cgi?id=5272 "Postblit not called on copying due to array append" However, I am starting to realize that one of the major reasons for postblit is to match it with an equivalent dtor. This works well when the struct is on the stack -- the postblit for instance increments a reference counter, then the dtor decrements the ref counter. But when the data is on the heap, the destructor is *not* called. So what happens to any ref-counted data that is on the heap? It's never decremented. Currently though, it might still work, because postblit isn't called when the data is on the heap! So no increment, no decrement. I think this is an artificial "success". However, if the pull request I initiated is accepted, then postblit *will* be called on heap allocation, for instance if you append data. This will further highlight the fact that the destructor is not being called. So is it worth adding calls to postblit, knowing that the complementary destructor is not going to be called? I can see some cases where it would be expected, and I can see other cases where it will be difficult to deal with. IMO, the difficult cases are already broken anyways, but it just seems like they are not. The other part of this puzzle that is missing is array assignment, for example a[] = b[] does not call postblits. I cannot fix this because _d_arraycopy does not give me the typeinfo. Anyone else have any thoughts? I'm mixed as to whether this patch should be accepted without more comprehensive GC/compiler reform. I feel it's a step in the right direction, but that it will upset the balance in a few places (particularly ref-counting). My feeling is that array appending and array assignment should be considered a compiler issue first and foremost. 
The compiler needs to be fixed, and once that's done the runtime will need to be updated anyway to match the changes in the compiler. Your proposed fix for array assignment is a good start for when the compiler will provide the necessary info to the runtime, but applying it at this time will just fix some cases by breaking a few others: net improvement zero. BTW, I now feel that your request to make a distinction between move and copy is not required. The compiler currently calls the destructor of temporaries, so it should also call postblit. I don't think it can make the distinction between array appending and simply calling some other function. If the issue of array assignment is fixed, do you think it's worth putting the change in, and then filing a bug against the GC? I still think the current cases that "work" are fundamentally broken anyways. For instance, in a ref-counted struct, if you appended it to an array, then removed all the stack-based references, the ref count goes to zero, even though the array still has a reference (I think someone filed a bug against std.stdio.File for this). As for the issue that destructors aren't called for arrays on the heap, it's a serious problem. But it's also a separate problem that concerns purely the runtime, as far as I am aware of. Is there someone working on it? I think we need precise scanning to get a complete solution. Another option is to increase the information the array runtime stores in the memory block (currently it only stores the "used" length) and then hook the GC to call the dtors. This might be a quick fix that doesn't require precise scanning, but it also fixes the most common case of allocating a single struct or an array of structs on the heap. -Steve
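The ref-counting hazard this thread keeps circling can be made concrete with a small sketch (names illustrative, not from the proposed patch):

```d
// A minimal ref-counted struct: postblit bumps the count on copy,
// the destructor drops it. The dummy int parameter works around
// structs not having default constructors.
struct RC
{
    int* count;
    this(int) { count = new int; *count = 1; }
    this(this) { if (count) ++*count; }  // postblit: copy made
    ~this()    { if (count) --*count; }  // dtor: copy destroyed
}

void main()
{
    RC r = RC(0);
    RC[] arr;
    arr ~= r;  // the append copies r into GC-managed memory.
               // If postblit is skipped here, two copies share a
               // count of 1; if it runs but the heap copy's dtor
               // never fires, the count never returns to zero.
               // Either way the stack/heap pairing is broken.
}
```

This is exactly the asymmetry being debated: stack copies get a matched postblit/dtor pair, heap copies (appends, a[] = b[], AAs) may get neither or only one.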
Should protection attributes disambiguate?
I'm working on a fix to http://d.puremagic.com/issues/show_bug.cgi?id=6180 Essentially the problem boils down to: - Module A has a private symbol named x - Module B has a public symbol named x - Module C imports A and B and tries to use x unqualified Should the fact that B.x is public and A.x is private disambiguate the usage in module C to use B.x, or is that still an ambiguous unqualified usage of variable x that requires manual qualification/disambiguation?
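Spelled out as code (module and symbol names are just the A/B/C of the description above; three files shown in one sketch):

```d
// --- a.d ---
module a;
private int x;     // internal to module a only

// --- b.d ---
module b;
int x = 42;        // public

// --- c.d ---
module c;
import a, b;

void f()
{
    auto y = x;    // the question: should this resolve to b.x,
                   // the only *accessible* x, or is it still an
                   // ambiguity error requiring b.x explicitly?
}
```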
Re: DIP11: Automatic downloading of libraries
On 2011-06-20 22:45, Dmitry Olshansky wrote: On 20.06.2011 23:39, Nick Sabalausky wrote: "Dmitry Olshansky" wrote in message news:itn2el$2t2v$1...@digitalmars.com... On 20.06.2011 12:25, Jacob Carlborg wrote: On 2011-06-19 22:28, Dmitry Olshansky wrote: Why having name as run-time parameter? I'd expect more like (given there is Target struct or class): //somewhere at top Target cool_lib, ...; then: with(cool_lib) { flags = "-L-lz"; } I'd even expect special types like Executable, Library and so on. The user shouldn't have to create the necessary object. If it does, how would the tool get it then? If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of separate module, though errors in script would be harder to report (but at least static constructors would be controlled!). More adequatly would be, of course, to pump it to dmd from stdin... Target would be part of Orb. Why not just make Target's ctor register itself with the rest of Orb? Nice thinking, but default constructors for structs? Of course, it could be a class... Then probably there could be usefull derived things like these Executable, Library, etc. I really don't like that the users needs to create the targets. The good thing about Ruby is that the user can just call a function and pass a block to the function. Then the tool can evaluate the block in the context of an instance. The user would never have to care about instances. -- /Jacob Carlborg
NG traffic overloadad?
Is it just me, or have others not been able to access the NG? Most recently it had a message saying the load was 22.24. If it is getting too much traffic, that is pretty cool, but it might not be good if it continues.
Re: DIP 11: trial partial implementation
On 2011-06-20 21:32, Nick Sabalausky wrote: "Jacob Carlborg" wrote in message news:itljh6$d4l$1...@digitalmars.com... On 2011-06-19 20:31, Nick Sabalausky wrote: "Jacob Carlborg" wrote in message news:itkp2l$1ru0$1...@digitalmars.com... On 2011-06-19 02:10, Adam D. Ruppe wrote: http://arsdnet.net/dcode/build2.d * Be fast. It loops dmd like my old build.d. (I can't find a better way to do it. Even rdmd always runs dmd at least twice - check its source!) That shouldn't be necessary. First run: * Run the compiler once with the -deps flag to collect the dependencies * Run the compiler again to compile everything * Cache dependencies Later runs: * Run the compiler once with the -deps flag and compile everything Using the -deps flag to *just* get the deps is very fast. Much faster than a full compile. I understand that that would be faster when the dependencies have changed, but if they haven't then you just have to run the compiler once. Don't know what would be best to do though. BTW, to just get the dependencies, would that be with the -deps and -c flags? Is there a better way? I mean if you just specify the -deps flag it will do a full compilation. Seems to me that skipping linking (-c flag) is a little too much as well for what's actually necessary. Would be good to have a flag that does only what's absolutely necessary for tracking dependencies. What I meant was that doing a deps-only run is fast enough that doing it every time shouldn't be a problem. However, I am starting to wonder if RDMD's functionality should be built into DMD (ideally in a way that LDC/GDC wouldn't have to re-implement it themselves). DDMD does take about a minute or so to compile, and while the deps-only run is the faster part, it's not insignificant. But then, maybe that's just due to some fixable inefficiency in DMD? There's very little templates/ctfe involved. I guess one could add the "-o-" (do not write object file) flag as well. Don't know how much that would help though. 
I guess DMD does need to do most of the process it normally does on the files due to static if, mixins and other meta programming features. -- /Jacob Carlborg
Re: DIP11: Automatic downloading of libraries
On 2011-06-20 15:28, Andrei Alexandrescu wrote: On 6/20/11 6:35 AM, Jacob Carlborg wrote: On 2011-06-20 10:59, Dmitry Olshansky wrote: On 20.06.2011 12:25, Jacob Carlborg wrote: On 2011-06-19 22:28, Dmitry Olshansky wrote: Why having name as run-time parameter? I'd expect more like (given there is Target struct or class): //somewhere at top Target cool_lib, ...; then: with(cool_lib) { flags = "-L-lz"; } I'd even expect special types like Executable, Library and so on. The user shouldn't have to create the necessary object. If it does, how would the tool get it then? If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of separate module, though errors in script would be harder to report (but at least static constructors would be controlled!). More adequatly would be, of course, to pump it to dmd from stdin... I had no idea that you could do that. It seems somewhat complicated and like a hack. Also note that Orbit is currently written in D1, which doesn't have __traits. std.benchmark (https://github.com/D-Programming-Language/phobos/pull/85) does that, too. Overall I believe porting Orbit to D2 and making it use D2 instead of Ruby in configuration would increase its chances to become popular and accepted in tools/. Andrei See my reply to Dmitry. BTW has std.benchmark gone through the regular review process? -- /Jacob Carlborg
Re: DIP11: Automatic downloading of libraries
On 2011-06-20 14:49, Dmitry Olshansky wrote: On 20.06.2011 16:35, Dmitry Olshansky wrote: On 20.06.2011 15:35, Jacob Carlborg wrote: On 2011-06-20 10:59, Dmitry Olshansky wrote: On 20.06.2011 12:25, Jacob Carlborg wrote: On 2011-06-19 22:28, Dmitry Olshansky wrote: Why having name as run-time parameter? I'd expect more like (given there is Target struct or class): //somewhere at top Target cool_lib, ...; then: with(cool_lib) { flags = "-L-lz"; } I'd even expect special types like Executable, Library and so on. The user shouldn't have to create the necessary object. If it does, how would the tool get it then? If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of separate module, though errors in script would be harder to report (but at least static constructors would be controlled!). More adequatly would be, of course, to pump it to dmd from stdin... I had no idea that you could do that. It seems somewhat complicated and like a hack. Also note that Orbit is currently written in D1, which doesn't have __traits. Well, everything about compile-time introspection could be labeled like a hack. In fact I just seen the aforementioned "hack" on a much grander scale being used in upcoming std module, see std.benchmarking: https://github.com/D-Programming-Language/phobos/pull/85/files#L1R577 And personally hacks should look ugly or they are just features or at best shortcuts ;) Personal things aside I still suggest you to switch it to D2. 
I can understand if Phobos is just not up to snuff for you yet (btw a cute curl wrapper is coming in a matter of days). But other than that... just look at all these candies (opDispatch, anyone?) :) And even if porting is a piece of work, I suspect there are a lot of people out there that would love to help this project. (given the lofty goal that config would be written in D, and not Ruby) Just looked through the source, it seems like you are doing a lot of work that's already been done in Phobos, so it might be worth doing a port to D2. Some simple wrappers might be needed, but ultimately: First I have to say that these simple modules are no reason to port to D2. Second, here are a couple of other reasons: * These modules (at least some of them) are quite old, pieces of some of them originate back from 2007 (before D2) * These modules also started out as a common API for Phobos and Tango functions * Some of these modules also contain specific functions and names for easing Java and C++ porting Overall I like the API of the modules, some functions are aliases for Tango/Phobos functions with names I like better and some are just wrappers with a new API. util.traits --> std.traits As far as I can see, most of these functions don't exist in std.traits. core.array --> std.array + std.algorithm When I work with arrays I want to work with arrays, not some other kind of type like a range. I do understand the theoretical idea about having containers and algorithms separated but in practice I've never needed it. io.path --> std.file & std.path Some of these exist in std.file and some don't. orb.util.OptionParser --> std.getopt This is a wrapper for the Tango argument parser, because I like this API better. util.singleton should probably be pulled into Phobos, but a thread-safe shared version. Yes, but it isn't in Phobos yet. -- /Jacob Carlborg
Re: what to do with postblit on the heap?
On 2011-06-20 10:34:14 -0400, "Steven Schveighoffer" said: I have submitted a fix for bug 5272, http://d.puremagic.com/issues/show_bug.cgi?id=5272 "Postblit not called on copying due to array append" However, I am starting to realize that one of the major reasons for postblit is to match it with an equivalent dtor. This works well when the struct is on the stack -- the postblit for instance increments a reference counter, then the dtor decrements the ref counter. But when the data is on the heap, the destructor is *not* called. So what happens to any ref-counted data that is on the heap? It's never decremented. Currently though, it might still work, because postblit isn't called when the data is on the heap! So no increment, no decrement. I think this is an artificial "success". However, if the pull request I initiated is accepted, then postblit *will* be called on heap allocation, for instance if you append data. This will further highlight the fact that the destructor is not being called. So is it worth adding calls to postblit, knowing that the complementary destructor is not going to be called? I can see some cases where it would be expected, and I can see other cases where it will be difficult to deal with. IMO, the difficult cases are already broken anyways, but it just seems like they are not. The other part of this puzzle that is missing is array assignment, for example a[] = b[] does not call postblits. I cannot fix this because _d_arraycopy does not give me the typeinfo. Anyone else have any thoughts? I'm mixed as to whether this patch should be accepted without more comprehensive GC/compiler reform. I feel it's a step in the right direction, but that it will upset the balance in a few places (particularly ref-counting). My feeling is that array appending and array assignment should be considered a compiler issue first and foremost. 
The compiler needs to be fixed, and once that's done the runtime will need to be updated anyway to match the changes in the compiler. Your proposed fix for array assignment is a good start for when the compiler will provide the necessary info to the runtime, but applying it at this time will just fix some cases by breaking a few others: net improvement zero. As for the issue that destructors aren't called for arrays on the heap, it's a serious problem. But it's also a separate problem that concerns purely the runtime, as far as I am aware of. Is there someone working on it? -- Michel Fortin michel.for...@michelf.com http://michelf.com/
Re: DIP11: Automatic downloading of libraries
On 20.06.2011 23:39, Nick Sabalausky wrote: "Dmitry Olshansky" wrote in message news:itn2el$2t2v$1...@digitalmars.com... On 20.06.2011 12:25, Jacob Carlborg wrote: On 2011-06-19 22:28, Dmitry Olshansky wrote: Why having name as run-time parameter? I'd expect more like (given there is Target struct or class): //somewhere at top Target cool_lib, ...; then: with(cool_lib) { flags = "-L-lz"; } I'd even expect special types like Executable, Library and so on. The user shouldn't have to create the necessary object. If it does, how would the tool get it then? If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of a separate module, though errors in the script would be harder to report (but at least static constructors would be controlled!). More adequately would be, of course, to pump it to dmd from stdin... Target would be part of Orb. Why not just make Target's ctor register itself with the rest of Orb? Nice thinking, but default constructors for structs? Of course, it could be a class... Then probably there could be useful derived things like Executable, Library, etc. -- Dmitry Olshansky
Re: Rename std.string.toStringz?
Btw, to! currently converts from char* to string so it is only fair that it should also convert from string to char*.
RAII implementation for Socket and Selector
Hi everyone,

For the past few days I have been working on a RAII implementation for Socket and Selector. Sockets are a ref-counted wrapper around the socket handle which closes the handle once the ref count goes to zero. It provides safe methods for bind, connect, listen, accept, recv, send and close. The module also provides helper methods for creating sockets for a TCP server, TCP client, UDP server and UDP client. The helper methods use the Address struct, which is basically a wrapper around getaddrinfo. As of right now the module provides support for IPv4 and IPv6.

On top of Socket we have Selector, which can be used to safely wait on more than one socket. To register on a selector, call the register method in Socket. Use Socket.unregister to unregister the socket. Sockets are automatically unregistered when closed. The current implementation for Selector only supports epoll (sorry Windows and BSD users) but I am highly confident that it can be ported to other platforms. I plan to do it at a future date but there are currently some serious issues with DMD and druntime that invalidate the strong/weak ref-counting design. It works well enough to pass the unittests but I had to do a lot of hacks which I hope I can remove once DMD and druntime are fixed. I should also mention that the design was influenced by Java's NIO and Ruby's socket implementation.

Here is the code: https://github.com/jsancio/phobos/blob/socket/std/net/socket.d. It doesn't have any documentation right now. I won't be able to work on it for the next couple of weeks but comments are welcome.

Thanks!
-Jose
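[Editor's note: a minimal sketch of the ref-counting idea described above — illustrative names only, not the API of the linked module. As the "what to do with postblit on the heap?" thread in this same digest points out, this pattern only holds up while such structs live on the stack.]

```d
// Illustrative only: a handle whose postblit/destructor pair keeps a
// shared reference count and "closes" the OS handle when the last
// copy is destroyed. A real socket would call close(fd) in the dtor.
struct Handle
{
    private int fd = -1;
    private uint* refs;

    this(int fd)
    {
        this.fd = fd;
        refs = new uint;
        *refs = 1;
    }

    this(this) { if (refs) ++*refs; }      // copy made: increment

    ~this()
    {
        if (refs && --*refs == 0)
        {
            // close(fd) would go here for a real socket
            fd = -1;
        }
    }
}
```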
Re: DIP11: Automatic downloading of libraries
On 2011-06-20 14:35, Dmitry Olshansky wrote:

On 20.06.2011 15:35, Jacob Carlborg wrote:

On 2011-06-20 10:59, Dmitry Olshansky wrote:

On 20.06.2011 12:25, Jacob Carlborg wrote:

On 2011-06-19 22:28, Dmitry Olshansky wrote:

Why having name as run-time parameter? I'd expect more like (given there is a Target struct or class):

//somewhere at top
Target cool_lib, ...;

then:

with(cool_lib) {
    flags = "-L-lz";
}

I'd even expect special types like Executable, Library and so on.

The user shouldn't have to create the necessary object. If it does, how would the tool get it then?

If we settle on effectively evaluating orbspec like this:

//first module
module orb_orange;
mixin(import ("orange.orbspec"));
//

// builder entry point
void main()
{
    foreach(member; __traits(allMembers, orb_orange))
    {
        static if(typeof(member) == Target){
            //do necessary actions, sort out priority and construct a worklist
        }
        else //static if (...) //...could be others I mentioned
        {
        }
    }
    //all the work goes there
}

Should be straightforward? Alternatively with local imports we can pack it in a struct instead of a separate module, though errors in the script would be harder to report (but at least static constructors would be controlled!). More adequately would be, of course, to pump it to dmd from stdin...

I had no idea that you could do that. It seems somewhat complicated and like a hack. Also note that Orbit is currently written in D1, which doesn't have __traits.

Well, everything about compile-time introspection could be labeled a hack. In fact I've just seen the aforementioned "hack" on a much grander scale being used in an upcoming std module, see std.benchmarking: https://github.com/D-Programming-Language/phobos/pull/85/files#L1R577 And personally I think hacks should look ugly, or they are just features or at best shortcuts ;)

I personally think that just because Phobos uses these features will not make them less "hackish".

Personal things aside, I still suggest you switch it to D2. I can understand if Phobos is just not up to snuff for you yet (btw a cute curl wrapper is coming in a matter of days). But other than that... just look at all these candies (opDispatch anyone?) :) And even if porting is a piece of work, I suspect there are a lot of people out there that would love to help this project. (given the lofty goal that the config would be written in D, and not Ruby)

D2 has many cool new features and I would love to use some of them, but every time I try they don't work. I'm tired of using a language that's not ready. I still think Tango is a better library and I like it better than Phobos. Although Phobos is doing a great job of filling in the feature gaps with every new release.

--
/Jacob Carlborg
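[Editor's note: the member-scanning sketch quoted in this thread would not compile as written — `typeof(member)` tests the type of the string `member`, not of the symbol it names. A hedged correction, with a hypothetical Target struct standing in for Orb's real type:]

```d
// Hypothetical sketch: scan a module's members and pick out the
// declarations of type Target. `name` is a string, so we get back to
// the symbol itself via __traits(getMember, ...).
module orb_orange;

struct Target { string flags; }

Target cool_lib;   // would come from mixin(import("orange.orbspec"))

void main()
{
    foreach (name; __traits(allMembers, orb_orange))
    {
        static if (is(typeof(__traits(getMember, orb_orange, name)) == Target))
        {
            // sort out priority and add to the worklist here
        }
    }
}
```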
Re: DIP11: Automatic downloading of libraries
"Dmitry Olshansky" wrote in message news:itn2el$2t2v$1...@digitalmars.com... > On 20.06.2011 12:25, Jacob Carlborg wrote: >> On 2011-06-19 22:28, Dmitry Olshansky wrote: >> >>> Why having name as run-time parameter? I'd expect more like (given there >>> is Target struct or class): >>> //somewhere at top >>> Target cool_lib, ...; >>> >>> then: >>> with(cool_lib) { >>> flags = "-L-lz"; >>> } >>> >>> I'd even expect special types like Executable, Library and so on. >> >> The user shouldn't have to create the necessary object. If it does, how >> would the tool get it then? >> > If we settle on effectively evaluating orbspec like this: > //first module > module orb_orange; > mixin(import ("orange.orbspec")); > // > > // builder entry point > void main() > { > foreach(member; __traits(allMembers, orb_orange)) > { > static if(typeof(member) == Target){ > //do necessary actions, sort out priority and construct a > worklist > } > else //static if (...) //...could be others I mentioned > { > } > } > //all the work goes there > } > > Should be straightforward? Alternatively with local imports we can pack it > in a struct instead of separate module, though errors in script would be > harder to report (but at least static constructors would be controlled!). > More adequatly would be, of course, to pump it to dmd from stdin... > Target would be part of Orb. Why not just make Target's ctor register itself with the rest of Orb?
Re: Rename std.string.toStringz?
Andrej Mitrovic wrote: > We could use std.conv.to to convert strings to char pointers and > vice-versa, however I know of at least two functions which have > different semantics which complicate things: > > std.string.toStringz only appends a null and returns the pointer, > std.windows.charset.toMBSz converts the string to the Windows 8-bit > charset and then appends the null and returns the pointer. > > So what would to!(char*)(str) do? > > The opposite, which currently works: > to!(string)(charPtr); > > assumes the char pointer is just a null-terminated string. IIRC there > were bugs in std.registry that were recently fixed where > to!string(char*) was used instead of the fromMBSz function. > > So assuming to!() will always do the right thing could be a bad idea. Using char to hold anything different from a UTF-8 code unit is a bad idea. It is like using a pointer to store an integer value. Cheers, -Timon
Re: Rename std.string.toStringz?
We could use std.conv.to to convert strings to char pointers and vice-versa, however I know of at least two functions which have different semantics which complicate things: std.string.toStringz only appends a null and returns the pointer, std.windows.charset.toMBSz converts the string to the Windows 8-bit charset and then appends the null and returns the pointer. So what would to!(char*)(str) do? The opposite, which currently works: to!(string)(charPtr); assumes the char pointer is just a null-terminated string. IIRC there were bugs in std.registry that were recently fixed where to!string(char*) was used instead of the fromMBSz function. So assuming to!() will always do the right thing could be a bad idea.
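[Editor's note: a concrete illustration of the first of the two functions named above (std.windows.charset.toMBSz is Windows-only and omitted). toStringz appends a terminator; the reverse direction, to!string, simply assumes one is present:]

```d
import std.conv : to;
import std.string : toStringz;

void main()
{
    string s = "hello";

    // toStringz: append '\0' and return a pointer usable by C APIs.
    auto p = toStringz(s);

    // The reverse conversion discussed in the thread: to!string scans
    // until the first '\0' -- correct only if p really points at a
    // null-terminated UTF-8 string.
    string back = to!string(p);
    assert(back == "hello");
}
```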
Re: Rename std.string.toStringz?
On 2011-06-20 12:20, Steven Schveighoffer wrote: > On Mon, 20 Jun 2011 15:15:36 -0400, Jonathan M Davis > > wrote: > > On 2011-06-20 10:43, Steven Schveighoffer wrote: > >> On Mon, 20 Jun 2011 09:23:22 -0400, Andrei Alexandrescu > >> > >> wrote: > >> > Technically you're right. Yet I think it's pretty widespread that a > >> > >> sole > >> > >> > char* means a zero-terminated string. > >> > >> I think it's pretty widespread that you shouldn't be using > >> zero-terminated > >> strings ;) > >> > >> But I suppose it makes sense that to can convert from a char[] to a char > >> *, and if it does, it doesn't hurt to do the safest thing. I think it > >> should be discouraged, however, in favor of doing toUTFz which is more > >> descriptive as a function name. > > > > So, you're arguing that we should introduce toUTFz for converting > > character > > arrays to zero-terminated strings, and then have std.conv.to use it when > > converting from character arrays to character pointers? > > Exactly. The reason for to calling it is because that is the safest > option (albeit not completely safe). Well, in general, we try and avoid having multiple ways to do the same thing like that, but in this case, it does seem to me like it's probably the way to go. - Jonathan M Davis
Re: DIP 11: trial partial implementation
"Jacob Carlborg" wrote in message news:itljh6$d4l$1...@digitalmars.com... > On 2011-06-19 20:31, Nick Sabalausky wrote: >> "Jacob Carlborg" wrote in message >> news:itkp2l$1ru0$1...@digitalmars.com... >>> On 2011-06-19 02:10, Adam D. Ruppe wrote: http://arsdnet.net/dcode/build2.d * Be fast. It loops dmd like my old build.d. (I can't find a better way to do it. Even rdmd always runs dmd at least twice - check its source!) >>> >>> That shouldn't be necessary. >>> >>> First run: >>> >>> * Run the compiler once with the -deps flag to collect the dependencies >>> * Run the compiler again to compile everything >>> * Cache dependencies >>> >>> Later runs: >>> >>> * Run the compiler once with the -deps flag and compile everything >> >> Using the -deps flag to *just* get the deps is very fast. Much faster >> than a >> full compile. > > I understand that that would be faster when the dependencies have changed > but if they haven't then you just have to run the compiler once. Don't > know what would be best to do though. > > BTW, to just get the dependencies, would that be with the -deps and -c > flags? Is the a better way? I mean if you just specify the -deps flag it > will do a full compilation. Seems to me that skipping linking (-c flag) is > a little too much as well for what's actually necessary. Would be good to > have a flag that does only what's absolutely necessary for tracking > dependencies. > What I meant was that doing a deps-only run is fast enough that doing it every time shouldn't be a problem. However, I am starting to wonder if RDMD's functionality should built into DMD (ideally in a way that LDC/GDC wouldn't have to re-implement it themselves). DDMD does take about a minute or so to compile, and while the deps-only run is the faster part, it's not insignificant. But then, maybe that's just due to some fixable inefficiency in DMD? There's very little templates/ctfe involved.
Re: Is it too late to change the name of this language?
"Mike James" wrote in message news:itn6h2$3og$1...@digitalmars.com... > "Benjamin Lindley" wrote in message > news:itgcgc$2lnk$1...@digitalmars.com... >> I'm new to this language, and so far, I really like it. But that name is >> unsearchable. Don't you guys think that hinders the language from >> catching on? Yes, you can search for D Programming Language, but that >> doesn't help find pages where the author only calls it D. Is it too late >> to change the name? Possibly deprecate it? > > You could call it Symbol... > > The Language Formally Known As D. I'm detecting a reference to "The artist formerly known as 'the artist formerly known as Prince'". That's a long name, but he earned it ;)
Re: Rename std.string.toStringz?
On Mon, 20 Jun 2011 15:15:36 -0400, Jonathan M Davis wrote: On 2011-06-20 10:43, Steven Schveighoffer wrote: On Mon, 20 Jun 2011 09:23:22 -0400, Andrei Alexandrescu wrote: > Technically you're right. Yet I think it's pretty widespread that a sole > char* means a zero-terminated string. I think it's pretty widespread that you shouldn't be using zero-terminated strings ;) But I suppose it makes sense that to can convert from a char[] to a char *, and if it does, it doesn't hurt to do the safest thing. I think it should be discouraged, however, in favor of doing toUTFz which is more descriptive as a function name. So, you're arguing that we should introduce toUTFz for converting character arrays to zero-terminated strings, and then have std.conv.to use it when converting from character arrays to character pointers? Exactly. The reason for to calling it is because that is the safest option (albeit not completely safe). -Steve
Re: Rename std.string.toStringz?
On 2011-06-20 10:43, Steven Schveighoffer wrote: > On Mon, 20 Jun 2011 09:23:22 -0400, Andrei Alexandrescu > > wrote: > > Technically you're right. Yet I think it's pretty widespread that a sole > > char* means a zero-terminated string. > > I think it's pretty widespread that you shouldn't be using zero-terminated > strings ;) > > But I suppose it makes sense that to can convert from a char[] to a char > *, and if it does, it doesn't hurt to do the safest thing. I think it > should be discouraged, however, in favor of doing toUTFz which is more > descriptive as a function name. So, you're arguing that we should introduce toUTFz for converting character arrays to zero-terminated strings, and then have std.conv.to use it when converting from character arrays to character pointers? - Jonathan M Davis
Re: what to do with postblit on the heap?
On 2011-06-20 11:56, Jose Armando Garcia wrote: > On Mon, Jun 20, 2011 at 12:03 PM, bearophile wrote: > > Steven Schveighoffer: > > A solution is to add this information at runtime, a type tag to structs > > that have a postblit and/or destructor. But then structs aren't PODs any > > more. There are other places to store this information, like in some > > kind of associative array. > > What are PODs? Plain Old Datatype. It's a user-defined data type with member variables but no functions. It just holds data. - Jonathan M Davis
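[Editor's note: a quick D illustration of the distinction (hypothetical types, just for the definition above). A POD carries only data; adding a postblit or destructor makes a struct non-POD, which is exactly what complicates the runtime's job in this thread:]

```d
// POD: nothing but data; safe to copy with a plain memcpy.
struct Point
{
    int x, y;
}

// Not a POD: the postblit/destructor pair must run on every copy
// and every destruction, so a raw memcpy is no longer enough.
struct Counted
{
    int* count;
    this(this) { if (count) ++*count; }   // postblit
    ~this()    { if (count) --*count; }   // destructor
}
```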
Re: what to do with postblit on the heap?
On Mon, Jun 20, 2011 at 12:03 PM, bearophile wrote: > Steven Schveighoffer: > A solution is to add this information at runtime, a type tag to structs that > have a postblit and/or destructor. But then structs aren't PODs any more. > There are other places to store this information, like in some kind of > associative array. What are PODs?
Re: what to do with postblit on the heap?
On Mon, Jun 20, 2011 at 11:34 AM, Steven Schveighoffer wrote: > But when the data is on the heap, the destructor is *not* called. So what > happens to any ref-counted data that is on the heap? It's never > decremented. Currently though, it might still work, because postblit isn't > called when the data is on the heap! So no increment, no decrement. What? That makes it impossible/difficult to do RAII and invalidates my strong/weak ref counting design for sockets and selector. Is there a technical reason why the GC is not calling the dtor? Or is this because we haven't gotten around to implementing it? Thanks, -Jose
Re: Rename std.string.toStringz?
On Mon, 20 Jun 2011 09:23:22 -0400, Andrei Alexandrescu wrote: Technically you're right. Yet I think it's pretty widespread that a sole char* means a zero-terminated string. I think it's pretty widespread that you shouldn't be using zero-terminated strings ;) But I suppose it makes sense that to can convert from a char[] to a char *, and if it does, it doesn't hurt to do the safest thing. I think it should be discouraged, however, in favor of doing toUTFz which is more descriptive as a function name. -Steve
Re: Yet another slap on the hand by implicit bool to int conversions
Btw, the reason why I made a sloppy mistake was because the code was:

if (mmioRead (hmmio, (LPSTR) &drum, sizeof (DRUM)) != sizeof (DRUM))

sizeof(type) had to be converted to type.sizeof, and that's where I screwed things up. Maybe it was just an overreaction. This could be a rare bug. But sizeof is checkable at compile time, so a lint-like tool could catch these comparisons that don't make much sense. As for making bool->int a warning, it's probably overkill. Not even GCC complains about it with all warnings turned on, so I guess it's not a common error.
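[Editor's note: the silent conversion this thread is about is easy to demonstrate — a minimal sketch, not the original mmioRead code:]

```d
// bool converts to int implicitly, so a comparison that slips into an
// argument position compiles without complaint.
void main()
{
    int n = (int.sizeof == 4);  // a bool comparison, silently an int
    assert(n == 1);             // true became 1 -- no warning issued
}
```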
Re: what to do with postblit on the heap?
Steven Schveighoffer: > But my immediate question is -- is it better to half-fix the problem by > committing my changes, or leave the issue alone? I suggest to leave the issue alone. > Any solution that fixes the GC problem will have to store the typeinfo > somehow associated with the block. I think we may have more traction for > this problem with a precise GC. > > I don't think the right route is to store type info inside the struct > itself. This added overhead is not necessary for when the struct is > stored on the stack. > This is a possibility, making a struct only usable if it's inside another > such struct or inside a class, or on the stack. Given that D is a system language, and the general usefulness and ubiquity of structs, a third possibility is to do both and add an attribute to help enforcing what can't be done on PODs, or add more runtime info _on request_ where the programmer wants more flexible structs. This solves the situation, but has the disadvantage of increasing D complexity a little. Bye, bearophile
Re: Rename std.string.toStringz?
On 2011-06-20 06:23, Andrei Alexandrescu wrote: > On 6/20/11 7:23 AM, Steven Schveighoffer wrote: > > On Sun, 19 Jun 2011 09:20:17 -0400, Andrei Alexandrescu > > > > wrote: > >> On 6/18/11 5:42 PM, Jonathan M Davis wrote: > >>> On 2011-06-18 06:35, Andrei Alexandrescu wrote: > On 6/18/11 4:59 AM, Jonathan M Davis wrote: > > I'll look at renaming toUTF16z to toWStringz to match toStringz (as > > was > > suggested by a couple of people in this thread) > > That should be a template toUTFz that takes either char*, wchar*, or > dchar*. > >>> > >>> A good point. Are you arguing that toStringz should be replaced by > >>> such a > >>> construct? Or that it should simply exist in addition to toStringz? > >>> Also, we _could_ make it so that such a template would take the > >>> mutabality of > >>> the pointer as well (e.g. toUTF!(char*)(str), toUTF!(const(char)*), > >>> etc.), > >>> which would allow it to be used in cases where you actually want a > >>> mutable > >>> string (which toStringz doesn't do). > >>> > >>> - Jonathan M Davis > >> > >> I think that's a good idea, which would address that StackOverflow > >> problem too. > >> > >> The way I'd probably suggest we go about it is as a universal > >> transcoder. Define std.conv.to with strings of any width and > >> qualification as input and with pointers to characters of any width as > >> output. It is implied that the conversion entails adding a terminating > >> zero. > >> > >> string a = "hello"; > >> auto p = to!(wchar*)(a); // change width and qualifier > > > > I don't like relying on an implication is a zero character is added. A > > char * pointer may or may not be zero terminated (that is one of the > > issues with C), so you can't really designate a type to mean "zero > > terminated". > > Technically you're right. Yet I think it's pretty widespread that a sole > char* means a zero-terminated string. I don't know. I can see it being argued either way. 
I don't know why anyone would use a char*, wchar*, dchar*, etc. except for passing to C functions. But the lack of explicitness could be a problem. And there would be no guarantee that all character pointers are zero-terminated strings, which could cause problems if it's assumed that they are. So, I don't know. I suppose that we could just go the route of doing both. std.conv.to could call toUTFz in the case of casts to char*, wchar*, dchar*, etc. It's not exactly ideal, but then you can be either explicit or implicit. But we generally try to avoid doing that sort of thing... - Jonathan M Davis
Re: 'auto pure' is up, how about 'auto @safe'?
On 2011-06-20 03:23, Walter Bright wrote: > On 6/20/2011 2:42 AM, KennyTM~ wrote: > > But I think it should be able to detect @safe as well: > Sure, and nothrow too. But I thought I'd start with purity. Well, gotta start somewhere, but we definitely need the other two as well. So, does this mean that any template function where all of the calls that it makes are pure should now be pure without being marked as pure? - Jonathan M Davis
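[Editor's note: the inference being discussed works roughly like this — a sketch of the intended behavior, not a guarantee of what that particular DMD build accepted:]

```d
// A template function with no `pure` annotation. If everything its
// body does is pure, the compiler can infer purity at instantiation.
T twice(T)(T x) { return x + x; }

pure int usesIt()
{
    return twice(21);   // legal only if twice!int is inferred pure
}
```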
Re: Yet another slap on the hand by implicit bool to int conversions
On Mon, 20 Jun 2011 11:10:58 -0400, bearophile wrote: Steven Schveighoffer: In order to have a fix for something like this, you need the error to be near 100% invalid. Like nobody ever writes this as *valid* code: if(cond); no matter what cond is. My enhancement request was about redundancies in the code, that sometimes hide implicit errors, they can't be 'near 100% invalid'. In this case I am not looking for explicit errors, some/many of the redundancies aren't bugs. What I'm saying is, if the compiler errors out when a good percentage of the cases are valid, it will become a nuisance error, and be continually ignored/worked around. We do not want nuisance errors. Even if you make it an error only with -w, I think people will just stop using -w. It needs to be: a) almost always an error when it is encountered, and b) very easily worked around, but with code that isn't common. I think with the extra parentheses idea, b is satisfied, but a is not. I can't say that there isn't a case where a is not satisfied by some specific criteria, but I haven't seen it yet. -Steve
Re: Is there a standard way to define to for user-defined types?
On 6/20/11 7:14 AM, kenji hara wrote:

I have tried a conversion test.

import std.conv;

void main()
{
    tests1();
    tests2();
    testc1();
    testc2();
}

struct SS1 { ST1 opCast(T:ST1)() { return ST1(); } }
struct ST1 { }

void tests1()
{
    SS1 s1;
    auto t1a = cast(ST1)(s1);  // -> s1.opCast!ST1()
    // auto t1b = to!ST1(s1);  // NG
}

struct SS2 { }
struct ST2 { this(SS2 source) {} }

void tests2()
{
    SS2 s2;
    auto t2a = cast(ST2)(s2);  // -> ST2(s2) == ctor call
    // auto t2b = to!ST2(s2);  // NG
}

class CS1 { CT1 opCast(T:CT1)() { return new CT1(); } }
class CT1 { }

void testc1()
{
    CS1 s1 = new CS1();
    auto t1a = cast(CT1)(s1);  // -> s1.opCast!CT1()
    auto t1b = to!CT1(s1);
    // T toImpl(T, S)(S value) if (is(S : Object) && is(T : Object))
    //   -> cast(CT1)(s1) -> s1.opCast!CT1()
}

class CS2 { }
class CT2 { static CT2 opCall(CS2 source) { return new CT2(); } }
//class CT2 { this(CS2 source) {} } // Unfortunately, ctor is not called by CastExp.

void testc2()
{
    CS2 s2 = new CS2();
    auto t2a = cast(CT2)(s2);  // -> CT2(s2) == CT2.opCall(s2)
    // auto t2b = to!CT2(s2);  // compiled, but runtime error occurs
    // T toImpl(T, S)(S value) if (is(S : Object) && is(T : Object))
    //   -> cast(CT2)(s2) -> null
}

Some of my thoughts:

1. std.conv.to should support built-in casting behavior, at least on struct objects. This feature was recently fixed. See bug 5897.

Agreed. I simply forgot about that.

2. Using a to!T() member function from std.conv.to is an unnecessary feature, because it can be replaced by opCast!T.

I think that's fine too.

3. For class objects, I guess there is a need to support a 'Conversion Interface that Intrudes into the Target Type'. I think a 'conversion constructor call' is a good fit for it.

Not sure I understand that. Kenji, you may want to package your 1 and 2 into a pull request, and 3 into a separate pull request so we all can take a look.

Thanks, Andrei
Re: TempAlloc: an unusual request
On 6/20/11 10:01 AM, dsimcha wrote:

== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article

I do agree with your choice of scan flags because your analysis of costs, benefits, and onus put on the user is compelling. In this case, however, it seems to me that functions that implicitly return stuff on the TempAlloc stack are paving the way towards messed-up modules that can't be reasoned about modularly. Thanks, Andrei

There are two use cases for implicitly returning TempAlloc-allocated memory:

1. In a private API.

If we provide an artifact good for private APIs but dangerous for true modular code, I think this is a weak argument.

2. I have a few data structures that I may clean up and submit as a proposal later (hash table, hash set, AVL tree) whose implementations are specifically optimized for TempAlloc. For example, the hash table is provisionally called StackHash. I'd really rather write:

auto table = StackHash!(uint, double)(10);
table[666] = 8675309;

rather than:

auto table = StackHash!(uint, double)(10);
table.addElement(666, 8675309, someLongVerboseAllocator);

Couldn't StackHash's constructor accept an allocator as an argument? I think there are good ways to solve these issues if we start from a shared view that implicit region allocation has a lot going against it. In that case, I'm sure we'll find creative solutions. Right now it seems we're engaging in a pattern of debating disadvantages of passing regions explicitly, which ignores the disadvantages of having implicit ranges.

Thanks, Andrei
Re: what to do with postblit on the heap?
On Mon, 20 Jun 2011 11:03:27 -0400, bearophile wrote: Steven Schveighoffer: The other part of this puzzle that is missing is array assignment, for example a[] = b[] does not call postblits. I cannot fix this because _d_arraycopy does not give me the typeinfo. This seems fixable. Is it possible to rewrite _d_arraycopy? The compiler is the one passing the parameters to _d_arraycopy, so even if I change _d_arraycopy to accept a TypeInfo, the compiler needs to be fixed to send the TypeInfo. I think this is really a no-brainer, because currently what is passed is the element size, which is contained within the TypeInfo. I will be filing a bug on that. But currently, I can't fix it. Anyone else have any thoughts? I think the current situation is not acceptable. This is a problem quite worse than _d_arraycopy because here some information is missing. Isn't this is the same problem with struct destructors? This is an easy fix -- the typeinfo contains information of whether or not and how to run the postblit. The larger problem is the GC not calling the destructor. But my immediate question is -- is it better to half-fix the problem by committing my changes, or leave the issue alone? A solution is to add this information at runtime, a type tag to structs that have a postblit and/or destructor. But then structs aren't PODs any more. There are other places to store this information, like in some kind of associative array. Any solution that fixes the GC problem will have to store the typeinfo somehow associated with the block. I think we may have more traction for this problem with a precise GC. I don't think the right route is to store type info inside the struct itself. This added overhead is not necessary for when the struct is stored on the stack. Another solution is to forbid what the compiler can't guarantee. If a struct is going to be used only where its type is known, then it's allowed to have postblit and destructor. Is it possible to enforce this? I think it is. 
Here an @annotation is useful to better manage this contract between programmer and compiler. This is a possibility, making a struct only usable if it's inside another such struct or inside a class, or on the stack. -Steve
Re: Yet another slap on the hand by implicit bool to int conversions
Steven Schveighoffer: > In order to have a fix for something like this, you need the error to be > near 100% invalid. Like nobody ever writes this as *valid* code: > > if(cond); > > no matter what cond is. My enhancement request was about redundancies in the code, that sometimes hide implicit errors, they can't be 'near 100% invalid'. In this case I am not looking for explicit errors, some/many of the redundancies aren't bugs. Bye, bearophile
Re: TempAlloc: an unusual request
On 6/20/11 10:02 AM, dsimcha wrote: == Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article On 6/19/11 6:20 PM, dsimcha wrote: My other concern is that giving RegionAllocator reference semantics would, IIUC, require an allocation to allocate a RegionAllocator. Since TempAlloc is designed to avoid global GC locks/world stopping like the plauge, this is obviously bad. I am hoping we can arrange things such that a RegionAllocator created from scratch initializes a new frame, whereas subsequent copies of it use that same frame. Would that work? Andrei No. I don't want every creation of a new frame to require a GC heap allocation. I don't understand why such would be necessary. Andrei
Re: Rename std.string.toStringz?
> No, it would be inconsistent from a user's point of view, since for all > other types, to!Foo(xyz) returns a Foo, and not something else. > > David Well, yes, to!c_str(xyz) returns a c string, and not something else.
Re: TempAlloc: an unusual request
== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article > On 6/19/11 6:20 PM, dsimcha wrote: > > My other concern is that giving RegionAllocator reference semantics > > would, IIUC, require an allocation to allocate a RegionAllocator. Since > > TempAlloc is designed to avoid global GC locks/world stopping like the > > plauge, this is obviously bad. > I am hoping we can arrange things such that a RegionAllocator created > from scratch initializes a new frame, whereas subsequent copies of it > use that same frame. Would that work? > Andrei No. I don't want every creation of a new frame to require a GC heap allocation.
Re: what to do with postblit on the heap?
Steven Schveighoffer: > The other part of this puzzle that is missing is array assignment, for > example a[] = b[] does not call postblits. I cannot fix this because > _d_arraycopy does not give me the typeinfo. This seems fixable. Is it possible to rewrite _d_arraycopy? > Anyone else have any thoughts? I think the current situation is not acceptable. This is a problem quite worse than _d_arraycopy because here some information is missing. Isn't this is the same problem with struct destructors? A solution is to add this information at runtime, a type tag to structs that have a postblit and/or destructor. But then structs aren't PODs any more. There are other places to store this information, like in some kind of associative array. Another solution is to forbid what the compiler can't guarantee. If a struct is going to be used only where its type is known, then it's allowed to have postblit and destructor. Is it possible to enforce this? I think it is. Here an @annotation is useful to better manage this contract between programmer and compiler. Bye, bearophile
Re: TempAlloc: an unusual request
== Quote from Andrei Alexandrescu (seewebsiteforem...@erdani.org)'s article
> I do agree with your choice of scan flags because your analysis of
> costs, benefits, and onus put on the user is compelling. In this case,
> however, it seems to me that functions that implicitly return stuff on
> the TempAlloc stack are paving the way towards messed-up modules that
> can't be reasoned about modularly.
> Thanks,
> Andrei

There are two use cases for implicitly returning TempAlloc-allocated memory:

1. In a private API.

2. I have a few data structures that I may clean up and submit as a proposal later (hash table, hash set, AVL tree) whose implementations are specifically optimized for TempAlloc. For example, the hash table is provisionally called StackHash. I'd really rather write:

auto table = StackHash!(uint, double)(10);
table[666] = 8675309;

rather than:

auto table = StackHash!(uint, double)(10);
table.addElement(666, 8675309, someLongVerboseAllocator);
Re: Yet another slap on the hand by implicit bool to int conversions
On Mon, 20 Jun 2011 09:42:45 -0400, Daniel Gibson wrote: Am 20.06.2011 15:31, schrieb Steven Schveighoffer: On Mon, 20 Jun 2011 09:17:56 -0400, Daniel Gibson wrote: Am 20.06.2011 14:47, schrieb Steven Schveighoffer: On Sun, 19 Jun 2011 19:42:22 -0400, bearophile wrote: Timon Gehr: Maybe DMD could warn on nonsense of the form x != x. Right. This thread is very good food for this enhancement request of mine: http://d.puremagic.com/issues/show_bug.cgi?id=5540 Vote for this enhancement :-) I don't think this is a good idea. Generic code could result in an error where there shouldn't be. For example: int foo(T)() { if(T.sizeof == char.sizeof) ... } Would this fail to compile where T == char? What about if T == ubyte? Generic programming sometimes results in silly code that is perfectly acceptable as generic code, and we need to take this into account before making decisions assuming a person is writing the code. -Steve It probably makes more sense to use static if in that case - and static if could be an exception for these rules. static if has different semantics than if (it doesn't create a scope), so maybe I want to use if. But even so, static if is just as likely to contain these bugs as a normal if. Both use an expression to determine whether the if should run or not, and there are quite a few constructs that can be used in a static if expression. Disallowing some things in if that are allowed in static if would at least catch most bugs of this kind because normal if is more common. Maybe, as an alternative, "nonsensical" expressions could be allowed if template parameters are involved. Don't know how hard it is to implement this, though. Another alternative: nonsensical expressions need to be enclosed by an extra pair of parentheses.
I think I have seen this for cases like if( (x=foo()) ) {}, meaning if( foo() != 0), but x is assigned that value at the same time (so you could as well write "if( (x=foo()) != 0)", but I don't remember in what language (D doesn't allow it). Maybe it was a warning of g++? Another point, let's say you know the size of an int is 4. And you write something like: if(read(stream, ptr, int.sizeof == 4)) where you meant to write: if(read(stream, ptr, int.sizeof) == 4) How can the compiler "catch" this? It's the equivalent of int.sizeof == int.sizeof, but it's not exactly written that way. Another example: if(read(stream, &x, x.sizeof == int.sizeof)) // x is of type int It just seems too arbitrary to me to say exact_expression == exact_expression is invalid. It doesn't solve all the cases where you mistype something, and it can throw errors that are nuisances. In order to have a fix for something like this, you need the error to be near 100% invalid. Like nobody ever writes this as *valid* code: if(cond); no matter what cond is. -Steve
Re: Yet another slap on the hand by implicit bool to int conversions
Steven Schveighoffer: > I don't think this is a good idea. Generic code could result in an error > where there shouldn't be. For example: > > int foo(T)() > { > if(T.sizeof == char.sizeof) > ... > } > > Would this fail to compile where T == char? What about if T == ubyte? I was thinking more about a warning than an error. But you are right, my enhancement request 5540 seems premature, I/we have to think some more about it. Bye and sorry, bearophile
what to do with postblit on the heap?
I have submitted a fix for bug 5272, http://d.puremagic.com/issues/show_bug.cgi?id=5272 "Postblit not called on copying due to array append" However, I am starting to realize that one of the major reasons for postblit is to match it with an equivalent dtor. This works well when the struct is on the stack -- the postblit for instance increments a reference counter, then the dtor decrements the ref counter. But when the data is on the heap, the destructor is *not* called. So what happens to any ref-counted data that is on the heap? It's never decremented. Currently though, it might still work, because postblit isn't called when the data is on the heap! So no increment, no decrement. I think this is an artificial "success". However, if the pull request I initiated is accepted, then postblit *will* be called on heap allocation, for instance if you append data. This will further highlight the fact that the destructor is not being called. So is it worth adding calls to postblit, knowing that the complementary destructor is not going to be called? I can see some cases where it would be expected, and I can see other cases where it will be difficult to deal with. IMO, the difficult cases are already broken anyway, but it just seems like they are not. The other part of this puzzle that is missing is array assignment, for example a[] = b[] does not call postblits. I cannot fix this because _d_arraycopy does not give me the typeinfo. Anyone else have any thoughts? I'm mixed as to whether this patch should be accepted without more comprehensive GC/compiler reform. I feel it's a step in the right direction, but that it will upset the balance in a few places (particularly ref-counting). -Steve
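To make the hazard concrete, here is a minimal sketch (not code from the patch in question; the names are illustrative only) of the kind of reference-counted struct being described, where the postblit and destructor are meant to pair up:

```d
import core.stdc.stdlib : free, malloc;

struct RefCounted
{
    size_t* count;  // shared counter, allocated outside the GC

    static RefCounted create()
    {
        RefCounted r;
        r.count = cast(size_t*) malloc(size_t.sizeof);
        *r.count = 1;
        return r;
    }

    this(this)   // postblit: runs whenever a copy is made
    {
        if (count !is null) ++*count;
    }

    ~this()      // dtor: runs when a copy dies -- but only for stack copies
    {
        if (count !is null && --*count == 0)
            free(count);
    }
}
```

If a copy of such a struct lands on the GC heap, say via an array append, the patched runtime would run the postblit (increment), but the GC never runs the matching destructor (decrement), so the count can never return to zero. That is the imbalance under discussion.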
Re: TempAlloc: an unusual request
On 6/19/11 6:20 PM, dsimcha wrote: My other concern is that giving RegionAllocator reference semantics would, IIUC, require an allocation to allocate a RegionAllocator. Since TempAlloc is designed to avoid global GC locks/world stopping like the plague, this is obviously bad. I am hoping we can arrange things such that a RegionAllocator created from scratch initializes a new frame, whereas subsequent copies of it use that same frame. Would that work? Andrei
Re: Yet another slap on the hand by implicit bool to int conversions
Am 20.06.2011 15:31, schrieb Steven Schveighoffer: > On Mon, 20 Jun 2011 09:17:56 -0400, Daniel Gibson > wrote: > >> Am 20.06.2011 14:47, schrieb Steven Schveighoffer: >>> On Sun, 19 Jun 2011 19:42:22 -0400, bearophile >>> wrote: >>> Timon Gehr: > Maybe DMD could warn on nonsense of the form x != x. Right. This thread is very good food for this enhancement request of mine: http://d.puremagic.com/issues/show_bug.cgi?id=5540 Vote for this enhancement :-) >>> >>> I don't think this is a good idea. Generic code could result in an >>> error where there shouldn't be. For example: >>> >>> int foo(T)() >>> { >>>if(T.sizeof == char.sizeof) >>> ... >>> } >>> >>> Would this fail to compile where T == char? What about if T == ubyte? >>> >>> Generic programming sometimes results in silly code that is perfectly >>> acceptable as generic code, and we need to take this into account before >>> making decisions assuming a person is writing the code. >>> >>> -Steve >> >> It probably makes more sense to use static if in that case - and static >> if could be an exception for these rules. > > static if has different semantics than if (it doesn't create a scope), > so maybe I want to use if. But even so, static if is just as likely to > contain these bugs as a normal if. Both use an expression to determine > whether the if should run or not, and there are quite a few constructs > that can be used in a static if expression. > Disallowing some things in if that are allowed in static if at least catch most bugs of this kind because normal if is more common. Maybe, as an alternative, "nonsensical" expressions could be allowed if template parameters are involved. Don't know how hard it is to implement this, though. Another alternative: nonsensical expressions need to be enclosed by an extra pair of parenthesis. 
I think I have seen this for cases like if( (x=foo()) ) {}, meaning if( foo() != 0), but x is assigned that value at the same time (so you could as well write "if( (x=foo()) != 0)", but I don't remember in what language (D doesn't allow it). Maybe it was a warning of g++? Cheers, - Daniel
Re: TempAlloc: an unusual request
On 6/19/11 9:48 PM, dsimcha wrote: On 6/19/2011 8:00 PM, Andrei Alexandrescu wrote: On 06/19/2011 06:20 PM, dsimcha wrote: It's ok to allow an object to replace frameInit and frameFree for conformance to the general allocator interface, but I'd need to keep frameInit and frameFree around. They're useful for functions that return pointers to TempAlloc-allocated memory. I don't want to have to pass in a RegionAllocator object to each of these because it's too verbose. I want to be able to do: void doStuff() { TempAlloc.frameInit(); scope(exit) TempAlloc.frameFree(); auto arr = getArray(); } uint[] getArray() { return TempAlloc.newArray!(uint[])(5); } My other concern is that giving RegionAllocator reference semantics would, IIUC, require an allocation to allocate a RegionAllocator. Since TempAlloc is designed to avoid global GC locks/world stopping like the plague, this is obviously bad. I was actually glad of that particular outcome... ??? What outcome? I was glad that one needs to pass a TempAlloc object down to the function, instead of that function silently returning memory allocated with TempAlloc. Generally I have a very strong stance against stuff that's simultaneously terse, implied, and unsafe. In fact the moment you mentioned that the one reason against passing TempAlloc objects down is verboseness, I interpreted that as a good argument why it's _good_ to do that. I do agree with your choice of scan flags because your analysis of costs, benefits, and onus put on the user is compelling. In this case, however, it seems to me that functions that implicitly return stuff on the TempAlloc stack are paving the way towards messed-up modules that can't be reasoned about modularly. Thanks, Andrei
Re: Yet another slap on the hand by implicit bool to int conversions
On Mon, 20 Jun 2011 09:17:56 -0400, Daniel Gibson wrote: Am 20.06.2011 14:47, schrieb Steven Schveighoffer: On Sun, 19 Jun 2011 19:42:22 -0400, bearophile wrote: Timon Gehr: Maybe DMD could warn on nonsense of the form x != x. Right. This thread is very good food for this enhancement request of mine: http://d.puremagic.com/issues/show_bug.cgi?id=5540 Vote for this enhancement :-) I don't think this is a good idea. Generic code could result in an error where there shouldn't be. For example: int foo(T)() { if(T.sizeof == char.sizeof) ... } Would this fail to compile where T == char? What about if T == ubyte? Generic programming sometimes results in silly code that is perfectly acceptable as generic code, and we need to take this into account before making decisions assuming a person is writing the code. -Steve It probably makes more sense to use static if in that case - and static if could be an exception for these rules. static if has different semantics than if (it doesn't create a scope), so maybe I want to use if. But even so, static if is just as likely to contain these bugs as a normal if. Both use an expression to determine whether the if should run or not, and there are quite a few constructs that can be used in a static if expression. -Steve
Re: DIP11: Automatic downloading of libraries
On 6/20/11 6:35 AM, Jacob Carlborg wrote: On 2011-06-20 10:59, Dmitry Olshansky wrote: On 20.06.2011 12:25, Jacob Carlborg wrote: On 2011-06-19 22:28, Dmitry Olshansky wrote: Why having name as run-time parameter? I'd expect more like (given there is Target struct or class): //somewhere at top Target cool_lib, ...; then: with(cool_lib) { flags = "-L-lz"; } I'd even expect special types like Executable, Library and so on. The user shouldn't have to create the necessary object. If it does, how would the tool get it then? If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of separate module, though errors in script would be harder to report (but at least static constructors would be controlled!). More adequatly would be, of course, to pump it to dmd from stdin... I had no idea that you could do that. It seems somewhat complicated and like a hack. Also note that Orbit is currently written in D1, which doesn't have __traits. std.benchmark (https://github.com/D-Programming-Language/phobos/pull/85) does that, too. Overall I believe porting Orbit to D2 and making it use D2 instead of Ruby in configuration would increase its chances to become popular and accepted in tools/. Andrei
Re: Rename std.string.toStringz?
On 6/20/11 7:23 AM, Steven Schveighoffer wrote: On Sun, 19 Jun 2011 09:20:17 -0400, Andrei Alexandrescu wrote: On 6/18/11 5:42 PM, Jonathan M Davis wrote: On 2011-06-18 06:35, Andrei Alexandrescu wrote: On 6/18/11 4:59 AM, Jonathan M Davis wrote: I'll look at renaming toUTF16z to toWStringz to match toStringz (as was suggested by a couple of people in this thread) That should be a template toUTFz that takes either char*, wchar*, or dchar*. A good point. Are you arguing that toStringz should be replaced by such a construct? Or that it should simply exist in addition to toStringz? Also, we _could_ make it so that such a template would take the mutability of the pointer as well (e.g. toUTF!(char*)(str), toUTF!(const(char)*), etc.), which would allow it to be used in cases where you actually want a mutable string (which toStringz doesn't do). - Jonathan M Davis I think that's a good idea, which would address that StackOverflow problem too. The way I'd probably suggest we go about it is as a universal transcoder. Define std.conv.to with strings of any width and qualification as input and with pointers to characters of any width as output. It is implied that the conversion entails adding a terminating zero. string a = "hello"; auto p = to!(wchar*)(a); // change width and qualifier I don't like relying on an implication that a zero character is added. A char * pointer may or may not be zero terminated (that is one of the issues with C), so you can't really designate a type to mean "zero terminated". Technically you're right. Yet I think it's pretty widespread that a sole char* means a zero-terminated string. Andrei
Re: Yet another slap on the hand by implicit bool to int conversions
Am 20.06.2011 14:47, schrieb Steven Schveighoffer: > On Sun, 19 Jun 2011 19:42:22 -0400, bearophile > wrote: > >> Timon Gehr: >> >>> Maybe DMD could warn on nonsense of the form x != x. >> >> Right. This thread is very good food for this enhancement request of >> mine: >> http://d.puremagic.com/issues/show_bug.cgi?id=5540 >> >> Vote for this enhancement :-) > > I don't think this is a good idea. Generic code could result in an > error where there shouldn't be. For example: > > int foo(T)() > { >if(T.sizeof == char.sizeof) > ... > } > > Would this fail to compile where T == char? What about if T == ubyte? > > Generic programming sometimes results in silly code that is perfectly > acceptable as generic code, and we need to take this into account before > making decisions assuming a person is writing the code. > > -Steve It probably makes more sense to use static if in that case - and static if could be an exception for these rules. Cheers, - Daniel
Re: Article discussing Go, could well be D
On 2011-06-20 14:02, Johannes Pfau wrote: Jacob Carlborg wrote: Currently I have the three-part-version as default and then a custom version (which basically can contain anything). The reason for the three-part-version scheme is explained in the wiki. So it's to have defined semantics for version changes, to standardize things like API breakage. I think this makes sense, although it forces a special versioning scheme on users it might be worth it. It doesn't force a version scheme, you can always use a custom version but then you won't be able to use the "~>" operator. Which is the whole reason for using this version scheme. It might really be overkill. But consider this example: package FOO requires libjson >= 0.0.1 as a dynamic library. package BAR requires latest libjson from git as a dynamic library. Now FOO could use libjson-git, but how does the package manager know that? It cannot know whether the git version is more recent than 0.0.1. It's also not possible to install both libraries at a time, as both are dynamic libraries with the same name. We now have a conflict where you can only install FOO or BAR, but not both. Ok, I think I understand now. Thanks for the explanation. -- /Jacob Carlborg
TempAlloc review cancelled
This is just to inform you all that the review and subsequent vote for inclusion of David Simcha's TempAlloc in Phobos has been cancelled, pending the design of a general allocator interface. Thanks to everyone who posted reviews and comments. I am sure it will all be taken into account in the new design. -Lars
Re: DIP11: Automatic downloading of libraries
On Sat, 18 Jun 2011 23:33:29 -0400, Daniel Murphy wrote: "Jacob Carlborg" wrote in message news:iti35g$2r4r$2...@digitalmars.com... That seems cool. But, you would want to write the plugin in D and that's not possible yet on all platforms? Or should everything be done with extern(C), does that work? Yeah, it won't be possible to do it all in D until we have .so's working on linux etc, which I think is a while off yet. Although this could be worked around by writing a small loader in C++ and using another process (written in D) to do the actual work. Maybe it would be easier to build dmd as a shared lib (or a static lib) and just provide a different front... My point is that the compiler can quite easily be modified to allow it to pass pretty much anything (missing imports, pragma(lib), etc) to a build tool, and it should be fairly straightforward for the build tool to pass things back in (adding objects to the linker etc). This could allow single-pass full compilation even when the libraries need to be fetched off the internet. It could also allow separate compilation of several source files at once, without having to re-do parsing+semantic each time. Can dmd currently do this? Most importantly it keeps knowledge about URLs and downloading files outside the compiler, where IMO it does not belong. Note the current proposal does exactly what you are looking for, but does it via processes and command line instead of dlls. This opens up numerous avenues of implementation (including re-using already existing utilities), plus keeps it actually separated (i.e. a dll/so can easily corrupt the memory of the application, whereas a separate process cannot). -Steve
Re: DIP11: Automatic downloading of libraries
Jacob Carlborg wrote: > I had no idea that you could do that. It seems somewhat complicated > and like a hack. There's nothing really hacky about that - it's a defined and fairly complete part of the language. It's simpler than it looks too... the syntax is slightly long, but conceptually, you're just looping over an array of members. Combined with the stuff in std.traits to make it a little simpler, there's lots of nice stuff you can do in there.
Re: Is there a standard way to define to for user-defined types?
On 20.06.2011 7:56, Paul D. Anderson wrote: Jonathan M Davis Wrote: For instance, if I want to make it legal to pass a core.time.TickDuration to to!(core.time.Duration) instead of casting it (which is actually why I've been think of this issue), what is the standard way to do that? Or isn't there one? I'm not aware of one. And if there isn't one, how should we do it? I can think of 3 possible ways: 1. Overload to in the module with the type being converted from. So, for instance, core.time would have an overload for to which takes a TickDuration and returns a Duration (either that or std.datetime if it didn't work to have that in druntime for some reason). I'm not sure if that'll cause problems with overload sets or not though. 2. Make it so that std.conv.to can do its thing based on opCast. If a type overloads opCast, then std.conv.to can use that opCast to do the conversion (but only if opCast is defined, not for just any cast which may or may not be valid). 3. Make it so that user-defined types have a semi-standard member function (e.g. to) which std.conv.to looks for and uses for conversions if it's there. Which of those would you consider to be the best? Or can you think of another, better way? It seems to me that we need an essentially standard way of defining conversions which use to. Otherwise, the only option is to use opCast, and while there's nothing wrong with overloading opCast, it would definitely be preferable to use to for safe conversions. Thoughts? - Jonathan M Davis I'd also like to see a solution to this, primarily because std.conv.to is so useful. I don't think #1 works. At least, I've tried it and it gets confused on overloading. But maybe I didn't do it right. Along this same line, is there a way to write a ToImpl that std.conv can recognize? Again, I've tried this without success, but that doesn't mean it can't be done. #2 is problematic because I might want to to! things I might not want to cast to. But it would be better than nothing. 
I'd vote for #3. It's not much different than having std.conv look for a toString() function, which is great for to!string In my case, I've created an alternative conversion module and only implemented the types I need; then I've added the types I defined. But I know this will bite me when I try to integrate with other modules. I don't care much how this is implemented but it would be a very useful tool. Paul I think option #3 could be great, and the name of that templated function could be ... to! -- Dmitry Olshansky
Re: Is there a standard way to define to for user-defined types?
On Sun, 19 Jun 2011 23:22:23 -0400, Jonathan M Davis wrote: For instance, if I want to make it legal to pass a core.time.TickDuration to to!(core.time.Duration) instead of casting it (which is actually why I've been think of this issue), what is the standard way to do that? Or isn't there one? I'm not aware of one. And if there isn't one, how should we do it? I can think of 3 possible ways: 1. Overload to in the module with the type being converted from. So, for instance, core.time would have an overload for to which takes a TickDuration and returns a Duration (either that or std.datetime if it didn't work to have that in druntime for some reason). I'm not sure if that'll cause problems with overload sets or not though. 2. Make it so that std.conv.to can do its thing based on opCast. If a type overloads opCast, then std.conv.to can use that opCast to do the conversion (but only if opCast is defined, not for just any cast which may or may not be valid). 3. Make it so that user-defined types have a semi-standard member function (e.g. to) which std.conv.to looks for and uses for conversions if it's there. I vote for 3. However, it should not be called 'to', because you may want to call to!X(y) in a member function, which would resolve to your member to, not the global to. -Steve
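As a rough illustration of option 3 and of the naming concern above, a conversion routine can detect such a member with compile-time introspection. Everything below (the member name toTarget, the wrapper convert) is hypothetical and not an actual std.conv API:

```d
import std.conv : to;

struct Duration { long hnsecs; }

struct TickDuration   // simplified stand-in for core.time.TickDuration
{
    long ticks;

    // Named toTarget rather than 'to': a member called 'to' would shadow
    // std.conv.to inside other member functions, as noted above.
    Duration toTarget(T : Duration)() { return Duration(ticks * 100); }
}

// Sketch of how a to!-like function could prefer the member conversion:
T convert(T, S)(S value)
{
    static if (is(typeof(value.toTarget!T()) == T))
        return value.toTarget!T();   // user-defined conversion
    else
        return to!T(value);          // fall back to std.conv.to
}

// usage: auto d = convert!Duration(TickDuration(42));
```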
Re: DIP11: Automatic downloading of libraries
On 20.06.2011 16:35, Dmitry Olshansky wrote: On 20.06.2011 15:35, Jacob Carlborg wrote: On 2011-06-20 10:59, Dmitry Olshansky wrote: On 20.06.2011 12:25, Jacob Carlborg wrote: On 2011-06-19 22:28, Dmitry Olshansky wrote: Why having name as run-time parameter? I'd expect more like (given there is Target struct or class): //somewhere at top Target cool_lib, ...; then: with(cool_lib) { flags = "-L-lz"; } I'd even expect special types like Executable, Library and so on. The user shouldn't have to create the necessary object. If it does, how would the tool get it then? If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of separate module, though errors in script would be harder to report (but at least static constructors would be controlled!). More adequatly would be, of course, to pump it to dmd from stdin... I had no idea that you could do that. It seems somewhat complicated and like a hack. Also note that Orbit is currently written in D1, which doesn't have __traits. Well, everything about compile-time introspection could be labeled like a hack. In fact I just seen the aforementioned "hack" on a much grander scale being used in upcoming std module, see std.benchmarking: https://github.com/D-Programming-Language/phobos/pull/85/files#L1R577 And personally hacks should look ugly or they are just features or at best shortcuts ;) Personal things aside I still suggest you to switch it to D2. I can understand if Phobos is just not up to snuff for you yet (btw cute curl wrapper is coming in a matter of days). 
But other than that... just look at all these candies ( opDispatch anyone? ) :) And even if porting is a piece of work, I suspect there are a lot of people out there that would love to help this project. (given the lofty goal that config would be written in D, and not Ruby) Just looked through the source, it seems like you are doing a lot of work that's already been done in Phobos, so it might be worth doing a port to D2. Some simple wrappers might be needed, but ultimately: util.traits --> std.traits core.array --> std.array + std.algorithm io.path --> std.file & std.path orgb.util.OptinoParser --> std.getopt util.singleton should probably be pulled into Phobos, but a thread-safe shared version. -- Dmitry Olshansky
Re: Yet another slap on the hand by implicit bool to int conversions
On Sun, 19 Jun 2011 19:42:22 -0400, bearophile wrote: Timon Gehr: Maybe DMD could warn on nonsense of the form x != x. Right. This thread is very good food for this enhancement request of mine: http://d.puremagic.com/issues/show_bug.cgi?id=5540 Vote for this enhancement :-) I don't think this is a good idea. Generic code could result in an error where there shouldn't be. For example: int foo(T)() { if(T.sizeof == char.sizeof) ... } Would this fail to compile where T == char? What about if T == ubyte? Generic programming sometimes results in silly code that is perfectly acceptable as generic code, and we need to take this into account before making decisions assuming a person is writing the code. -Steve
Re: DIP11: Automatic downloading of libraries
On 20.06.2011 15:35, Jacob Carlborg wrote: On 2011-06-20 10:59, Dmitry Olshansky wrote: On 20.06.2011 12:25, Jacob Carlborg wrote: On 2011-06-19 22:28, Dmitry Olshansky wrote: Why having name as run-time parameter? I'd expect more like (given there is Target struct or class): //somewhere at top Target cool_lib, ...; then: with(cool_lib) { flags = "-L-lz"; } I'd even expect special types like Executable, Library and so on. The user shouldn't have to create the necessary object. If it does, how would the tool get it then? If we settle on effectively evaluating orbspec like this: //first module module orb_orange; mixin(import ("orange.orbspec")); // // builder entry point void main() { foreach(member; __traits(allMembers, orb_orange)) { static if(typeof(member) == Target){ //do necessary actions, sort out priority and construct a worklist } else //static if (...) //...could be others I mentioned { } } //all the work goes there } Should be straightforward? Alternatively with local imports we can pack it in a struct instead of separate module, though errors in script would be harder to report (but at least static constructors would be controlled!). More adequatly would be, of course, to pump it to dmd from stdin... I had no idea that you could do that. It seems somewhat complicated and like a hack. Also note that Orbit is currently written in D1, which doesn't have __traits. Well, everything about compile-time introspection could be labeled like a hack. In fact I just seen the aforementioned "hack" on a much grander scale being used in upcoming std module, see std.benchmarking: https://github.com/D-Programming-Language/phobos/pull/85/files#L1R577 And personally hacks should look ugly or they are just features or at best shortcuts ;) Personal things aside I still suggest you to switch it to D2. I can understand if Phobos is just not up to snuff for you yet (btw cute curl wrapper is coming in a matter of days). But other then that... 
just look at all these candies ( opDispatch anyone? ) :) And even if porting is a piece of work, I suspect there are a lot of people out there that would love to help this project. (given the lofty goal that config would be written in D, and not Ruby) -- Dmitry Olshansky
Re: Article discussing Go, could well be D
On 2011-06-20 13:07, Russel Winder wrote: On Sun, 2011-06-19 at 21:19 +0200, Jacob Carlborg wrote: [ . . . ] When I first started thinking about Orbit I decided for source packages. The reason for this is that the developer only has to create one package and doesn't have to build the app/lib for all supported platforms when releasing a new version of the package (although it would be good to know that it works on all supported platforms). [ . . . ] OS-level package managers have this issue. Ports went for source and compiling as needed on the grounds that this is most flexible; Debian, Fedora, etc. went for binary on the grounds it is far, far easier for the users. I find that most of the time MacPorts is fine as long as you only own one computer, but for things like Boost, MacQt, etc. my machines take hours and hours to upgrade which really, really pisses me off. I find Debian packages far more straightforward, and furthermore binary packages can be cached locally so I only have to download once for all 4 machines I have. With source download I end up compiling twice, once for each Mac OS X machine. So overall source packages suck -- even though they are reputedly safer against security attacks. Ubuntu has introduced the idea of personal build farms, aka PPAs, which work very well. This handles creating packages for all the versions of Ubuntu still in support. Using something like Buildbot, which although supposedly a CI system can easily be "subverted" into being a package creation farm. I guess the question is really should the package manager be easy for developers or easy for users? If there are no packages because it is too hard for developers to package then no users either. If developers can do things easily, but it is hard for users, then no users so no point in creating packages. It's worth noting that there is a massive move in the Java arena to issue binary, source and documentation artefacts -- where originally only binary artefacts were released. 
This is for supporting IDEs. Clearly source-only packaging gets round this somewhat, but this means compilation on the user's machine during install, and that leads to suckiness -- see above for mild rant. Both source and binary packages have their weaknesses and advantages. If you have a package only available on one platform then binary packages would probably be the best. Maybe it's best to support both binary and source packages. You mention that Java packages are getting distributed with the sources as well to support IDEs. For D, compared with Java, you need to at least distribute import files (*.di) to be able to use libraries. -- /Jacob Carlborg
Re: Rename std.string.toStringz?
On Sun, 19 Jun 2011 09:20:17 -0400, Andrei Alexandrescu wrote: On 6/18/11 5:42 PM, Jonathan M Davis wrote: On 2011-06-18 06:35, Andrei Alexandrescu wrote: On 6/18/11 4:59 AM, Jonathan M Davis wrote: I'll look at renaming toUTF16z to toWStringz to match toStringz (as was suggested by a couple of people in this thread) That should be a template toUTFz that takes either char*, wchar*, or dchar*. A good point. Are you arguing that toStringz should be replaced by such a construct? Or that it should simply exist in addition to toStringz? Also, we _could_ make it so that such a template would take the mutability of the pointer as well (e.g. toUTF!(char*)(str), toUTF!(const(char)*), etc.), which would allow it to be used in cases where you actually want a mutable string (which toStringz doesn't do). - Jonathan M Davis I think that's a good idea, which would address that StackOverflow problem too. The way I'd probably suggest we go about it is as a universal transcoder. Define std.conv.to with strings of any width and qualification as input and with pointers to characters of any width as output. It is implied that the conversion entails adding a terminating zero. string a = "hello"; auto p = to!(wchar*)(a); // change width and qualifier I don't like relying on an implication that a zero character is added. A char * pointer may or may not be zero terminated (that is one of the issues with C), so you can't really designate a type to mean "zero terminated". The name (whatever it is) should indicate that a zero terminator is added. Simply because someone could see: string a = "hello"; auto p = to!(const(char)*)(a); and think "hm.. what a waste! I'll just change that to a.ptr," not realizing the harm he is doing (and this might actually pass unit tests too!). I like toUTFz. Along with aliases for toStringz, toWStringz, and toDStringz. -Steve
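For concreteness, a toUTFz along the lines proposed above might look roughly like this sketch built on std.utf's transcoding functions; the real signature and the mutability handling would need more care, so treat the details as assumptions:

```d
import std.utf : toUTF8, toUTF16, toUTF32;

// P is the desired zero-terminated pointer type, e.g. const(wchar)*.
P toUTFz(P, S)(S str)
{
    static if (is(P : const(char)*))
        auto buf = toUTF8(str) ~ '\0';
    else static if (is(P : const(wchar)*))
        auto buf = toUTF16(str) ~ '\0';
    else static if (is(P : const(dchar)*))
        auto buf = toUTF32(str) ~ '\0';
    else
        static assert(0, "unsupported target pointer type");

    return cast(P) buf.ptr;  // the name says it: the terminator is explicit
}

// usage: auto p = toUTFz!(const(wchar)*)("hello");
```

This always copies; a production version would presumably avoid the copy when the input is already in the target encoding and known to be zero-terminated.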
Re: Is there a standard way to define to for user-defined types?
I have tried a conversion test. import std.conv; void main() { tests1(); tests2(); testc1(); testc2(); } struct SS1{ ST1 opCast(T:ST1)(){ return ST1(); } } struct ST1{ } void tests1() { SS1 s1; auto t1a = cast(ST1)(s1); // -> s1.opCast!ST1() // auto t1b = to!ST1(s1); // NG } struct SS2{ } struct ST2{ this(SS2 source){} } void tests2() { SS2 s2; auto t2a = cast(ST2)(s2); // -> ST2(s2) == ctor call // auto t2b = to!ST2(s2); // NG } class CS1{ CT1 opCast(T:CT1)(){ return new CT1(); } } class CT1{ } void testc1() { CS1 s1 = new CS1(); auto t1a = cast(CT1)(s1); // -> s1.opCast!CT1() auto t1b = to!CT1(s1); // T toImpl(T, S)(S value) if (is(S : Object) && is(T : Object)) // -> cast(CT1)(s1) // -> s1.opCast!CT1() } class CS2{ } class CT2{ static CT2 opCall(CS2 source){ return new CT2(); } } //class CT2{ this(CS2 source){} } // Unfortunately, ctor is not called by CastExp. void testc2() { CS2 s2 = new CS2(); auto t2a = cast(CT2)(s2); // -> CT2(s2) == CT2.opCall(s2) // auto t2b = to!CT2(s2); // compiled, but runtime error occurs // T toImpl(T, S)(S value) if (is(S : Object) && is(T : Object)) // -> cast(CT1)(s1) // -> null } Some of my thoughts: 1. std.conv.to should support built-in casting behavior, at least on struct objects. This feature was recently fixed. See bug5897. 2. Having std.conv.to use a to!T() member function is an unnecessary feature, because it can be replaced by opCast!T. 3. For class objects, I guess there is a need to support a conversion interface that intrudes into the target type. I think a 'conversion constructor call' is a good fit for it. Kenji
Re: Article discussing Go, could well be D
Jacob Carlborg wrote: >On 2011-06-20 10:46, Johannes Pfau wrote: >> Jacob Carlborg wrote: >>> [...] >>> I hope this explains most of the things and I'm sorry for any >>> confusion I may have caused. >> >> Thanks for that detailed explanation, I think I understand. This >> system also seems more flexible than the traditional 'this directory >> will be the root of the package, copy all files to be packaged into >> this directory' >> >>> Ok, I think I understand so far. I was thinking something similar. >>> But is a four digit version really necessary? >> >> I thought of variable length version numbers, this is what most >> package management systems use. What's wrong with variable length >> versions? Look at 'compareBaseVer' in the source linked later for an >> example of how to compare such versions. >Currently I have the three-part-version as default and then a custom >version (which basically can contain anything). The reason for the >three-part-version scheme is explained in the wiki. So it's to have defined semantics for version changes, to standardize things like API breakage. I think this makes sense; although it forces a special versioning scheme on users, it might be worth it. >>> This got quite complex. When I was thinking about SCM integration I >>> was thinking that you would only specify the address of the repository, >>> which will mean the latest commit on the main branch. Then you could >>> also specify tags, branches and perhaps specific commits. But you >>> could never specify, for example, a release (or commit) newer than >>> another commit. This wouldn't work: >>> >>> orb "dwt", "~> 0.3.4", :git => >>> "git://github.com/jacob-carlborg/libjson.git" >>> >>> I see now that I've specified a version in the git example on the >>> wiki. This was a mistake, I removed the version now. >>> >> >> I think we look at 2 different approaches here: If I understood >> correctly you want to allow the _user_ to grab the latest git >> version. 
Whenever he wants to update, he has to do that manually. He >> also always downloads the source code and compiles it on his machine >> (no binary git packages). > >Yes. It probably comes down to the question of whether binary 'git' packages are worth the effort. I only know Linux distribution package management systems where it's common to package snapshots. But it might be overkill for a package system dealing mostly with libraries. > >> My approach lets the _packager_ create git packages. From these >> source packages binary packages can be built and distributed to end >> users like any other package (release, prerelease). Snapshots are >> 'first class packages', which means everything working with releases >> and other packages will also work with snapshots. >> The downside of this approach is that it complicates things a lot. It >> needs a versioning scheme capable of sorting snapshots, releases and >> prereleases reliably. >> >> Here's some proof of concept code: >> https://gist.github.com/1035294 >> 200 LOC for a versioning scheme seems to be a lot though. > >I don't think I understand your approach. > It might really be overkill. But consider this example: package FOO requires libjson >= 0.0.1 as a dynamic library. Package BAR requires the latest libjson from git as a dynamic library. Now FOO could use libjson-git, but how does the package manager know that? It cannot know whether the git version is more recent than 0.0.1. It's also not possible to install both libraries at the same time, as both are dynamic libraries with the same name. We now have a conflict where you can only install FOO or BAR, but not both. -- Johannes Pfau
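The linked gist is the authoritative code; the following is only a rough sketch in the spirit of the compareBaseVer mentioned above (the name is reused for illustration, and the exact semantics are assumptions): numeric components are compared left to right, and missing trailing components count as zero.

```d
import std.algorithm : max;
import std.array : split;
import std.conv : to;

// Sketch only (see the gist above for the real code): compare
// dotted numeric versions of arbitrary length.
int compareBaseVer(string a, string b)
{
    auto as = a.split(".");
    auto bs = b.split(".");
    foreach (i; 0 .. max(as.length, bs.length))
    {
        // missing trailing components are treated as 0, so "1.0" == "1.0.0"
        immutable x = i < as.length ? to!int(as[i]) : 0;
        immutable y = i < bs.length ? to!int(bs[i]) : 0;
        if (x != y)
            return x < y ? -1 : 1;
    }
    return 0;
}

void main()
{
    assert(compareBaseVer("0.3.4", "0.3.10") < 0); // numeric, not lexicographic
    assert(compareBaseVer("1.0", "1.0.0") == 0);
    assert(compareBaseVer("2.0", "1.9.9") > 0);
}
```

Note that this base comparison says nothing about where a git snapshot sorts relative to 0.0.1, which is exactly the FOO/BAR conflict described above and why the full scheme sorting snapshots, prereleases and releases runs to ~200 LOC.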
missing: __traits(isPublic (private, etc...)
I am working on a reflection / introspection library (dynamic / at runtime, as opposed to compile time with templates and mixins). Now I can create "PropertyInfo" classes for properties with very little code and it all works well. I'm hitting a problem trying to "register" all properties of my class automatically. I have a vanilla implementation like this:

void RegisterProperties(T)()
{
    foreach (mn; __traits(derivedMembers, T))
        GetProperty!(T, mn);
}

The problem is, it's trying to register private properties! (and of course fails, due to access rights...) So far I have a test like:

static if (mixin("__traits(compiles, t." ~ memberName ~ ")"))
{
    getter = &GETTER!(T, memberName);
}

which tests that the thing is a property. But how could I test that it's a ***public*** property? If that's not possible, wouldn't it be a nice trait addition?
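A minimal self-contained sketch of the __traits(compiles, ...) workaround described above (names like accessibleMembers are illustrative assumptions, and T.init stands in for an instance). One caveat: since private in D is module-level, the filter only takes effect when the registration code lives in a different module than the type.

```d
// Sketch: keep only the member names whose access compiles from this
// module. Across module boundaries this filters out private members;
// within the declaring module, private members still pass the test.
string[] accessibleMembers(T)()
{
    string[] names;
    foreach (mn; __traits(derivedMembers, T))
    {
        static if (mixin("__traits(compiles, T.init." ~ mn ~ ")"))
            names ~= mn;
    }
    return names;
}

struct Example
{
    int visible;
}

void main()
{
    assert(accessibleMembers!Example() == ["visible"]);
}
```

This is a filter by accessibility, not a true "is it declared public?" query, which is why a dedicated trait would still be a useful addition.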
Re: Article discussing Go, could well be D
Daniel Gibson wrote: >On 20.06.2011 10:52, Johannes Pfau wrote: >> Jacob Carlborg wrote: >>> On 2011-06-19 21:59, Jose Armando Garcia wrote: On Sun, Jun 19, 2011 at 4:19 PM, Jacob Carlborg wrote: > On 2011-06-19 19:02, Johannes Pfau wrote: >> >> I still don't understand that completely. So does it list the >> files which will be contained in the package later, or file >> dependencies contained in other packages? >> (I'm asking because I'm not familiar >> with file-dependencies in package management systems. Most >> package management systems make a package depend on other >> packages, but not on the files in the packages) > > Ok, let me explain. When developing a package management system > the first thing one has to decide is whether the package should contain > pre-built binaries/libraries, we can call these binary packages, > or the necessary files to build the package when installing, we > can call these source packages (not to be confused with the source > type you've mentioned below). As a third option, one could have a > mixed package system containing both binary and source packages. > Maybe even mixed packages could be possible. Why decide on a "file" package? This only works with packages that can be compiled. Think non-D source code packages and closed-source packages. Even one of the most commonly known "file" package managers (Gentoo's Portage) allows for binary packages. >>> >>> I guess we could have a mixed system, with both source and binary >>> packages. >> >> Definitely. Standardised source packages allow automated binary >> package building, even for different architectures. Users should >> also be able to make small changes to source packages and create >> their own binary packages easily. Source packages only wouldn't work >> either, think of users on embedded systems. Compiling everything on >> a machine with 16 MB of RAM and a 200 MHz CPU isn't fun. Also binary packages >> are quite convenient. >> > >1. 
Will you develop or compile your own software (that uses software >from the package manager) on the embedded system? I guess it's more >common to develop the software on a PC or whatever and upload it to the >embedded system. Maybe I misunderstood something, but I thought Orbit would also manage shared libraries once they are supported by the D compilers. Even on resource-limited embedded systems it's likely that a library is needed by more than one program, so it can't really be shipped with the program. Static libraries, documentation and D headers are not needed on these platforms. Of course package managers for embedded systems (something like OpenEmbedded) can be used, but then all libraries have to be packaged again into a different package format. >2. Will an embedded system with such restricted resources have an x86 >arch - or will it more likely be ARM or even something completely >different? Should there be binaries available for any architecture >(that's hard, because most developers probably only have x86/amd64)? >If not, you'd have to compile yourself anyway. >(And of course we need a working compiler for that architecture first) ARM, MIPS (popular in internet routers), SH4 (set-top boxes), PPC. Ideally we'd have a package build system like Launchpad: the developer (or packager) creates a source package and uploads it to the build service, the build service transfers the source package to buildbot machines, and those build binary packages for different architectures. The binary packages are then added to a repository. We won't have something like that from the beginning, but in a few years such a build service might be useful. >Cheers, >- Daniel -- Johannes Pfau
Re: Article discussing Go, could well be D
Russel Winder wrote: >On Sun, 2011-06-19 at 21:19 +0200, Jacob Carlborg wrote: >[ . . . ] >> When I first started thinking about Orbit I decided for source >> packages. The reason for this is that the developer only has to >> create one package and doesn't have to build the app/lib for all >> supported platforms when releasing a new version of the package >> (although it would be good to know that it works on all supported >> platforms). >[ . . . ] > >OS-level package managers have this issue. Ports went for source and >compiling as needed on the grounds that this is most flexible; Debian, >Fedora, etc. went for binary on the grounds it is far, far easier for >the users. > >I find that most of the time MacPorts is fine as long as you only own >one computer, but for things like Boost, MacQt, etc. my machines take >hours and hours to upgrade, which really, really pisses me off. I find >Debian packages far more straightforward, and furthermore binary packages >can be cached locally so I only have to download once for all 4 >machines I have. With source download I end up compiling twice, once >for each Mac OS X machine. So overall source packages suck -- even >though they are reputedly safer against security attacks. > >Ubuntu has introduced the idea of personal build farms, aka PPAs, which >work very well. This handles creating packages for all the versions of >Ubuntu still in support. Using something like Buildbot, which although >supposedly a CI system can easily be "subverted" into being a package >creation farm. > >I guess the question is really: should the package manager be easy for >developers or easy for users? If there are no packages because it is >too hard for developers to package, then no users either. If developers >can do things easily, but it is hard for users, then no users, so no >point in creating packages. 
> >It's worth noting that there is a massive move in the Java arena to issue >binary, source and documentation artefacts -- where originally only >binary artefacts were released. This is for supporting IDEs. Clearly >source-only packaging gets round this somewhat, but this means >compilation on the user's machine during install, and that leads to >suckiness -- see above for mild rant. > It's possible to combine binary and source packages. Archlinux does that: by default you install prebuilt binary packages, but you can specify that you want to build certain packages yourself. Archlinux also has a huge repository of source-only packages which always need to be built by the end user. AFAIK this system works quite well. -- Johannes Pfau