Re: We need to rethink remove in std.container
On Monday 21 February 2011 22:51:25 %u wrote:
> Hm... so the entire issue here seems to be the capability to do iteration
> and modification in a concurrent manner, right?
>
> IMHO that may not be worth the costs we're paying -- I would argue that you
> normally shouldn't be modifying a collection that you're iterating over in
> the first place; it just doesn't make any sense when you're delaying
> everything. Sure, it would work fine if you couldn't have more than one
> reference to any container (since every range would then have the same view
> of the container), but when we're introducing delayed evaluation with
> ranges, I just don't see how (or even why) it should be combinable with
> modification. It's like trying to read from a database and concurrently
> modifying the underlying storage by hand, bypassing the database... it just
> doesn't work that way, outside of functional programming. So is it really
> worth paying the price?

_Of course_, it's worth it. Do you never alter anything in a range? Functions like sort need to be able to alter the elements that the range refers to. Not to mention, there's generally no way to alter elements in a container other than through a range to it, unless you remove them and add new ones to replace them. The one exception would be if the container is random access, in which case you can use the subscript operator to get at its elements. Ranges definitely need to be able to alter the elements that they contain. Sure, there are plenty of times when you don't need to do that, but there are times when you definitely do. And really, the problem here isn't the concept of ranges - it's their implementation. The fact that you can't get the right type of range with the right elements in it is the problem. Doing something like building the take functionality into forward ranges would fix that problem. - Jonathan M Davis
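To make the first point above concrete, here is a minimal D sketch (illustrative only, not from the thread) of sort mutating the elements its range refers to:

```d
import std.algorithm : sort;

void main()
{
    int[] a = [3, 1, 2];
    // sort operates through the range interface and mutates the
    // underlying elements in place
    sort(a);
    assert(a == [1, 2, 3]);
}
```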
Re: We need to rethink remove in std.container
Hm... so the entire issue here seems to be the capability to do iteration and modification in a concurrent manner, right? IMHO that may not be worth the costs we're paying -- I would argue that you normally shouldn't be modifying a collection that you're iterating over in the first place; it just doesn't make any sense when you're delaying everything. Sure, it would work fine if you couldn't have more than one reference to any container (since every range would then have the same view of the container), but when we're introducing delayed evaluation with ranges, I just don't see how (or even why) it should be combinable with modification. It's like trying to read from a database and concurrently modifying the underlying storage by hand, bypassing the database... it just doesn't work that way, outside of functional programming. So is it really worth paying the price?
Re: Uh... destructors?
> dmd is pretty lax about attributes which don't apply. It generally just
> ignores them. Personally, I think that it should error on invalid
> attributes, but for some reason, that's not how it works. Of course, there
> could be other bugs in play here, but there's every possibility that the
> end result is completely valid.

Well, the trouble is, pretty much all of these are invalid attributes:
- const and pure make no sense, since destructors (should) change an object's state
- override and final make no sense, since destructors obviously aren't ever overridden... they're always called after the subclass's destructor is called
- static obviously makes no sense
- synchronized is meaningless since there's only one thread ever running the destructor anyway
- private makes no sense since (unless we're trying to imitate C++ here) destructors are only called from the runtime, and nowhere else
- The only meaningful attribute there is extern(C).

I would agree that DMD should ignore the attributes that are redundant or optional (e.g. it should be okay for "static" to be written and/or omitted at the module level, and DMD should ignore it) but I don't see why _wrong_ attributes should be ignored... it confuses the programmer, opens the potential for error, and doesn't have any benefits. (The only exception I can think of to this rule would be attributes that cannot be removed, like saying "private:" in the beginning... for general attributes like those, I guess DMD can ignore them, but for specifically written attributes like these, is there any benefit whatsoever to allowing them?) Thanks!
Re: We need to rethink remove in std.container
On Monday 21 February 2011 21:51:51 %u wrote:
> You know, I'm actually now questioning whether STL did the right thing with
> requiring iterators for the erase() method. It actually seems quite
> pointless -- there's no reason why integer indices wouldn't work. I don't
> think we really need to follow suit with STL here... is there some
> situation with erase() that I'm missing that can't be covered with plain
> integer (or rather, size_t) indices, and that requires ranges?

Some of them do take indices, but you can't generally operate on indices. To do that, you need the original container. But you can operate on an iterator just fine. So, it's generally easier to deal with iterators. However, the big thing would be that it actually doesn't make sense to use indices in many cases. Think about RedBlackTree for a moment. If you remove something from it or add something to it, that doesn't invalidate any range that you have which points to it (assuming that the end points haven't changed). You can still take that range and use it in any algorithm that you want - including remove (assuming that the algorithm takes the appropriate kind of range of course), and it will work. But using indices wouldn't work. They would have changed. Adding or removing anything to a container at or before an index that you have would make it so that that index no longer points to the same element. However, for many containers (though not all), any iterators or ranges would still be valid. So, on the whole, using iterators or ranges makes great sense. I have no problem with how the STL does that. The problem is transitioning that to ranges. The STL doesn't go and change your iterator types on you when you feed them to algorithms, but std.range and std.algorithm typically do. So, the result is sub-optimal in this case, to say the least. - Jonathan M Davis
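A small D illustration of the index-invalidation point above, using an array for simplicity (the same reasoning applies all the more to node-based containers):

```d
import std.algorithm : remove;

void main()
{
    int[] a = [10, 20, 30, 40];
    size_t idx = 2;        // refers to the element 30
    a = a.remove(0);       // remove the first element
    // the saved index now refers to a different element entirely
    assert(a == [20, 30, 40]);
    assert(a[idx] == 40);  // no longer 30
}
```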
Re: float equality
Andrei Alexandrescu wrote:
> I worked on Wall Street and have friends who have been doing it for years.
> Everybody uses double.

Makes sense. Not many people on earth have more than 4,503,599,627,370,496 pennies lying about. -- Simen
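Simen's number is 2^52; in fact a double stores integers exactly all the way up to 2^53, after which adjacent integers start to collapse. A quick D check:

```d
void main()
{
    double d = 9_007_199_254_740_992.0;  // 2^53
    assert(d - 1 != d);  // below 2^53, integers are still exact
    assert(d + 1 == d);  // 2^53 + 1 is not representable as a double
}
```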
Re: We need to rethink remove in std.container
You know, I'm actually now questioning whether STL did the right thing with requiring iterators for the erase() method. It actually seems quite pointless -- there's no reason why integer indices wouldn't work. I don't think we really need to follow suit with STL here... is there some situation with erase() that I'm missing that can't be covered with plain integer (or rather, size_t) indices, and that requires ranges?
Re: We need to rethink remove in std.container
> The more I look at it, the more I'm convinced that we really need to add a primitive to forward ranges that returns the first n elements of that range. Without that, I don't see how you can get a range of the correct type with only those elements in it unless it's also a bidirectional range, which some ranges (like SList's range) aren't. Huh, yeah I think I agree. It seems like ranges are just delayed evaluation, and that, at some point, we need to be able to force their evaluation -- otherwise, it's like combining Scheme's (delay X) and (force Y) operations with mutation, which just doesn't work sensibly in non-functional programming.
Re: We need to rethink remove in std.container
On Monday 21 February 2011 21:04:47 %u wrote:
> > remove takes a range type which is the range type for the container that
> > it's on. That makes sense. It's obviously not going to be able to take an
> > arbitrary range and remove that from itself. How would it know which
> > elements in that range corresponded to which elements in itself -
> > especially when it could be a range which skips elements or something
> > similar? So, _that_ part makes sense.
>
> I'm wondering, what was the reason for making a "remove range X from Y"
> method in the first place? To me, that's not a "remove" operation; if
> anything, it's the "removeAll" operation, but I'd actually call it
> "subtract" (because it's the set subtraction operation... although there
> can be duplicates).
>
> If we change remove to only take in a single element, wouldn't its
> implementation then be trivial? (For a range, just return a filter that
> removes the element. For a container, actually remove the element. For
> ranges that also behave like containers -- like arrays -- I have no idea.)
>
> Does this sound like a good idea?

Removing a range is not necessarily related to removing an element with a particular value. Sure, a method could be added to the various container types which takes a value and removes the first instance of that value from the container, but that's not what remove is necessarily trying to do. It's like erase in C++'s standard library which takes two iterators. There are plenty of situations where removing a range makes sense. The odd bit with remove in Phobos vs erase in the STL is that with iterators, you can give a single iterator to indicate a single element whereas you can't do that with a range. So, remove takes a range, and if you want to remove a single element, that range only has one element in it. It's a bit weird, but that's fine. The typical way to remove an element in the STL is to use find to find an element and then erase to remove it.
remove in std.container is doing the same thing. The problem is that you can't give the result of find to remove, because instead of a single iterator, find gives you a whole range, and you probably don't want to remove that whole range. You generally either want to remove the first element or some set of elements at the front of the range. So, you need to be able to remove the other elements from the range so that you can give that range to the remove method on the container. That's fine in principle, but since so many of the range-based algorithms give you a new type back and you need the exact type that that container uses for its ranges, it's very hard to get a range of the correct type with the correct elements in it. So, while find will give you the beginning of the range you want, there's not an easy way to remove the end of that range. Functions like take and takeExactly return new types, so they don't work. The more I look at it, the more I'm convinced that we really need to add a primitive to forward ranges that returns the first n elements of that range. Without that, I don't see how you can get a range of the correct type with only those elements in it unless it's also a bidirectional range, which some ranges (like SList's range) aren't. - Jonathan M Davis
Re: Uh... destructors?
On Monday 21 February 2011 20:46:56 %u wrote:
> Hi,
>
> I'm just curious... why is saying something like this:
>
> extern(C)
> private static const pure override final synchronized ~this() { }
>
> allowed?

dmd is pretty lax about attributes which don't apply. It generally just ignores them. Personally, I think that it should error on invalid attributes, but for some reason, that's not how it works. Of course, there could be other bugs in play here, but there's every possibility that the end result is completely valid. - Jonathan M Davis
Re: We need to rethink remove in std.container
> remove takes a range type which is the range type for the container that
> it's on. That makes sense. It's obviously not going to be able to take an
> arbitrary range and remove that from itself. How would it know which
> elements in that range corresponded to which elements in itself -
> especially when it could be a range which skips elements or something
> similar? So, _that_ part makes sense.

I'm wondering, what was the reason for making a "remove range X from Y" method in the first place? To me, that's not a "remove" operation; if anything, it's the "removeAll" operation, but I'd actually call it "subtract" (because it's the set subtraction operation... although there can be duplicates). If we change remove to only take in a single element, wouldn't its implementation then be trivial? (For a range, just return a filter that removes the element. For a container, actually remove the element. For ranges that also behave like containers -- like arrays -- I have no idea.) Does this sound like a good idea?
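The "return a filter" idea above can be sketched in D with std.algorithm.filter; note that filter lazily drops every matching element, not just the first:

```d
import std.algorithm : equal, filter;

void main()
{
    auto a = [1, 2, 3, 2, 4];
    // lazily "remove" the value 2 -- the container itself is untouched
    auto withoutTwos = a.filter!(x => x != 2);
    assert(equal(withoutTwos, [1, 3, 4]));
    assert(a == [1, 2, 3, 2, 4]);
}
```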
Re: Linking COFF and OMF
>> That's pretty good. Almost all of those things are standard C.
>> LDIV and UDIV could easily be eliminated.
>> __except_list is a null asm label (it is FS:[0]).
>>
>> So the main problematic ones are:
>> _xi_a , __acrtused_con, the __fp functions, and _Ccmp
> So how to tackle that?

I'm really glad that this issue is being looked into. I've literally wasted days (if not a few weeks) getting an alternative to SNN.lib to work, and I think that the ultimate culprit that prevented things from working was _xi_a. I couldn't figure out how to solve it; I'm not sure if it was the root cause, but I think it interfered with thread-static data and/or the garbage collector. What does _xi_a even do? Is it anything more than just a marker inside the executable?
Uh... destructors?
Hi, I'm just curious... why is saying something like this: extern(C) private static const pure override final synchronized ~this() { } allowed? Thanks!
Re: Uh... destructors?
On 2/21/2011 11:46 PM, %u wrote:
> Hi, I'm just curious... why is saying something like this:
>
> extern(C)
> private static const pure override final synchronized ~this() { }
>
> allowed? Thanks!

...And when's the last time you needed one?
Re: O(N) GC: The patch
On 2/20/2011 8:05 PM, Jason House wrote:
> Sounds promising. How does it affect other cases? Some typical GC-heavy
> benchmark? Lots of smaller no-scan objects that are just under your
> optimization threshold?

I posted some new benchmarks that are more like realistic workloads to the original bug report. The first benchmark I posted was admittedly a corner case. These new ones are more like typical scientific computing/large object allocation heavy cases. Also note that the effect of the patch will be magnified in a multithreaded program because more efficient GC/allocation means that there will be less bottlenecking on malloc() and less time with the world stopped. As a reminder, the report is at: http://d.puremagic.com/issues/show_bug.cgi?id=5623
about the GUI issue.
Sorry, maybe this question is not so "proper", but I really want to get some feedback, since there are many D experts here. Previously, if you wanted to build a GUI with D using something like GTK+, you could use the "gtkD" binding. http://www.dsource.org/projects/gtkd Now that the milestone GTK+ 3.0 release is formally out, a "GtkApplication" class has been introduced so that languages other than C can use GTK+ 3.0 components and GUIs. http://library.gnome.org/devel/gtk3/3.0/index.html http://library.gnome.org/devel/gtk3/3.0/GtkApplication.html So, I want to know whether D can use this "GtkApplication" - that is, whether we can use D to produce GUI-level applications based on GTK+ 3.0? Best regards. David.
Re: [OT] Round 2: Webpage design and "Name That Color!"
On Monday 21 February 2011 18:22:01 Nick Sabalausky wrote:
> "Nick Sabalausky" wrote in message news:ijpvpl$2l8u$1...@digitalmars.com...
> > I've been updating the docs for my Goldie project in preparation of a new
> > release, and figured that they looked a bit...sterile, so I've tweaked the
> > CSS a bit. And, well, I think I've stumbled upon a heisencolor...(or a
> > heisenhue, rather)
> >
> > Without reading any replies or "cheating" by inspecting the pixels in a
> > paint program, take a look at this screenshot:
> >
> > http://www.semitwist.com/download/goldie0.4docBeta.png
> >
> > ...and reply with what color you think the background looks like (the
> > main background, not the sidebar). And whether or not you like it would
> > be helpful, too, of course. And, strange as this may sound, reply again
> > if you end up changing your mind on what color it looks like.
>
> Thanks all for the comments! I've made a few more tweaks, put up two sample
> pages, and would like to get some opinions on if this now looks "good" or
> "acceptable" or "bad" (and maybe improvement suggestions for any "bad"
> votes):
>
> http://www.semitwist.com/goldie0.4docBeta2/index.html

acceptable

> http://www.semitwist.com/goldie0.4docBeta2/SampleApps/ParseAnything/index.html

bad

Overall, I think they both look fine, but I think that the yellow background for the sample command line output looks pretty bad. As bright as white may be, it probably would be a lot better. - Jonathan M Davis
We need to rethink remove in std.container
Okay, removing elements from a container sucks right now. You can do stuff like removeAny (generally pretty useless IMHO) or removeFront just fine, but removing an arbitrary range from a container just plain sucks. remove takes a range type which is the range type for the container that it's on. That makes sense. It's obviously not going to be able to take an arbitrary range and remove that from itself. How would it know which elements in that range corresponded to which elements in itself - especially when it could be a range which skips elements or something similar? So, _that_ part makes sense. But have you actually tried to get a range of the appropriate type to remove from a container? It seems like almost every function in std.range and std.algorithm returns a new range type, making them completely useless for processing a range to be removed from a container. I was looking to remove a single element from a RedBlackTree. The best function that I could think of to get the proper range was findSplit. The middle portion of the return value would be the portion that I was looking for. I'd take that and pass it to RedBlackTree's remove. Wrong. It uses takeExactly in its implementation and the first two portions of the result of findSplit aren't the right range type. So, what do I do? The _only_ thing that I can think of at the moment is to use find to locate the beginning of the range that I want, take the length of the range with walkLength and then use popBackN to pop off the elements I don't want. e.g.

==
import std.algorithm, std.container, std.conv, std.range, std.stdio;

void main()
{
    auto rbt = RedBlackTree!int([0, 2, 5, 12, 59, 22]);
    assert(to!string(rbt[]) == "[0, 2, 5, 12, 22, 59]");

    auto found = find(rbt[], 5);
    assert(to!string(found) == "[5, 12, 22, 59]");

    popBackN(found, walkLength(found) - 1);
    assert(to!string(found) == "[5]");

    rbt.remove(found);
    assert(to!string(rbt[]) == "[0, 2, 12, 22, 59]");
}
==

That's disgusting. All that just to remove one element?
And what if the range isn't bidirectional (e.g. SList)? Well, then you have no popBack, and as far as I can tell, you're screwed, since you can't use either take or takeExactly, because both of them return new range types. In particular, the fact that you can't take a range and create a new one of the same type from its first n elements is highly problematic. Maybe we need to add a function to ForwardRange which returns a new range with the first n elements (it certainly looks like that's the key element that's missing here). I don't know if that would be reasonable, but the fact that you can't create a range of the same type as the original range when taking just its first n elements seems crippling in this situation. I don't know what the proper solution to this is, but the current situation strikes me as untenable. I had to think through this problem for a while before I came to a solution that came even close to working, let alone get one that actually works. Removing elements from a container should _not_ be this hard. The situation with remove _needs_ to be improved. - Jonathan M Davis
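A sketch of the primitive being proposed, on a hand-rolled forward range; the name takeFirst is invented here for illustration (nothing like it existed in Phobos at the time):

```d
struct IntSlice
{
    int[] data;
    @property bool empty() const { return data.length == 0; }
    @property int front() { return data[0]; }
    void popFront() { data = data[1 .. $]; }
    @property IntSlice save() { return this; }

    // the proposed primitive: the first n elements, as the same range type
    IntSlice takeFirst(size_t n) { return IntSlice(data[0 .. n]); }
}

void main()
{
    auto r = IntSlice([5, 12, 22, 59]);
    // unlike take/takeExactly, this is still an IntSlice, so a container
    // whose range type is IntSlice could accept it directly in remove
    auto firstOne = r.takeFirst(1);
    assert(firstOne.front == 5);
    assert(!firstOne.empty);
}
```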
[OT] Round 2: Webpage design and "Name That Color!"
"Nick Sabalausky" wrote in message news:ijpvpl$2l8u$1...@digitalmars.com... > I've been updating the docs for my Goldie project in preparation of a new > release, and figured the they looked a bit...sterile, so I've tweaked the > CSS a bit. And, well, I think I've stumbled upon a heisencolor...(or a > heisenhue, rather) > > Without reading any replies or "cheating" by inspecting the pixels in a > paint program, take a look at this screenshot: > > http://www.semitwist.com/download/goldie0.4docBeta.png > > ...and reply with what color you think the background looks like (the main > background, not the > sidebar). And whether or not you like it would be helpful, too, of course. > And, strange as this may sound, reply again if you end up changing your > mind on what color it looks like. > Thanks all for the comments! I've made a few more tweaks, put up two sample pages, and would like to get some opinions on if this now looks "good" or "acceptable" or "bad" (and maybe improvement suggestions for any "bad" votes): http://www.semitwist.com/goldie0.4docBeta2/index.html http://www.semitwist.com/goldie0.4docBeta2/SampleApps/ParseAnything/index.html (Most of the links are broken ATM, I know. And FWIW, "beige" is what I was trying to go for with the background.) FWIW, the old v0.3 documentation is here: http://www.semitwist.com/goldiedocs/current/Docs/ I want to at least make sure that the 0.4 docs are an improvement on that.
TDPL sample chapter unit tests
I just noticed that the TDPL sample chapter at http://erdani.com/d/thermopylae.pdf still contains a lot of warnings concerning failed unit tests, which have been fixed since the sample chapter was published. This could leave a bad impression on people interested in D. Is there any chance to update it with a current version, Andrei? David
Re: float equality
On Monday 21 February 2011 15:58:03 Andrei Alexandrescu wrote: > On 2/21/11 4:48 AM, Jonathan M Davis wrote: > > On Monday 21 February 2011 01:55:28 Walter Bright wrote: > >> Kevin Bealer wrote: > >>> 1. To solve the basic problem the original poster was asking -- if you > >>> are working with simple decimals and arithmetic you can get completely > >>> accurate representations this way. For some cases like simple > >>> financial work this might work really well. e.g. where float would not > >>> be because of the slow leak of information with each operation. (I > >>> assume real professional financial work is already done using a > >>> (better) > >>> representation.) > >> > >> A reasonable way to do financial work is to use longs to represent > >> pennies. After all, you don't have fractional cents in your accounts. > >> > >> Using floating point to represent money is a disaster in the making. > > > > Actually, depending on what you're doing, I'm not sure that you can > > legally represent money with floating point values. As I understand it, > > there are definite restrictions on banking software and the like with > > regards to that sort of thing (though I don't know exactly what they > > are). > > This is a long-standing myth. I worked on Wall Street and have friends > who have been doing it for years. Everybody uses double. Hmm. Good to know. I do find that a bit scary though. I wonder what the level of error is in doing that and how much it affects your typical monetary calculation. - Jonathan M Davis
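The size of the error being wondered about above is easy to see in D: decimal fractions like 0.1 have no exact binary representation, so errors on the order of one part in 2^53 creep in per operation:

```d
import std.math : abs;

void main()
{
    double sum = 0.1 + 0.2;
    assert(sum != 0.3);              // exact equality already fails
    assert(abs(sum - 0.3) < 1e-15);  // but the error is tiny (~5.5e-17)
}
```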
Re: float equality
On 2/21/11 6:08 PM, bearophile wrote: Andrei Alexandrescu: This is a long-standing myth. I worked on Wall Street and have friends who have been doing it for years. Everybody uses double. Unbrutal Python programmers are encouraged to avoid the float type to manage money values, and use decimal instead: http://docs.python.org/library/decimal.html Bye, bearophile ... and nobody uses Python :o). Andrei
Re: float equality
Andrei Alexandrescu: > This is a long-standing myth. I worked on Wall Street and have friends > who have been doing it for years. Everybody uses double. Unbrutal Python programmers are encouraged to avoid the float type to manage money values, and use decimal instead: http://docs.python.org/library/decimal.html Bye, bearophile
Re: float equality
On 2/21/11 4:48 AM, Jonathan M Davis wrote: On Monday 21 February 2011 01:55:28 Walter Bright wrote: Kevin Bealer wrote: 1. To solve the basic problem the original poster was asking -- if you are working with simple decimals and arithmetic you can get completely accurate representations this way. For some cases like simple financial work this might work really well. e.g. where float would not be because of the slow leak of information with each operation. (I assume real professional financial work is already done using a (better) representation.) A reasonable way to do financial work is to use longs to represent pennies. After all, you don't have fractional cents in your accounts. Using floating point to represent money is a disaster in the making. Actually, depending on what you're doing, I'm not sure that you can legally represent money with floating point values. As I understand it, there are definite restrictions on banking software and the like with regards to that sort of thing (though I don't know exactly what they are). This is a long-standing myth. I worked on Wall Street and have friends who have been doing it for years. Everybody uses double. Andrei
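Walter's longs-as-pennies scheme from the quoted text, sketched in D (the amounts and variable names are made up for illustration):

```d
void main()
{
    long priceCents = 19_99;  // $19.99 stored as an integral count of cents
    long quantity = 3;
    long totalCents = priceCents * quantity;
    // integer arithmetic: exact, no rounding drift
    assert(totalCents == 59_97);
}
```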
Re: Linking COFF and OMF
That's pretty good. Almost all of those things are standard C. LDIV and UDIV could easily be eliminated. __except_list is a null asm label (it is FS:[0]). So the main problematic ones are: _xi_a , __acrtused_con, the __fp functions, and _Ccmp So how to tackle that?
Re: float equality
so:
> Why not?

A system language has to allow the programmer to do everything the hardware is capable of doing, including comparing floats in the usual way. But in general for a well designed language it's better to clearly denote dangerous/unsafe/bad things, and to make the safer routes the first ones you usually try.

> where does it stop?

I don't know, I am here to learn this too. You have languages like Assembly, SPARK, ATS. They are successful in their niche and for their purposes. They are at the (current) opposite extremes for safety, control, freedom, power, performance, etc. In the area that has those three languages as vertices (and there are many more possible vertices to choose from) there is a lot of design space for good languages. Bye, bearophile
Re: float equality
> > If one doesn't know what floating point is and insists on using it, it
> > is his own responsibility to face the consequences.
> I don't buy this argument.

Why not? A logical flaw on my part or the statement being somewhat harsh? Because I don't think it is the former, I will give an example for the latter. I am a self-taught programmer and I too made big mistakes when using FP; probably I still do, since it is a strange beast to deal with even if you know all about it. For this reason it is somewhat understandable for people like me to fall into this kind of trap, but can we say the same thing for others? Do they have this excuse? Not knowing the fundamental thing about FP and using it anyway?

> The little proposal I was thinking about is:
> 1) Turn the "==" operator into a compile-time syntax error if one or both
> operands are floating point values.
>     if (x == y) {   ==> syntax error
> 2) The semantics of "==" is now done by the "is" operator. So if you want
> exactly this C code:
>     if (x == y) {
> you use:
>     if (x is y) {
> 3) Then another built-in function or semantics is added, that is similar
> to:
>     some_syntax(x, y, an_approximation_level)
> It takes three arguments: x, y and a number of bits.
> Turning the == into a syntax error is done to remind programmers that ==
> among FP values is tricky, and the "is" operator is added because there
> are some situations where you want an exact comparison and you know what
> you are doing. This was my idea. I know it doesn't improve the current
> situation a lot... so I don't expect this to be accepted. But I like to be
> understood :-)
> Bye, bearophile

One thing I fail to understand here is: yes, I agree "float == float" is probably a "code smell" and we can find a workaround for this. But I have to ask, where does it stop? I mean, I can find you pages and pages of illogical results created with crafted examples using not even floating point but integrals. If you want to make it easier, like the case in hand, ignore a fundamental rule and you get tons of examples.
All I am trying to say is: if we are not in a position to solve this huge problem with a syntax change, let's just keep it the way it is instead of introducing yet another inconsistency.
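For what it's worth, bearophile's third point above -- comparison to within a number of bits -- is close in spirit to std.math.feqrel, which returns the number of mantissa bits on which two values agree (the 40-bit threshold below is an arbitrary choice for the example):

```d
import std.math : feqrel;

void main()
{
    double x = 0.1 + 0.2;
    double y = 0.3;
    assert(x != y);              // exact comparison fails
    assert(feqrel(x, y) >= 40);  // but they agree to at least 40 bits
}
```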
Re: CMake for D2 ready for testers
On Mon, 21 Feb 2011 12:40:11 +0100 Jens Mueller wrote:
> I don't know about upstreaming it. Certainly it would be nice. But for
> doing so I need to polish it further.

OK.

> It seems that not many people are using CMakeD and there seems to be
> less interest in it.

I believe many are waiting for D(2) to consolidate a bit, the 64-bit port etc. At least, I'm one of them. :-)

> But I assume this is going to change once 64-bit is stable.
> If I find some time I will build the above word cloud example for
> 64-bit and report here, if that helps you.

I'll take a closer look at CMakeD tomorrow.

> Recently I've been a bit distracted from CMakeD development since I
> stumbled over
> Gyp
> http://code.google.com/p/gyp/
> and
> Premake
> http://industriousone.com/premake
> Both address similar needs like CMake but do not support D yet.

They look interesting, but I'm sure they cannot replace CMake and therefore hope D will become a 1st class citizen in the CMake country. Sincerely, Gour -- “In the material world, conceptions of good and bad are all mental speculations…” (Sri Caitanya Mahaprabhu) http://atmarama.net | Hlapicina (Croatia) | GPG: CDBF17CA
Re: toHash /opCmp for builtin-types
> sadly no: (-30) - (30) = -170532704 which is incorrect. It does however
> work for short/byte (and opCmp still returning int).

oops, wrong example. It is: (-2000000000) - (2000000000) = 294967296, sorry. Anyway, you see the point with overflows.
Re: toHash /opCmp for builtin-types
Am 21.02.2011 21:55, schrieb Simon Buerger:
> sadly no: (-30) - (30) = -170532704 which is incorrect. It does however
> work for short/byte (and opCmp still returning int).
>
> oops, wrong example. It is: (-2000000000) - (2000000000) = 294967296,
> sorry. Anyway, you see the point with overflows.

Hm yes, you're right :-)
Re: toHash /opCmp for builtin-types
On 21.02.2011 21:22, Daniel Gibson wrote:
> Am 21.02.2011 20:59, schrieb Simon Buerger:
> > Following came to my mind while coding some generic collection classes:
> > The toHash and opCmp operations are not supported for builtin-types
> > though their implementation is trivial.
> >
> > * toHash
> > The code is already there inside TypeInfo.getHash. But
> > typeid(value).getHash(&value) is much uglier than value.toHash. Note
> > that hashes make sense for integers (trivial implementation), not
> > necessarily for floats.
> >
> > * opCmp
> > Would be useful for delegating opCmp of a struct to one member.
> > Alternative: introduce a new operator which returns 1/0/-1 (Ruby does
> > this with "<=>"). Currently I end up writing:
> >
> >     int opCmp(...)
> >     {
> >         if(a > b) return +1;
> >         if(a == b) return 0;
> >         return -1;
> >     }
> >
> > which uses 2 comparisons where only 1 is needed (though the compiler
> > might notice it if the comparison is pure and so on). Furthermore it
> > might be a nice idea to have toString (or the future "writeTo") for
> > builtin-types. It would need some new code in the core-lib, but could
> > simplify generic programming. any thoughts? - Krox
>
> Well, opCmp() can be done more easily, at least for ints:
>
>     int opCmp(...) { return a - b; }

sadly no: (-30) - (30) = -170532704 which is incorrect. It does however work for short/byte (and opCmp still returning int).

> For floats.. well, if you don't want/need any tolerance this would work as
> well, else it'd be more difficult.

I'm not sure if it makes sense for floats. Furthermore, does TypeInfo.getHash support them?

> A <=> operator would be neat, though.
Re: toHash /opCmp for builtin-types
Am 21.02.2011 20:59, schrieb Simon Buerger: Following came to my mind while coding some generic collection classes: The toHash and opCmp operations are not supported for builtin-types though their implementation is trivial. * toHash The code is already there inside TypeInfo.getHash. But typeid(value).getHash(&value) is much uglier than value.toHash. Note that hashes make sense for integer (trivial implementation), not necessarily for floats. * opCmp Would be useful for delegating opCmp of a struct to one member. Alternative: Introduce new operator which returns 1/0/-1 (ruby does this with "<=>"). Currently I end up writing: int opCmp(...) { if(a>b) return +1; if(a==b) return 0; return -1; } which uses 2 comparisons where only 1 is needed (though the compiler might notice it if comparison is pure and so on). Furthermore it might be a nice idea to have toString (or the future "writeTo") for builtin-types. It would need some new code in the core-lib, but could simplify generic programming. any thoughts? - Krox Well, opCmp() can be done easier, at least for ints: int opCmp(...) { return a-b; } For floats.. well, if you don't want/need any tolerance this would work as well, else it'd be more difficult. A <=> operator would be neat, though.
Re: CMake for D2 ready for testers
Russel Winder wrote: > On Mon, 2011-02-21 at 12:40 +0100, Jens Mueller wrote: > [ . . . ] > > I don't know about upstreaming it. Certainly it would be nice. But for > > doing so I need to polish it further. > > Can the code comprising the D support for CMake be "packaged" up so that > it can be offered to everyone direct from a DVCS repository? SCons and > Waf have the tool concept to allow for this. CMake must have something > analogous. People can then make use of the D support with their CMake > without the necessity of it heading upstream -- though it would be good > for that to happen eventually. Don't know how packaging is done in SCons/Waf. With CMakeD, you clone the repository, i.e. $ hg clone http://cmaked2.googlecode.com/hg/ cmaked2 and $ cd cmaked2/cmaked $ mkdir build $ cd build $ cmake .. $ make install to install it. That will copy the necessary files into your CMake installation. I guess SCons/Waf offer something more than that. > > It seems that not many people are using CMakeD and there seems to be > > less interest in it. But I used it for a word cloud I programmed for a > > course (see here http://gitorious.org/wordcloud). The nice thing is that > > you can rely on CMake's modules. E.g. in the above example it was > > straightforward to let CMake make sure that the GD library is installed. > > That makes it very useful for integrating a C/C++ library. > > If you do not know already, the getting started guide is here > > http://code.google.com/p/cmaked2/wiki/GettingStarted > > According to http://code.google.com/p/cmaked2/wiki/TestedPlatforms it > > was used on recent versions of Debian, ArchLinux, Gentoo, and Ubuntu > > with CMake at least 2.8.2 and dmd at least 2.049. It should work on > > Windows as well. Some people have used it. > > Recently I added gdc support which works for me. But so far I haven't > > got any feedback from other users. 
> > Personally I prefer SCons and Waf over CMake, but would be happy to > trial CMake and its D support. In fact the SCons tool I am trying to > force myself to work on and the CMake D support may help each other by > collaborating. Yeah. I try to help, if I can. Don't hesitate to ask. Though I have to admit I have almost no Python skills. I like Ruby more. It pleases my eyes, and there seems to be only enough space for one scripting language in my head. > > > What about 64bit support in dmd2? > > > > You mean support for building 64bit code with dmd2 using CMakeD? That > > should be fairly straightforward given that you just need to pass -m64 > > to dmd. I think by default it builds 32-bit even on a 64-bit machine. > > But I assume this is going to change once 64-bit is stable. > > If I find some time I will build the above word cloud example for > > 64-bit and report here, if that helps you. > > In one sense it is as easy as passing -m64 but there is also the issue > of the list of libraries needed at link time -- or does the CMake stuff > already pull in that information from dmd.conf? CMakeD just relies on dmd. But you're right, it's a bit more complicated. It seems that on Linux CMake has no proper way of cross building a 32 bit/64 bit version. That kind of cross compiling does not seem to work. I would need to investigate further to find out whether it's a dmd problem. Usually I think for building a 32 bit C binary you just pass -m32; then the linker should search in ...lib32/. If you build a 64 bit binary it should search in ...lib64/. If you don't specify anything it's up to the compiler. CMake's task is just to check whether the dependent library is installed. I think at the moment it does not look in lib32/lib64 separately. In that sense its support for cross compiling is weak. I may be wrong here. 
> > Recently I've been a bit distracted from CMakeD development since I > > stumbled over > > Gyp > > http://code.google.com/p/gyp/ > > and > > Premake > > http://industriousone.com/premake > > Both address similar needs like CMake but do not support D yet. > > Are these good enough to get traction compared to SCons, Waf, CMake, > Autotools, Make, Ant, Maven, Gradle, Gant? This is a serious question > not a troll. There is always space for a new, better build framework to > take the community by storm, but on the other hand if they are just side > shows then it dilutes effort and progress. I do not know yet. I think both of them are pretty weak regarding already available modules, i.e. files to find a specific dependency. Gyp is developed for building Chromium. They had a problem with SCons while migrating to it. They also wrote in what regard CMake didn't work out for them http://code.google.com/p/gyp/wiki/GypVsCMake I like premake for its readability (see http://industriousone.com/sample-script), and it's all Lua. Though I'm not sure whether I can keep two scripting languages in my head. But Lua seems to be very simple. Jens
toHash /opCmp for builtin-types
Following came to my mind while coding some generic collection classes: The toHash and opCmp operations are not supported for builtin-types though their implementation is trivial. * toHash The code is already there inside TypeInfo.getHash. But typeid(value).getHash(&value) is much uglier than value.toHash. Note that hashes make sense for integer (trivial implementation), not necessarily for floats. * opCmp Would be useful for delegating opCmp of a struct to one member. Alternative: Introduce new operator which returns 1/0/-1 (ruby does this with "<=>"). Currently I end up writing: int opCmp(...) { if(a>b) return +1; if(a==b) return 0; return -1; } which uses 2 comparisons where only 1 is needed (though the compiler might notice it if comparison is pure and so on). Furthermore it might be a nice idea to have toString (or the future "writeTo") for builtin-types. It would need some new code in the core-lib, but could simplify generic programming. any thoughts? - Krox
Re: float equality
so: > I still think "==" should mean the exact equality test and must be > consistent in language. Everyone in this thread has misunderstood what I have tried to say, so I will try to explain again, see the bottom of this post. My idea was to turn the "==" operator among FP values into a syntax error (plus other added ideas). > Making something like almostEqual default is far more catastrophic than > its current form. This problem doesn't exist in my idea. > It doesn't solve the existing problem and creates a basis for new forms of > problems. The problems it creates are smaller. > If one doesn't know what floating point is and insists on using it, it is > his own responsibility to face the consequences. I don't buy this argument. > If only interval arithmetic would solve all the problems, I wouldn't > hesitate dumping everything about FP. > But no, it comes with its shortcomings. I agree that interval arithmetic has its downsides. I have not proposed to replace normal floating point values with intervals. I have suggested to add a Phobos module for interval arithmetic because it's a useful thing to have. - The little proposal I was thinking about is: 1) Turn the "==" operator into a compile-time syntax error if one or both operands are floating point values. if (x == y) { ==> syntax error 2) The semantic of the "==" is now done by the "is" operator. So if you want exactly this C code: if (x == y) { you use: if (x is y) { 3) Then another built-in function or semantics is added, that is similar to: some_syntax(x, y, an_approximation_level) It takes three arguments: x, y and a number of bits. Turning the == into a syntax error is done to remind programmers that == among FP values is tricky, and the "is" operator is added because there are some situations where you want an exact comparison and you know what you are doing. This was my idea. I know it doesn't improve the current situation a lot... so I don't expect this to be accepted. 
But I like to be understood :-) Bye, bearophile
Re: CMake for D2 ready for testers
On Mon, 2011-02-21 at 12:40 +0100, Jens Mueller wrote: [ . . . ] > I don't know about upstreaming it. Certainly it would be nice. But for > doing so I need to polish it further. Can the code comprising the D support for CMake be "packaged" up so that it can be offered to everyone direct from a DVCS repository? SCons and Waf have the tool concept to allow for this. CMake must have something analogous. People can then make use of the D support with their CMake without the necessity of it heading upstream -- though it would be good for that to happen eventually. > It seems that not many people are using CMakeD and there seems to be > less interest in it. But I used it for a word cloud I programmed for a > course (see here http://gitorious.org/wordcloud). The nice thing is that > you can rely on CMake's modules. E.g. in the above example it was > straightforward to let CMake make sure that the GD library is installed. > That makes it very useful for integrating a C/C++ library. > If you do not know already, the getting started guide is here > http://code.google.com/p/cmaked2/wiki/GettingStarted > According to http://code.google.com/p/cmaked2/wiki/TestedPlatforms it > was used on recent versions of Debian, ArchLinux, Gentoo, and Ubuntu > with CMake at least 2.8.2 and dmd at least 2.049. It should work on > Windows as well. Some people have used it. > Recently I added gdc support which works for me. But so far I haven't > got any feedback from other users. Personally I prefer SCons and Waf over CMake, but would be happy to trial CMake and its D support. In fact the SCons tool I am trying to force myself to work on and the CMake D support may help each other by collaborating. > > What about 64bit support in dmd2? > > You mean support for building 64bit code with dmd2 using CMakeD? That > should be fairly straightforward given that you just need to pass -m64 > to dmd. I think by default it builds 32-bit even on a 64-bit machine. 
> But I assume this is going to change once 64-bit is stable. > If I find some time I will build the above word cloud example for > 64-bit and report here, if that helps you. In one sense it is as easy as passing -m64 but there is also the issue of the list of libraries needed at link time -- or does the CMake stuff already pull in that information from dmd.conf? > Recently I've been a bit distracted from CMakeD development since I > stumbled over > Gyp > http://code.google.com/p/gyp/ > and > Premake > http://industriousone.com/premake > Both address similar needs like CMake but do not support D yet. Are these good enough to get traction compared to SCons, Waf, CMake, Autotools, Make, Ant, Maven, Gradle, Gant? This is a serious question not a troll. There is always space for a new, better build framework to take the community by storm, but on the other hand if they are just side shows then it dilutes effort and progress. -- Russel. = Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.win...@ekiga.net 41 Buckmaster Road m: +44 7770 465 077 xmpp: rus...@russel.org.uk London SW11 1EN, UK w: www.russel.org.uk skype: russel_winder
Re: float equality
On Sun, 20 Feb 2011 14:53:03 +0200, bearophile wrote: Walter: There's a total lack of evidence for that. MISRA-C standard disallows the equality among FP values. I presume they have some serious evidence for their choices, but I don't know such evidence. Today MISRA-C is one of the most serious attempts at fixing the holes in C language to write more reliable code for the industry. MISRA-C standard is sometimes extreme and I don't suggest following everything they say, but I suggest to not totally ignore what it says about many C features. 1. Roundoff error is not of a fixed magnitude. I meant to replace the bare FP equality with a function that has the shared number of mantissa bits as third required argument. And then perform the normal FP equality using the "is" operator. I still think "==" should mean the exact equality test and must be consistent in language. Making something like almostEqual default is far more catastrophic than its current form. It doesn't solve the existing problem and creates a basis for new forms of problems. If one doesn't know what floating point is and insists on using it, it is his own responsibility to face the consequences. Regarding FP rounding errors, eventually it will be good to add to Phobos2 a library for Interval FP arithmetic, with trigonometric functions too, etc: http://en.wikipedia.org/wiki/Interval_arithmetic Bye, bearophile If only interval arithmetic would solve all the problems, I wouldn't hesitate dumping everything about FP. But no, it comes with its shortcomings.
Re: float equality
On Sat, 19 Feb 2011 14:06:38 +0200, spir wrote: Hello, What do you think of this? unittest { assert(-1.1 + 2.2 == 1.1); // pass assert(-1.1 + 2.2 + 3.3 == 4.4);// pass assert(-1.1 + 3.3 + 2.2 == 4.4);// fail assert(-1.1 + 3.3 == 2.2); // fail } There is approxEquals in stdlib, right; but shouldn't builtin "==" be consistent anyway? Denis Strange no one mentioned this. The problem is not the floating point format in your example. I can do the same with integral numbers, how? int(5) / int(2) => int(2) or int(3)? And why? The answer depends on the rounding mode; if you don't know the given rounding mode for a (machine interpreted) number system you can't say anything. You know the answer because you know the rule. I can say the same for floating point, since I know how it works.
Re: float equality
On Mon, 21 Feb 2011 10:15:28 -0500, Kagamin wrote: bearophile Wrote: Jonathan M Davis: > The thing is, of course, that actual equality sucks for floating point values. So much that some people have proposed to deprecate the normal FP equality. MISRA-C disallows them. When you see a == among FP values, it is often a code smell. Is it safe to assert that no one needs exact FP comparison? Nope. if(var == 0) is a common, important and valid test. To say nothing of testing for sentinel values.
Re: float equality
bearophile Wrote: > Jonathan M Davis: > > > The thing is, of course, that actual equality sucks for floating point > > values. > > So much that some people have proposed to deprecate the normal FP equality. > MISRA-C disallows them. When you see a == among FP values, it is often a code > smell. > Is it safe to assert that no one needs exact FP comparison?
Re: Linking COFF and OMF
Don Wrote: > That's pretty good. Almost all of those things are standard C. > LDIV and UDIV could easily be eliminated. > __except_list is a null asm label (it is FS:[0]). > > So the main problematic ones are: > _xi_a , __acrtused_con, the __fp functions, and _Ccmp Yep, they need to be taken care of anyway to support 64Bit on Windoze, some of them are even written in asm. Member ..\core32\cinit.asm Offset 0x700 __argc _errno __argv __xi_a Member llmath.asm Offset 0x7E540 __LCMP@ __LDIV@ __LMUL@ __ULDIV@ Member ..\win32\constart.c Offset 0x5B280 BSS __acrtused_con
Re: float equality
spir wrote: On 02/21/2011 05:32 AM, Walter Bright wrote: Kevin Bealer wrote: == Quote from Walter Bright (newshou...@digitalmars.com)'s article Kevin Bealer wrote: You could switch to this: struct { BigInt numerator; BigInt denominator; }; Bingo -- no compromise. It cannot represent irrational numbers accurately. True but I did mention this a few lines later. I guess I'm not seeing the point of representing numbers as ratios. That works only if you stick to arithmetic. As soon as you do logs, trig functions, roots, pi, etc., you're back to square one. "Naturally" non-representable numbers (irrationals), or results of "naturally" approximate operations (like trig), are not an issue because they are expected to yield inaccuracy. You keep talking about "inaccuracy" and "approximation", when I think you really mean "rounding". Actually irrationals are a *big* issue. You might expect sin(PI) == 0, but it isn't. This isn't a problem with sin(), it's a problem with the limited resolution of PI. sin(3.141592654) != 0. This is different from numbers which are well definite in base ten, as well as results of operations which should yield well definite results. We think in base ten, thus for us 1.1 is exact. Two operations yielding 1.1 (including plain conversion to binary of the literal '1.1') may in fact yield unequal numbers at the binary level; this is a trap, and just wrong: assert (1.1 != 3.3 - 2.2);// pass Subsequent issue is (as I just discovered) that there is no remedy for that, since there is no way to know, in the general case, how many common magnitude bits are supposed to be shared by the numbers to be compared, when the results are supposed to be correct. (Is this worse in floating point format, since the scale factor is variable?) Decimal floating point formats exist, and have been implemented in hardware in some cases. They don't suffer from the rounding-during-I/O issue that you mention. 
Fortunately D supports the %a binary floating point format, eg, 0x1.34p+46, which gives you what-you-see-is-what-you-get. This lets me think that, for common use, fixed point with a single /decimal/ scale factor (for instance 10^-6) may be a better solution. While the underlying integer type may well be binary --this is orthogonal-- we don't need to use eg BCD. Using longs, this example would allow representing numbers up to +/- (2^63)/(10^6), equivalent to having about 43 bits for the integral part (*), together with a precision of 6 decimal digits for the fractional part. Seems correct for common use cases, I guess. Denis (*) 10^6 ~ 2^20; (2^63)/(10^6) > 9_000_000_000_000
Re: Linking COFF and OMF
Trass3r wrote: In 2.052 several of the most complicated dependencies on snn.lib (those relating to exception handling) were removed. I don't know how many more DMC-specific ones there are, but using another snn.lib might be possible now. Compiled a hello world with empty snn.lib: That's pretty good. Almost all of those things are standard C. LDIV and UDIV could easily be eliminated. __except_list is a null asm label (it is FS:[0]). So the main problematic ones are: _xi_a , __acrtused_con, the __fp functions, and _Ccmp OPTLINK (R) for Win32 Release 8.00.8 Copyright (C) Digital Mars 1989-2010 All rights reserved. http://www.digitalmars.com/ctg/optlink.html helloworld.obj(helloworld) Error 42: Symbol Undefined __acrtused_con C:\dmd\windows\bin\..\lib\phobos.lib(dmain2) Error 42: Symbol Undefined ___alloca C:\dmd\windows\bin\..\lib\phobos.lib(dmain2) Error 42: Symbol Undefined __except_list helloworld.obj(helloworld) Error 42: Symbol Undefined _fprintf C:\dmd\windows\bin\..\lib\phobos.lib(dmain2) Error 42: Symbol Undefined _wcslen C:\dmd\windows\bin\..\lib\phobos.lib(deh) Error 42: Symbol Undefined __tls_array C:\dmd\windows\bin\..\lib\phobos.lib(deh) Error 42: Symbol Undefined __tls_index C:\dmd\windows\bin\..\lib\phobos.lib(gc) Error 42: Symbol Undefined _memcpy C:\dmd\windows\bin\..\lib\phobos.lib(gc) Error 42: Symbol Undefined _malloc C:\dmd\windows\bin\..\lib\phobos.lib(memory) Error 42: Symbol Undefined __xi_a C:\dmd\windows\bin\..\lib\phobos.lib(memory) Error 42: Symbol Undefined __end C:\dmd\windows\bin\..\lib\phobos.lib(gcx) Error 42: Symbol Undefined _calloc C:\dmd\windows\bin\..\lib\phobos.lib(gcx) Error 42: Symbol Undefined _free C:\dmd\windows\bin\..\lib\phobos.lib(gcx) Error 42: Symbol Undefined _memset C:\dmd\windows\bin\..\lib\phobos.lib(object_) Error 42: Symbol Undefined _strlen C:\dmd\windows\bin\..\lib\phobos.lib(thread) Error 42: Symbol Undefined __beginthreadex C:\dmd\windows\bin\..\lib\phobos.lib(thread) Error 42: Symbol Undefined __tlsend 
C:\dmd\windows\bin\..\lib\phobos.lib(thread) Error 42: Symbol Undefined __tlsstart C:\dmd\windows\bin\..\lib\phobos.lib(object_) Error 42: Symbol Undefined _memcmp C:\dmd\windows\bin\..\lib\phobos.lib(gcx) Error 42: Symbol Undefined _memmove C:\dmd\windows\bin\..\lib\phobos.lib(gcx) Error 42: Symbol Undefined _realloc C:\dmd\windows\bin\..\lib\phobos.lib(regexp) Error 42: Symbol Undefined _printf C:\dmd\windows\bin\..\lib\phobos.lib(regexp) Error 42: Symbol Undefined _memchr C:\dmd\windows\bin\..\lib\phobos.lib(datetime) Error 42: Symbol Undefined _localtime C:\dmd\windows\bin\..\lib\phobos.lib(datetime) Error 42: Symbol Undefined _tzset C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined __iob C:\dmd\windows\bin\..\lib\phobos.lib(monitor) Error 42: Symbol Undefined __assert C:\dmd\windows\bin\..\lib\phobos.lib(lifetime) Error 42: Symbol Undefined __LDIV@ C:\dmd\windows\bin\..\lib\phobos.lib(outbuffer) Error 42: Symbol Undefined __vsnprintf C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined _fclose C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined _tmpfile C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined _ftell C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined _fseek C:\dmd\windows\bin\..\lib\phobos.lib(exception) Error 42: Symbol Undefined _strerror C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined _setmode C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined ___fhnd_info C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined _fread C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined _fopen C:\dmd\windows\bin\..\lib\phobos.lib(conv) Error 42: Symbol Undefined __ULDIV@ C:\dmd\windows\bin\..\lib\phobos.lib(format) Error 42: Symbol Undefined ___pfloatfmt C:\dmd\windows\bin\..\lib\phobos.lib(errno) Error 42: Symbol Undefined _errno C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol 
Undefined ___fp_unlock C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined __fgetc_nlock C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined __fgetwc_nlock C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined ___fp_lock C:\dmd\windows\bin\..\lib\phobos.lib(ti_cfloat) Error 42: Symbol Undefined __Ccmp
Re: Linking COFF and OMF
> In 2.052 several of the most complicated dependencies on snn.lib (those > relating to exception handling) were removed. I don't know how many more > DMC-specific ones there are, but using another snn.lib might be possible > now. Compiled a hello world with empty snn.lib: OPTLINK (R) for Win32 Release 8.00.8 Copyright (C) Digital Mars 1989-2010 All rights reserved. http://www.digitalmars.com/ctg/optlink.html helloworld.obj(helloworld) Error 42: Symbol Undefined __acrtused_con C:\dmd\windows\bin\..\lib\phobos.lib(dmain2) Error 42: Symbol Undefined ___alloca C:\dmd\windows\bin\..\lib\phobos.lib(dmain2) Error 42: Symbol Undefined __except_list helloworld.obj(helloworld) Error 42: Symbol Undefined _fprintf C:\dmd\windows\bin\..\lib\phobos.lib(dmain2) Error 42: Symbol Undefined _wcslen C:\dmd\windows\bin\..\lib\phobos.lib(deh) Error 42: Symbol Undefined __tls_array C:\dmd\windows\bin\..\lib\phobos.lib(deh) Error 42: Symbol Undefined __tls_index C:\dmd\windows\bin\..\lib\phobos.lib(gc) Error 42: Symbol Undefined _memcpy C:\dmd\windows\bin\..\lib\phobos.lib(gc) Error 42: Symbol Undefined _malloc C:\dmd\windows\bin\..\lib\phobos.lib(memory) Error 42: Symbol Undefined __xi_a C:\dmd\windows\bin\..\lib\phobos.lib(memory) Error 42: Symbol Undefined __end C:\dmd\windows\bin\..\lib\phobos.lib(gcx) Error 42: Symbol Undefined _calloc C:\dmd\windows\bin\..\lib\phobos.lib(gcx) Error 42: Symbol Undefined _free C:\dmd\windows\bin\..\lib\phobos.lib(gcx) Error 42: Symbol Undefined _memset C:\dmd\windows\bin\..\lib\phobos.lib(object_) Error 42: Symbol Undefined _strlen C:\dmd\windows\bin\..\lib\phobos.lib(thread) Error 42: Symbol Undefined __beginthreadex C:\dmd\windows\bin\..\lib\phobos.lib(thread) Error 42: Symbol Undefined __tlsend C:\dmd\windows\bin\..\lib\phobos.lib(thread) Error 42: Symbol Undefined __tlsstart C:\dmd\windows\bin\..\lib\phobos.lib(object_) Error 42: Symbol Undefined _memcmp C:\dmd\windows\bin\..\lib\phobos.lib(gcx) Error 42: Symbol Undefined _memmove 
C:\dmd\windows\bin\..\lib\phobos.lib(gcx) Error 42: Symbol Undefined _realloc C:\dmd\windows\bin\..\lib\phobos.lib(regexp) Error 42: Symbol Undefined _printf C:\dmd\windows\bin\..\lib\phobos.lib(regexp) Error 42: Symbol Undefined _memchr C:\dmd\windows\bin\..\lib\phobos.lib(datetime) Error 42: Symbol Undefined _localtime C:\dmd\windows\bin\..\lib\phobos.lib(datetime) Error 42: Symbol Undefined _tzset C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined __iob C:\dmd\windows\bin\..\lib\phobos.lib(monitor) Error 42: Symbol Undefined __assert C:\dmd\windows\bin\..\lib\phobos.lib(lifetime) Error 42: Symbol Undefined __LDIV@ C:\dmd\windows\bin\..\lib\phobos.lib(outbuffer) Error 42: Symbol Undefined __vsnprintf C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined _fclose C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined _tmpfile C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined _ftell C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined _fseek C:\dmd\windows\bin\..\lib\phobos.lib(exception) Error 42: Symbol Undefined _strerror C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined _setmode C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined ___fhnd_info C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined _fread C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined _fopen C:\dmd\windows\bin\..\lib\phobos.lib(conv) Error 42: Symbol Undefined __ULDIV@ C:\dmd\windows\bin\..\lib\phobos.lib(format) Error 42: Symbol Undefined ___pfloatfmt C:\dmd\windows\bin\..\lib\phobos.lib(errno) Error 42: Symbol Undefined _errno C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined ___fp_unlock C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined __fgetc_nlock C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 42: Symbol Undefined __fgetwc_nlock C:\dmd\windows\bin\..\lib\phobos.lib(stdio) Error 
42: Symbol Undefined ___fp_lock C:\dmd\windows\bin\..\lib\phobos.lib(ti_cfloat) Error 42: Symbol Undefined __Ccmp
Re: float equality
"Walter Bright" wrote: A reasonable way to do financial work is to use longs to represent pennies. After all, you don't have fractional cents in your accounts. I tend to favor using floating point, doing calculations in pennies, and explicitly rounding result to nearest full penny. This is almost the same as using longs except that division does not always truncate numbers. Fractional pennies are often useful in intermediate results. If there is some more appropriate type to use, I might be easily convinced otherwise. However, I don't think using some indefinite precision number type is a fits-for-all solution either, since... I.e. you have to know what you're doing :-) Touche! -- Jouko
Re: CMake for D2 ready for testers
Gour wrote: > On Sun, 5 Sep 2010 22:28:41 -0700 > SK wrote: > > > Why labor over buggy Makefiles when you could be laboring over buggy > > CMake files at a much more productive level of abstraction? :o) > > I played with Waf a bit and it has nice support for D. > > However, despite many contributors listed, it still seems to be mostly > one-man-show which makes me a bit reluctant to use it over Cmake > (SCons seems to be very slow without much progress), so I wonder what > is the current status of the project? > > Any hope that D support will be applied in upstream? I don't know about upstreaming it. Certainly it would be nice. But for doing so I need to polish it further. It seems that not many people are using CMakeD and there seems to be less interest in it. But I used it for a word cloud I programmed for a course (see here http://gitorious.org/wordcloud). The nice thing is that you can rely on CMake's modules. E.g. in the above example it was straightforward to let CMake make sure that the GD library is installed. That makes it very useful for integrating a C/C++ library. If you do not know already, the getting started guide is here http://code.google.com/p/cmaked2/wiki/GettingStarted According to http://code.google.com/p/cmaked2/wiki/TestedPlatforms it was used on recent versions of Debian, ArchLinux, Gentoo, and Ubuntu with CMake at least 2.8.2 and dmd at least 2.049. It should work on Windows as well. Some people have used it. Recently I added gdc support which works for me. But so far I haven't got any feedback from other users. > What about 64bit support in dmd2? You mean support for building 64bit code with dmd2 using CMakeD? That should be fairly straightforward given that you just need to pass -m64 to dmd. I think by default it builds 32-bit even on a 64-bit machine. But I assume this is going to change once 64-bit is stable. If I find some time I will build the above word cloud example for 64-bit and report here, if that helps you. 
Recently I've been a bit distracted from CMakeD development since I stumbled over Gyp http://code.google.com/p/gyp/ and Premake http://industriousone.com/premake Both address similar needs like CMake but do not support D yet. Jens
Re: float equality
== Quote from Walter Bright (newshou...@digitalmars.com)'s article > Kevin Bealer wrote: > A reasonable way to do financial work is to use longs to represent pennies. > After all, you don't have fractional cents in your accounts. > Using floating point to represent money is a disaster in the making. ... > There's just no getting around needing to understand how computer arithmetic > works. ... > I.e. you have to know what you're doing :-) I'm sure that banks do what you are suggesting and have a policy (maybe a regulatorily required one) on who gets the fractional cents. This is easy because cents are a standard that everyone agrees is the 'bottom'. But there are lots of other cases that aren't standardized... i.e. is time represented in seconds, milliseconds, nanoseconds? Java, Windows and Linux use different versions. Some IBM computers measure time in 1/2^Nths of a second, where N is around 20 or 30. Is land measured in square feet or acres? Whatever you pick as the bottom may not be good enough in the future. If you introduce more exact measurements, now you have mixed representations. If you had a rational number of acres, on the other hand, you could start doing measurements in square feet and representing them as x/(square feet in an acre). Mixed representations are trouble because they invite usage errors. Rational doesn't change the facts of life but it simplifies some things because it makes the choice of base an arbitrary problem-domain kind of choice whereas in fixed point it is very difficult to revisit this decision. Every time I think about this, though, I think of "classroom" examples, so it's probably the case that my arguments are academic (in the bad sense)... and I should give it up. If this was more useful it would probably be used more. Kevin
Re: CMake for D2 ready for testers
On Mon, 21 Feb 2011 10:56:50 +, Russel Winder wrote:
> > > Any hope that D support will be applied in upstream?
> > >
> > > What about 64bit support in dmd2?
>
> Are you talking about CMake or SCons here?

CMake.

> For SCons, I have "forked" the D tool in the SCons core to be a
> separate project, cf. https://bitbucket.org/russel/scons_dmd_new

I may take a look, although CMake seems to be the more secure project.

Sincerely,
Gour

--
“In the material world, conceptions of good and bad are all mental speculations…” (Sri Caitanya Mahaprabhu)
http://atmarama.net | Hlapicina (Croatia) | GPG: CDBF17CA
Re: CMake for D2 ready for testers
On Mon, 2011-02-21 at 11:31 +0100, Gour wrote:
[ . . . ]
> I played with Waf a bit and it has nice support for D.
>
> However, despite many contributors listed, it still seems to be mostly a
> one-man show, which makes me a bit reluctant to use it over CMake
> (SCons seems to be very slow without much progress), so I wonder what
> the current status of the project is?

SCons core development does appear to be in a bit of a hiatus just now, but this happens from time to time. The project has always picked up again though.

> Any hope that D support will be applied in upstream?
>
> What about 64bit support in dmd2?

Are you talking about CMake or SCons here? For SCons, I have "forked" the D tool in the SCons core to be a separate project, cf. https://bitbucket.org/russel/scons_dmd_new

--
Russel.
=
Dr Russel Winder      t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Road    m: +44 7770 465 077   xmpp: rus...@russel.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder
Re: float equality
On Monday 21 February 2011 01:55:28 Walter Bright wrote:
> Kevin Bealer wrote:
> > 1. To solve the basic problem the original poster was asking -- if you
> > are working with simple decimals and arithmetic you can get completely
> > accurate representations this way. For some cases like simple financial
> > work this might work really well, e.g. where float would not because of
> > the slow leak of information with each operation. (I assume real
> > professional financial work is already done using a (better)
> > representation.)
>
> A reasonable way to do financial work is to use longs to represent pennies.
> After all, you don't have fractional cents in your accounts.
>
> Using floating point to represent money is a disaster in the making.

Actually, depending on what you're doing, I'm not sure that you can legally represent money with floating point values. As I understand it, there are definite restrictions on banking software and the like with regard to that sort of thing (though I don't know exactly what they are). Regardless, if you want to deal with money correctly, you pretty much have to treat it as cents and use integer values. Anything else will cause too many errors. That's definitely the way I've seen it done at every company I've worked at that has had to deal with money in its software.

- Jonathan M Davis
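A quick D sketch of why integer cents behave where doubles do not. Nothing here is D-specific; it's the classic repeated-0.10 accumulation, shown side by side with the integer version.

```d
import std.stdio;

void main()
{
    // Accumulating ten dimes in a double: the representation error
    // of 0.10 leaks into the total, one addition at a time.
    double balance = 0.0;
    foreach (i; 0 .. 10)
        balance += 0.10;
    writeln(balance == 1.0);   // false on IEEE 754 doubles

    // The same bookkeeping in integer cents is exact.
    long cents = 0;
    foreach (i; 0 .. 10)
        cents += 10;           // ten cents per deposit
    assert(cents == 100);      // exactly one dollar
}
```

The double total comes out as 0.9999999999999999 rather than 1.0, which is precisely the "slow leak of information with each operation" Kevin describes.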
Re: CMake for D2 ready for testers
On Sun, 5 Sep 2010 22:28:41 -0700, SK wrote:
> Why labor over buggy Makefiles when you could be laboring over buggy
> CMake files at a much more productive level of abstraction? :o)

I played with Waf a bit and it has nice support for D.

However, despite many contributors listed, it still seems to be mostly a one-man show, which makes me a bit reluctant to use it over CMake (SCons seems to be very slow without much progress), so I wonder what the current status of the project is?

Any hope that D support will be applied in upstream?

What about 64bit support in dmd2?

Sincerely,
Gour

--
“In the material world, conceptions of good and bad are all mental speculations…” (Sri Caitanya Mahaprabhu)
http://atmarama.net | Hlapicina (Croatia) | GPG: CDBF17CA
Re: the naked keyword is an attribute - but it looks like a function when used?
On Mon, 21 Feb 2011 12:45:21 +0300, dennis luehring wrote:

> naked INSIDE of the context which is addressed with the attribute looks
> very strange to me, because it changes the pro- AND epilog of a
> function/code block:
>
>     real blabla(real x)
>     {
>         asm
>         {
>             naked;
>             mov EAX, [RSP];
>             naked;
>             add EAX, 0x3fff;
>             naked;
>         }
>     }
>
> wouldn't it be better to have something like
>
>     naked asm { ... }
>
> or
>
>     real blabla(real x) naked
>     {
>         naked asm { }
>     }
>
> or, like Delphi does,
>
>     real blabla(x: real) assembler
>     {
>         mov EAX, [RSP];
>         add EAX, 0x3fff;
>     }

Yes, naked applies to the whole function body, and I proposed to make it an external attribute because of that: http://www.mail-archive.com/digitalmars-d@puremagic.com/msg22593.html
Re: float equality
spir wrote:
> "Naturally" non-representable numbers (irrationals), or results of
> "naturally" approximate operations (like trig), are not an issue because
> they are expected to yield inaccuracy.

Huh, I regularly run into people who are befuddled by the trig functions not conforming to the usual trig identity equations. Worse, I also regularly run into crappy, sloppy implementations of the transcendental functions that lose excessive bits of accuracy (I'm looking at you, FreeBSD and OSX). Don has been moving std.math to using our own implementations because of this, avoiding dependency on crappy C libraries written by people who do not understand or care about FP math.
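As a small D illustration of the trig-identity point: even a perfectly accurate sin() cannot return 0 for sin(PI), because PI itself is only the closest representable value to pi. This is a sketch; the exact digits printed vary by platform and math library.

```d
import std.math : sin, cos, PI;
import std.stdio : writefln;

void main()
{
    // PI is a rounded approximation of pi, so sin(PI) is the
    // (correctly computed!) sine of a slightly different number:
    // tiny, but not zero.
    writefln("sin(PI) = %s", sin(PI));

    // Likewise, sin^2 + cos^2 == 1 holds only to within rounding error.
    real x = 0.1;
    writefln("sin^2 + cos^2 - 1 = %s", sin(x) * sin(x) + cos(x) * cos(x) - 1.0);
}
```

The difference between a good and a sloppy libm is how many extra bits beyond this unavoidable representation error get thrown away.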
Re: float equality
Kevin Bealer wrote:
> 1. To solve the basic problem the original poster was asking -- if you
> are working with simple decimals and arithmetic you can get completely
> accurate representations this way. For some cases like simple financial
> work this might work really well, e.g. where float would not because of
> the slow leak of information with each operation. (I assume real
> professional financial work is already done using a (better)
> representation.)

A reasonable way to do financial work is to use longs to represent pennies. After all, you don't have fractional cents in your accounts.

Using floating point to represent money is a disaster in the making.

> 2. To explain why the 'simple' task of representing something like .1
> wasn't as easy as it looks. In other words, why the people who designed
> float weren't just brain dead. I think they really knew what they were
> doing but it shocks most people at first that a modern computer can't do
> what they see as grade school arithmetic.

There's just no getting around needing to understand how computer arithmetic works.

> I think for some purposes though, lossless domain specific
> representations can be a good tool -- if you can represent a problem in a
> way that is lossless you can maybe do better calculations over long
> series than working with 'double' and taking the accuracy hit. This is
> necessarily an application specific technique though.

I.e. you have to know what you're doing :-)
the naked keyword is an attribute - but it looks like a function when used?
naked INSIDE of the context which is addressed with the attribute looks very strange to me, because it changes the pro- AND epilog of a function/code block:

real blabla(real x)
{
    asm
    {
        naked;
        mov EAX, [RSP];
        naked;
        add EAX, 0x3fff;
        naked;
    }
}

wouldn't it be better to have something like

naked asm { ... }

or

real blabla(real x) naked
{
    naked asm { }
}

or, like Delphi does,

real blabla(x: real) assembler
{
    mov EAX, [RSP];
    add EAX, 0x3fff;
}
Re: float equality
On 02/21/2011 08:52 AM, Kevin Bealer wrote:
> == Quote from Walter Bright (newshou...@digitalmars.com)'s article
> ...
> > I do understand that if you have a full symbolic representation, you
> > can do so with zero losses. But Kevin's proposal was not that, it was
> > for a ratio representation. All it represents symbolically is
> > division. There are plenty of other operations.
>
> I'm just answering the original poster's question. You're right though --
> it's not a complete numerical system (and I don't propose it for
> inclusion in the language or even necessarily the library). I had two
> goals:
>
> 1. To solve the basic problem the original poster was asking -- if you
> are working with simple decimals and arithmetic you can get completely
> accurate representations this way. For some cases like simple financial
> work this might work really well, e.g. where float would not because of
> the slow leak of information with each operation. (I assume real
> professional financial work is already done using a (better)
> representation.)

The few financial libs/types I have met (it's not my glass of beer) all used plain integers ;-) (after all, 1.23 € or $ is a plain integer when counted in cents...), with a fixed or variable decimal scale factor (which is what I talk about in another post, I just realise).

> 2. To explain why the 'simple' task of representing something like .1
> wasn't as easy as it looks. In other words, why the people who designed
> float weren't just brain dead. I think they really knew what they were
> doing but it shocks most people at first that a modern computer can't do
> what they see as grade school arithmetic.
>
> I think for some purposes though, lossless domain specific
> representations can be a good tool -- if you can represent a problem in a
> way that is lossless you can maybe do better calculations over long
> series than working with 'double' and taking the accuracy hit. This is
> necessarily an application specific technique though.

I think some typical features, or solutions used to implement given features, in present programming languages are historic traces from times when scientific computations were the only, then the typical, use of computers. I'm not far from thinking that fractional numbers (id est, numbers able to represent /measures/) form a domain-specific feature -- thus belong in a library (lol).

Denis
--
_ vita es estrany
spir.wikidot.com
Re: float equality
On 02/21/2011 05:32 AM, Walter Bright wrote:
> Kevin Bealer wrote:
> > == Quote from Walter Bright (newshou...@digitalmars.com)'s article
> > > Kevin Bealer wrote:
> > > > You could switch to this:
> > > >     struct { BigInt numerator; BigInt denominator; };
> > > > Bingo -- no compromise.
> > > It cannot represent irrational numbers accurately.
> > True, but I did mention this a few lines later.
>
> I guess I'm not seeing the point of representing numbers as ratios. That
> works only if you stick to arithmetic. As soon as you do logs, trig
> functions, roots, pi, etc., you're back to square one.

"Naturally" non-representable numbers (irrationals), or results of "naturally" approximate operations (like trig), are not an issue because they are expected to yield inaccuracy. This is different from numbers which are well defined in base ten, as well as results of operations which should yield well defined results. We think in base ten, thus for us 1.1 is exact. Two operations yielding 1.1 (including plain conversion to binary of the literal '1.1') may in fact yield unequal numbers at the binary level; this is a trap, and just wrong:

    assert (1.1 != 3.3 - 2.2);  // passes

A subsequent issue is (as I just discovered) that there is no remedy for that, since there is no way to know, in the general case, how many common magnitude bits are supposed to be shared by the numbers to be compared when the results are supposed to be correct. (Is this worse in floating-point format, since the scale factor is variable?)

This lets me think that, for common use, fixed point with a single /decimal/ scale factor (for instance 10^-6) may be a better solution. While the underlying integer type may well be binary --this is orthogonal-- we don't need to use e.g. BCD. Using longs, this would allow representing numbers up to +/- (2^63)/(10^6), equivalent to having about 43 bits for the integral part (*), together with a precision of 6 decimal digits for the fractional part. Seems correct for common use cases, I guess.

Denis

(*) 10^6 ~ 2^20, and (2^63)/(10^6) > 9_000_000_000_000

--
_ vita es estrany
spir.wikidot.com
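spir's decimal-scale-factor idea can be sketched in a few lines of D. The `Fixed6` type and `literal` helper below are invented names for illustration only; a real library would also need multiplication, division, and overflow handling, which are the actually hard parts.

```d
// Fixed point with a decimal scale factor of 10^6: the stored long
// counts millionths, so every 6-decimal-digit literal is exact.
struct Fixed6
{
    enum long scale = 1_000_000;   // six decimal digits of fraction
    long raw;                      // the represented value is raw / scale

    // + and - are plain integer operations: exact, no rounding
    Fixed6 opBinary(string op)(Fixed6 rhs)
        if (op == "+" || op == "-")
    {
        return Fixed6(mixin("raw " ~ op ~ " rhs.raw"));
    }
}

// build a value from whole units and millionths
Fixed6 literal(long units, long micros = 0)
{
    return Fixed6(units * Fixed6.scale + micros);
}

void main()
{
    auto a = literal(3, 300_000);  // 3.3
    auto b = literal(2, 200_000);  // 2.2
    auto c = literal(1, 100_000);  // 1.1

    // the comparison that fails in binary floating point
    // (1.1 != 3.3 - 2.2) holds exactly here
    assert((a - b).raw == c.raw);
}
```

Because the scale factor is decimal rather than binary, the trap in the assert above disappears for any values within the 6-digit precision, at the cost of the fixed range spir computes: about 43 bits (roughly +/- 9.2 * 10^12) for the integral part.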