Christopher Smith wrote:
Destructors can, of course, throw exceptions. You just need to understand the implications of doing so. In particular, if a destructor throws while std::uncaught_exception() returns true (or would return true if you invoked it) -- that is, during stack unwinding -- the runtime's terminate handler gets called. This, or the other obvious alternative of ignoring errors when freeing resources (many resources simply can't fail to be freed anyway), is actually the desired outcome in most cases. So either plowing ahead or not throwing an exception works out pretty well most of the time. The most common bug vis-a-vis freeing resources is failing to try to free them in the first place, not failing to handle the case where you couldn't free them.
Until you stuff a Socket into a container and the program goes *boom* because it threw an Exception in the destructor. Oops. What about all those other nice Sockets that were just fine?
The worst part is that this was a normal C++ bug: things *almost* worked, but would occasionally fall into a big smoking heap when the network had a hiccup.
Been there. Done that. Rewrote 10,000 lines of somebody else's crap code to fix it by moving the Socket closing code out of the destructor and into the inline code.
Exceptions in destructors are evil.
Anyway, the notion that it's no better than Java is just silly. Sure, trying to recover from a failure to free a resource with anything much more sophisticated than abort() is tricky work regardless of your programming language, but Java doesn't even provide a mechanism for encapsulating the logic of when to free up a resource, and even the simple case of "just try to free up resources as soon as I no longer need them" is painful to code up in Java.
I maintain that it is the same. However, this is just devolving into: "Is not! Is too!" I'm not going to convince you; I'll have to let painful experience do that for me. ;)
If you'd read the books you'd know that a) they are short for technical books and b) the title is amusing but misleading. In fact the books are collections of the Guru of the Week series. They typically have over a half dozen major topics, only one of which might be related to exceptions.
And what they lack in pages they make up for in density. Even the C++ committee had trouble with those.
Yes, and that's all it is: a safety net, and a poor one at that. Unlike with memory management, when you run out of sockets, there is no mechanism for implementing a policy to free up unused ones. You are basically left with having to build a framework which wraps sockets, tracks open ones with weak references, and then tries to figure out which ones it can/should close down in the event that you can't instantiate any more. I've been burned by this one, with specific reference to sockets, more often than I care to think about.
Given that you have to manually delete these sockets in C++, what's different about having to manually close these sockets in Java?
So, what you want is the ability to close() a socket when it immediately falls out of scope. And you are willing to give up putting sockets into containers for that capability.
I think you are overstating Meyers's position.
I don't think so. But read for yourself: http://www.awprofessional.com/content/images/0201749629/items/item2-2.pdf Meyers takes a very harsh stance about this.
Even "huge" languages like Common Lisp, Ada and C++ can't come close to addressing all of them. That being said, the more common the usage of a design pattern, perhaps the more important it is that a language designer provide a trivial way to implement said pattern.
Huh? C++ is a *HUGE* pattern sink. Practically every Design Pattern gets used in C++ because everything is static.
I find that Common Lisp is generally the least pattern-sensitive. I think that's because you are not bound to the idea of static typing, classes and single dispatch. Structural patterns kind of go away in that environment, leaving only the architectural patterns. Python works that way for me, as well.
> Secondly, they were working with hardware much older than today's stuff, which tends to be far more sensitive to memory issues.
Disagree. Modern processors are *vastly* more sensitive to memory issues. Time to initial page access from a 333 MHz machine (what they said they were using) to a 2 GHz+ machine has improved *maybe* 30% (70 ns to 50 ns)--let's call it 50% (1/2) just to be aggressive (in fact, I think it's really closer to 15%). Memory *bandwidth*, on the other hand, has increased from 133 MHz SDR to 533 MHz DDR, a factor of 8.
I will grant you that modern processors have lots more registers to play with. This means that you don't go to heap *or* stack, and escape analysis becomes even more essential. However, registers tend to be lumped in with *stack*. This probably explains most of the speedup.
rather than memory and/or synchronization intensive. In practice with Mustang I've seen code perform over twice as fast by turning on escape analysis with the typical improvement being in the 10-30% range, and I've heard from others who've claimed as much as an order of magnitude performance improvement. Sure it's nothing compared to algorithmic improvements, but that's one of the more significant jumps I've been able to make by just tweaking the runtime.
Reference, please? While your arguments are plausible, you can't just dismiss measured evidence without providing other measured evidence.
If it is really that dramatic, I'd really like to know. -a -- [email protected] http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg
