Andrew Lentvorski wrote:

> Christopher Smith wrote:
>
>> Destructors can, of course, throw exceptions. You just need to
>> understand the implications of doing so. In particular, in the event
>> that std::uncaught_exception() returns true (or would return true if you
>> invoked it), the runtime's terminate handler is going to get called.
>> This, or the other obvious alternative of ignoring errors when freeing
>> resources (many resources simply can't fail to be freed anyway), is
>> actually the desired outcome in most cases. So either plowing ahead or
>> not throwing an exception works out pretty well most of the time. The
>> most common bug vis-a-vis freeing resources is failing to try to free
>> them in the first place, not failing to handle the case where you
>> couldn't free them.
>
> Until you stuff a Socket into a container and the program goes *boom*
> because it threw an Exception in the destructor.  Oops.  What about
> all those other nice Sockets that were just fine?

I'm not entirely clear on what was going on in this context. It sounds
like you failed to define a policy that reflected the behavior you
wanted, and instead chose to implement it manually. If you had to work
with a Socket class that throws in its destructor (not a wise design
choice), but you want to ensure the container finishes destroying all of
its elements, why not wrap the Socket class with a smart pointer or a
"behaves like" class that provides the behavior you want, or define the
container so that it provides this behavior itself? Seems like a lot
less work than rewriting 10k of code.

> The worst part is that this was a normal C++ bug, things *almost*
> worked, but would occasionally fall into a big smoking heap when the
> network had a hiccup.

It's hard to know for sure without more context, but it definitely
sounds like stuff I've done before that does indeed work (can you
explain why none of the approaches I have suggested would work?). This
may be a case where you simply weren't familiar enough with C++ to
figure out how to get it to do what you want (a common problem with C++).

> Been there.  Done that.  Rewrote 10,000 lines of somebody else's crap
> code to fix it by moving the Socket closing code out of the destructor
> and into the inline code.

I'd be curious as to what that inline code looked like.

> Exceptions in destructors are evil.

I'll give you that, particularly with sockets. Honestly the mistake was
probably using that socket class.

>> Anyway, the notion that it's no better than Java is just silly. Sure,
>> trying to recover from a failure to free a resource with anything much
>> more sophisticated than abort() is tricky work regardless of your
>> programming language, but Java doesn't even provide a mechanism for
>> encapsulating the logic of when to free up a resource, and the nastiness
>> of Java code for the simple case of "just try to free up resources as
>> soon as I no longer need them" is just painful to code up.
>
> I maintain that it is the same.  However, this is just devolving into:
> "Is not!  Is too!"  I'm not going to convince you; I'll have to let
> painful experience do that for me.  ;)

Your statement that it is the same doesn't make any sense to me. Java
does successfully have policy-based logic (albeit fairly inflexible) for
managing memory and its internal monitors (you are unfortunately on your
own for any other synchronization primitives you might use). Your
assertion that this is no different from explicitly managing the
resources would seem to undermine the notion that these are two of the
key benefits of using Java.

I can assure you that my statements are not the musings of a novice, but
someone who has been working with these tools for quite some time.

>> If you'd read the books you'd know that a) they are short for technical
>> books and b) the title is amusing but misleading. In fact the books are
>> collections of the Guru of the Week series. They typically have over a
>> half dozen major topics, only one of which might be related to
>> exceptions.
>
> And what they lack in pages they make up for in density.  Even the C++
> committee had trouble with those.

The C++ committee had difficulty *solving* the problems posed by the GotW
series, and not just the ones involving exceptions. Many of them
(including most of the ones involving exceptions) involved templates
which were an entirely new way of programming for most of the committee
members. The *solutions* (once someone had spelled them out) weren't
difficult to understand. Lots of the Java designers had a hard time
solving the problems addressed by java.util.concurrent (as those of us
who faithfully filed bug reports can attest to), but once Doug Lea
presented the solutions they and everyone else found them very easy to
employ.

Your claim that Herb Sutter has written "3 *entire* books on the problem
of exceptions" (emphasis yours) is grossly inaccurate, and just makes
your suggestion that I read the first book seem kind of silly.

I do agree that like most good technical books, the shorter ones tend to
be of the most value.

>> Yes, and that's all it is: a safety net, and a poor one at that. Unlike
>> with memory management, when you run out of sockets, there is no
>> mechanism for implementing a policy to free up unused ones. You are
>> basically left with having to build a framework which wraps sockets,
>> tracks open ones with weak references, and then tries to figure out
>> which ones it can/should close down in the event that you can't
>> instantiate any more. I've been burned by this one, with specific
>> reference to sockets, more often than I care to think about.
>
> Given that you have to manually delete these sockets in C++, what's
> different about having to manually close these sockets in Java?

Well, if I accept your premise that I have to manually delete the
sockets in C++, what's different is that Java provides finally blocks
which provide a straightforward way to ensure that an attempt is made
to close the sockets in the face of an exception being thrown, whereas
C++ doesn't have any such mechanism.

> So, what you want is the ability to close() a socket when it
> immediately falls out of scope.  And you are willing to give up
> putting sockets into containers for that capability.

Oh it's much worse than that. Based on your premise, despite all
evidence to the contrary, if you didn't disable exceptions in your C++
runtime, you have to write tons more code to avoid memory leaks in your
application.

>> I think you are overstating Meyer's position.
>
> I don't think so.  But read for yourself:
> http://www.awprofessional.com/content/images/0201749629/items/item2-2.pdf
>
> Meyers takes a very harsh stance about this.

He takes a harsh stance on folks who try to design around being able to
switch collection types. However, he has no problem with designing your
code and then deriving the guarantees you need from that, which was what
I was talking about. Indeed, this is how most of <algorithm> works, and
he's a big fan of using it.

A simple example:

template <typename InputIter, typename OutputIter>
void db_fetch_by_keys (InputIter begin, InputIter end, OutputIter out);

All this does is take a range of keys and pump the objects from the
associated records into whatever "out" is pointing at. Does it really
make sense to define it this way instead:

template <typename K, typename V>
void db_fetch_by_keys (const std::vector<K>& ids, std::list<V>& values);

Of course not, and that's exactly why std::copy has a similar interface.

>> Even "huge" languages like Common Lisp, Ada and C++ can't even come
>> close to addressing all of them. That being said, the more common the
>> usage of a design pattern, perhaps the more important it is that a
>> language designer should be a trivial way to implement said pattern.
>
There's a typo there that may be causing you some confusion with
regard to what I'm saying: "should be a trivial way" should read
"should provide a trivial way".

> Huh?  C++ is a *HUGE* pattern sink.  Practically every Design Pattern
> gets used in C++ because everything is static.

Okay, I think we have a big semantic gap here.

1) What is a pattern sink?
2) What the heck does "Practically every Design Pattern gets used in..."
mean? There's a near infinite number of design patterns.
3) I think you have a curious definition for when a Design Pattern gets
"used" in a language. Just like the name suggests, design patterns are
about design, not about languages. The design patterns exist at that
level independent of the language. Heck, some design patterns typically
aren't implemented in a programming language at all. Maybe a language
makes use of a pattern trivial while another makes it terribly painful,
but regardless which language you choose, you're using the design pattern.
4) "everything is static". C++ has a static type system; is that what
you meant? Saying everything is static seems like a bit of an overstatement.

> I find that Common Lisp is generally the least pattern sensitive.

Oh yeah, and what is "pattern sensitive"?

BTW, you can find a handy implementation of the various GoF patterns in
LISP here:

http://roots.iai.uni-bonn.de/patchwork/Wiki.jsp?page=PatternsInLisp

>   I think that's because you are not bonded to the idea of static
> typing, Class and single dispatch.  Structural patterns kind of go
> away in that environment leaving only the architectural patterns.

Okay, another one of those semantic gaps. Where I come from a structural
pattern is just one type of design pattern, as is an architectural
pattern. They are two of about half a dozen different kinds of design
patterns. So imagining that the disappearance of structural patterns
would leave only architectural patterns seems pretty bizarre.

One of my favourite quotes about Norvig's notion that patterns don't
exist in Lisp: "If there are no such things as patterns in the Lisp
world, then there are no statements a master programmer can give as
advice for junior programmers, and hence there is no difference at all
between a beginning programmer and a master programmer in the Lisp
world." --Richard P. Gabriel.

>   Python works that way for me, as well.

Hey look! Python design patterns!

http://www.thinkware.se/cgi-bin/thinki.cgi/PythonPatterns

>> Secondly, they were working with hardware much older than today's
>> stuff, which tends to be far more sensitive to memory issues.
>
> Disagree.  Modern processors are *vastly* more sensitive to memory issues.

My language was poorly chosen. My intended meaning was in agreement with
you.

>> rather than memory and/or synchronization intensive. In practice with
>> Mustang I've seen code perform over twice as fast by turning on escape
>> analysis with the typical improvement being in the 10-30% range, and
>> I've heard from others who've claimed as much as an order of magnitude
>> performance improvement. Sure it's nothing compared to algorithmic
>> improvements, but that's one of the more significant jumps I've been
>> able to make by just tweaking the runtime.
>
> Reference, please?  While your arguments are plausible, you can't just
> dismiss measured evidence without providing other measured evidence.
>
> If it is really that dramatic, I'd really like to know.

Try doing a function that repeatedly generates large multidimensional
arrays of floats or ints, and then computes dot products and cross
products. FFTs tend to really do a number on it too. If you do it right
without escape analysis you should get periodic thrashing of old space
memory, along with some nice thrashing of cache lines. Graph theory
stuff tends to show some of the biggest increases for reasons that are
pretty obvious once you think about it.

--Chris

-- 
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg