> So I don't see any technical reason for removing this optimization. In my
> personal view, all little things eventually add up. So we had better
> optimize the things we can rather than wait for a specific complaint from
> users that we create too many objects in the DOM in error situations.
> 
> So unless you have a good reason why it is evil to reuse DOMError :), I
> suggest we add the reuse back.

I have several reasons, but it probably comes down to a matter of taste
more than anything else. Since I'm a newcomer, I'm happy to follow what
you say. In any case, I shouldn't have mixed this kind of change with a
bug fix, and for that I apologize. I'll be more careful in the future.

That said, here are my reasons, in case they convince people otherwise:


- Generally speaking, good programming practice says you shouldn't keep
  such a transient local variable as a field of an object. Variables and
  objects that are logically local should stay local; hoisting them into
  fields breaks abstraction. (See the first sketch after this list.)

- The fact that it was declared "protected" is even uglier. Now this
  temporary local variable is exposed not only to the rest of the
  DOMNormalizer class, but also to every class in the same package and
  to any subclass.

- I hear modern VMs do a fantastic job of GC-ing short-lived objects
  such as DOMErrorImpl.

- Speaking of performance, the local-variable version should actually be
  faster when no error is reported, which is probably the typical case
  people care about. (Not that the difference matters much, but anyway...)

- All little things eventually add up, not just in performance but also
  in complexity, which is just as important IMO.

- James Clark once told me that he doesn't do those peephole code
  optimizations; instead, he optimizes the architecture and the
  algorithm. Yet his code is consistently among the fastest. Granted, we
  are not James Clark, but isn't that the attitude we should follow?

- What I've seen in many projects (including Xerces) is this: code is
  initially written with every possible micro-optimization. As a result,
  it becomes less readable, and as different maintainers touch it, they
  sometimes fail to see the optimization. After a while, you end up with
  code that is neither particularly maintainable nor particularly
  efficient.
  
  I could be wrong, but one example can be seen in XMLDTDValidator and
  XMLElementDecl. DTDGrammar doesn't model such things as "element
  declaration" or "attribute list" as objects, presumably to avoid
  creating them. But then XMLDTDValidator wanted to access the grammar
  in a more OO fashion, so it has things like XMLElementDecl, and now
  every time the validator wants to access an element decl, all the
  information has to be copied, and one has to carry a lot of integer
  indexes around all the time. This is neither beautiful nor efficient.
  (See the second sketch after this list.)
  
  Other parsers, such as Crimson or Piccolo, have none of those micro
  optimizations, yet they are quite comparable to Xerces in terms of
  performance.
  
  A similar concern was raised for Xalan in the past:
  http://lists.xml.org/archives/xml-dev/200211/msg00782.html

- In summary, conventional wisdom is against micro-optimization. Here
  are slides from one of this year's JavaOne sessions:
  
http://servlet.java.sun.com/javaone/resources/content/sf2003/conf/sessions/pdfs/3688.pdf
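
To illustrate the first two points and the fast-path one, here is a
minimal sketch contrasting the two styles. All the names below
(DomError, ErrorHandler, check, ...) are illustrative stand-ins, not
the actual DOMNormalizer/DOMErrorImpl code:

  interface ErrorHandler {
      void handleError(DomError error);
  }

  class DomError {
      String message;
      void reset() { message = null; }
  }

  // (a) The reuse pattern under discussion: the transient value lives
  // as a protected field, so it is visible to the whole class, the
  // whole package, and any subclass; it is allocated even when no
  // error ever occurs; and every use site must remember to clear
  // stale state before filling it in.
  class NormalizerWithFieldReuse {
      protected DomError fError = new DomError();
      private final ErrorHandler handler;

      NormalizerWithFieldReuse(ErrorHandler handler) { this.handler = handler; }

      void check(boolean broken) {
          if (broken) {
              fError.reset();                 // easy to forget
              fError.message = "broken node";
              // if the handler keeps the reference, the next error
              // silently overwrites it behind the handler's back
              handler.handleError(fError);
          }
      }
  }

  // (b) The local-variable version: nothing is allocated on the
  // error-free path (the common case), the scope matches the
  // lifetime, and the object cannot alias across calls.
  class NormalizerWithLocal {
      private final ErrorHandler handler;

      NormalizerWithLocal(ErrorHandler handler) { this.handler = handler; }

      void check(boolean broken) {
          if (broken) {
              DomError error = new DomError();  // short-lived; cheap for
              error.message = "broken node";    // a generational GC
              handler.handleError(error);
          }
      }
  }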
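
And here is a simplified illustration of the fill-object/integer-index
pattern described in the XMLDTDValidator point. Again, the names
(FlatGrammar, ElementDeclHolder, ...) are made up for the sketch; this
is not the actual DTDGrammar or XMLElementDecl API:

  // The grammar stores declarations in flat parallel arrays keyed by
  // integer indexes, presumably to avoid allocating decl objects.
  class FlatGrammar {
      private final String[] names = new String[64];   // array growth
      private final int[] contentTypes = new int[64];  // omitted for brevity
      private int count;

      int addElementDecl(String name, int contentType) {
          names[count] = name;
          contentTypes[count] = contentType;
          return count++;       // callers must carry this raw index around
      }

      // The "fill" idiom: copy every field into a caller-supplied holder.
      boolean getElementDecl(int index, ElementDeclHolder toFill) {
          if (index < 0 || index >= count) return false;
          toFill.name = names[index];
          toFill.contentType = contentTypes[index];
          return true;
      }
  }

  class ElementDeclHolder {
      String name;
      int contentType;
  }

  class Validator {
      // ironically, the holder itself tends to become yet another
      // reused field, just like the DOMError case above
      private final ElementDeclHolder fTempDecl = new ElementDeclHolder();

      void validate(FlatGrammar grammar, int elementIndex) {
          // every access copies the data out and threads an int index
          // through the code, so the "optimization" costs a copy anyway
          if (grammar.getElementDecl(elementIndex, fTempDecl)) {
              System.out.println("validating <" + fTempDecl.name + ">");
          }
      }
  }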


regards,
--
Kohsuke Kawaguchi
Sun Microsystems                   [EMAIL PROTECTED]

