On Jul 14, 2008, at 18:15, Andreas Delmelle wrote:

<snip />

Just quickly ran Jeremias' test-app myself. On the Apple JVM (1.5), the difference is roughly 300ms over a million iterations, but not very consistent: sometimes StringBuffer comes out slightly faster, other times it's CharBuffer that wins. I guess the backing implementations are very closely related anyway, so that's not all that surprising.
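
Since the test-app itself isn't quoted here, this is roughly the shape of comparison I mean (a sketch only; the class name, the STEPS/RUNS values and the string being built are my assumptions, not Jeremias' actual code):

import java.nio.CharBuffer;

public class BufferTest {

  // assumed values; the actual test-app may well use different ones
  private static final int STEPS = 50;
  private static final int RUNS = 1000000;

  public String runStringBuffer() {
    StringBuffer sb = new StringBuffer(1024);
    for (int i = 0; i < STEPS; i++) {
      sb.append(" myValue=").append(i);
    }
    return sb.toString();
  }

  public String runCharBuffer() {
    CharBuffer buf = CharBuffer.allocate(1024);
    for (int i = 0; i < STEPS; i++) {
      buf.put(" myValue=").put(Integer.toString(i));
    }
    buf.flip();
    return buf.toString();
  }

  public static void main(String[] args) {
    BufferTest test = new BufferTest();
    long start = System.currentTimeMillis();
    for (int i = 0; i < RUNS; i++) {
      test.runStringBuffer();
    }
    System.out.println("StringBuffer: " + (System.currentTimeMillis() - start) + "ms");
    start = System.currentTimeMillis();
    for (int i = 0; i < RUNS; i++) {
      test.runCharBuffer();
    }
    System.out.println("CharBuffer:   " + (System.currentTimeMillis() - start) + "ms");
  }
}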

It would most definitely be overkill if it were /only/ used for simple String concatenation. In the context of catching SAX characters() events, though, I think the penalty is bound to be limited (maybe even the contrary: see below). That is, I don't think I've ever seen a parser that reports characters one at a time (which would make the current CharBuffer-based implementation very slow). Most SAX parsers report the characters in reasonably large chunks (as large as possible).
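
To illustrate the pattern I have in mind (only a sketch, not FOP's actual handler; the buffer-growth strategy is my own assumption):

import java.nio.CharBuffer;
import org.xml.sax.helpers.DefaultHandler;

// Sketch: accumulate the character chunks the parser reports
// into a CharBuffer, growing it when a chunk doesn't fit.
public class TextCollector extends DefaultHandler {

  private CharBuffer charBuffer = CharBuffer.allocate(256);

  public void characters(char[] ch, int start, int length) {
    if (charBuffer.remaining() < length) {
      // grow: copy what we have into a larger buffer
      CharBuffer larger = CharBuffer.allocate(
          Math.max(charBuffer.capacity() * 2,
                   charBuffer.position() + length));
      charBuffer.flip();
      larger.put(charBuffer);
      charBuffer = larger;
    }
    // the parser hands us a whole chunk; copy only the reported range
    charBuffer.put(ch, start, length);
  }
}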

Just for fun, make it:

...
private static final char[] prefix = {' ', 'm', 'y', 'V', 'a', 'l', 'u', 'e', '='};
...
public String runCharBuffer() {
  CharBuffer buf = CharBuffer.allocate(1024);  // needs to be large enough for STEPS appends
  for (int i = 0; i < STEPS; i++) {
    buf.put(prefix).put(Integer.toString(i).toCharArray());
  }
  // flip() rather than rewind(), so toString() only covers what was actually written
  buf.flip();
  return buf.toString();
}
...

On my end, this runs noticeably faster than when passing Strings (almost 20%). When switching StringBuffer.append() to use char[] parameters, it runs a tiny bit slower than with Strings... No idea if this also holds for the Sun, IBM or Apache implementations.
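
To be explicit, the StringBuffer variant with char[] parameters I'm referring to would look roughly like this (same assumed prefix and STEPS as above):

public String runStringBufferChars() {
  StringBuffer sb = new StringBuffer(1024);
  for (int i = 0; i < STEPS; i++) {
    // append(char[]) instead of append(String)
    sb.append(prefix).append(Integer.toString(i).toCharArray());
  }
  return sb.toString();
}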

In terms of flexibility, the CharBuffer API (optionally) offers the possibility of getting a reference to the backing array. With a StringBuffer, we'd have to do something like sb.toString().toCharArray(), and if I'm correct, that always yields an independent copy of the StringBuffer's array, never the array itself. (Note that this obviously also has its drawbacks; sometimes you just /need/ an independent copy...)
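
Concretely, the difference I mean (sketch):

CharBuffer buf = CharBuffer.allocate(1024);
buf.put("some content");

// CharBuffer: direct access to the backing array
// (hasArray() is true for heap-allocated, non-read-only buffers)
if (buf.hasArray()) {
  char[] backing = buf.array();  // the actual array; changes to it show up in the buffer
}

StringBuffer sb = new StringBuffer("some content");
char[] copy = sb.toString().toCharArray();  // always an independent copy
// StringBuffer.getChars() also only copies, into an array we supply:
char[] dest = new char[sb.length()];
sb.getChars(0, sb.length(), dest, 0);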


Cheers

Andreas
