> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Bordet, Simone
> Sent: Sunday, October 08, 2000 9:26 AM
> To: jBoss Development Mailing List (E-mail)
> Subject: [jBoss-Dev] CacheKey performances
>
>
> Hey,
>
> did a little test (see code below) to check how expensive the creation
> of a CacheKey is, implementing 2 different algorithms
> (MarshalledObject and reflection).
> Here are the results (running on a laptop: PII 300, 128 MB, Win2K, JDK 1.3)
> for the creation of 10000 objects:
>
> PK: 4.7-4.8 ms (for comparison)
> MarshalledObject: 4245-4285 ms
> Key (reflection): 2785-2825 ms
>
> Well, not the fastest results IMHO, for both MO and reflection.
> Given that a CacheKey is created for each bean creation and for each PK
> returned by a finder method, a finder that returns a Collection of 10000
> beans will take 4.2 (or 2.8) seconds longer than necessary. Is that
> acceptable in your opinion?
Ok, while this test is valid, bear in mind that we should really work from
a "distribution" of usage:
1- assume 90% usage with "objects" already created
2- assume 10% usage with finders
That is the "average" load we will see, and that is the distribution we are
interested in. The 90% case is fast since there is no calculation involved.
This is at the heart of "let's not design by exception"... (I consider a
10000-key lookup the exception.)
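As a back-of-envelope check of that argument (the split and the unit cost are assumptions; only the ~4265 ms / 10000 keys figure comes from the benchmark below), the expected per-operation overhead under a 90/10 distribution can be computed like this:

```java
public class CacheKeyCost {
    // Expected extra cost per operation: lookups on already-created keys
    // add ~0 creation cost, finder-returned PKs each pay one key creation.
    public static double expectedMillis(double pLookup, double pFinder, double keyCostMs) {
        return pLookup * 0.0 + pFinder * keyCostMs;
    }

    public static void main(String[] args) {
        // ~4265 ms for 10000 keys in the MarshalledObject benchmark below
        double perKeyMs = 4265.0 / 10000;
        System.out.println(expectedMillis(0.90, 0.10, perKeyMs) + " ms per op");
    }
}
```

Under these assumed numbers the average operation pays only a small fraction of the full key-creation cost, which is the point of weighting by the distribution rather than by the worst case.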
> Marc, I remember you talking about a sequential-like CacheKey, is that
> correct? What is the idea?
Yes, there is a sequential idea, one I had to remember on a drunken
Sunday afternoon :(...
So... we increment ++ like the stateful stuff, and then the key is to have a
map from MO to ++, and of course we keep a ++ to object map.
Normal operation works on ++ directly, with NO POSSIBILITY of equal hashes
(incremented, again look at the stateful stuff).
If someone comes in with a ++ that has been removed (say by passivation), we
get a ++ miss and then we look at the MO (which will hash and equal
correctly); in case of a miss there too, we can safely go on with setting ++
to object in the new cache.
If we get a hit, it means someone under a different ++ is using that object;
that is where the MO to ++ map is interesting, and we retrieve the ++ to use.
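A minimal sketch of that scheme, assuming two plain maps (class and method names here are mine for illustration, not jBoss code, and modern collections stand in for whatever the cache would actually use):

```java
import java.util.HashMap;
import java.util.Map;

public class SequentialCache {
    private long next = 0;                                      // the "++" counter
    private final Map<Object, Long> moToId = new HashMap<>();   // MO -> ++
    private final Map<Long, Object> idToBean = new HashMap<>(); // ++ -> object

    // Cache a bean under a fresh sequential id; ids never collide,
    // so normal operation needs no MO hashing or equals at all.
    public long put(Object mo, Object bean) {
        long id = next++;
        moToId.put(mo, id);
        idToBean.put(id, bean);
        return id;
    }

    // Fast path: direct ++ lookup. On a ++ miss (e.g. the id was removed
    // by passivation) fall back to the MarshalledObject, which hashes and
    // equals correctly; an MO hit means someone already holds the bean
    // under a different ++, so reuse that entry. A miss on both maps means
    // it is safe to re-cache the bean under a new ++.
    public Object getOrRevive(long id, Object mo, Object bean) {
        Object hit = idToBean.get(id);
        if (hit != null) return hit;
        Long current = moToId.get(mo);
        if (current != null) return idToBean.get(current);
        put(mo, bean);
        return bean;
    }
}
```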
The same "what is the distribution" calculation should come in handy:
1- imagine 95% of people don't passivate their handle for long times; then
our usage will be kosher (no MO to ++ lookup)
2- 5% do, and then we have 1 map miss, 1 map lookup (MO), 1 map lookup (++).
Last time we thought about it with Rickard, this was foolproof, and the
distribution calculation is to compare with the pure MO case where we do an
MO.equals on every cache lookup (btw, the second option is the Jonathan
Weedon solution). If that operation is faster 100% of the time, compared to
the 5% where we have cache misses, then this is worth it.
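A hedged sketch of that comparison with made-up unit costs (the real constants would have to come from a benchmark like the one below; here one MO.equals is arbitrarily taken as 1.0 and one int-keyed map probe as 0.01):

```java
public class SchemeCompare {
    // Pure MO scheme: every single cache lookup pays one MO.equals.
    public static double pureMo(double moEquals) {
        return moEquals;
    }

    // Sequential scheme: hit path is one ++ probe; the rare miss path
    // pays the ++ miss, one MO lookup, and one more ++ lookup.
    public static double sequential(double moEquals, double probe, double pMiss) {
        return (1 - pMiss) * probe + pMiss * (probe + moEquals + probe);
    }

    public static void main(String[] args) {
        System.out.println("pure MO:    " + pureMo(1.0));
        System.out.println("sequential: " + sequential(1.0, 0.01, 0.05));
    }
}
```

With these assumed costs the sequential scheme wins comfortably even at a 5% miss rate, which is the "faster 100% of the time" condition stated above.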
marc
>
> Simon
>
> import java.lang.reflect.*;
>
> public class CacheKeyTest
> {
> public static void main(String[] args)
> {
> int n = 10000;
> PK pk = new PK(10, 100L, true, "SIMON", 1.0D, '1', 0.1F);
> long start = System.currentTimeMillis();
> for (int j = 0; j < n; ++j)
> {
> try
> {
> java.rmi.MarshalledObject mo = new java.rmi.MarshalledObject(pk);
> // int hash = mo.hashCode();
> }
> catch (java.io.IOException x) {}
> }
> long end = System.currentTimeMillis();
> System.out.println(n + " MO creation took: " + (end-start));
>
> int pkn = n * 1000;
> start = System.currentTimeMillis();
> for (int j = 0; j < pkn; ++j)
> {
> PK p = new PK(10, 100L, true, "SIMON", 1.0D, '1', 0.1F);
> }
> end = System.currentTimeMillis();
> System.out.println(pkn + " PK creation took: " + (end-start));
>
> // System.out.println(new Key(pk));
> int kn = n;
> start = System.currentTimeMillis();
> for (int j = 0; j < kn; ++j)
> {
> Key k = new Key(pk);
> }
> end = System.currentTimeMillis();
> System.out.println(kn + " Key creation took: " + (end-start));
>
> }
> }
>
> class PK
> {
> public int m_i;
> public long m_l;
> public boolean m_b;
> public String m_s;
> public double m_d;
> public char m_c;
> public float m_f;
>
> PK(int i, long l, boolean b, String s, double d, char c, float f)
> {
> m_i = i;
> m_l = l;
> m_b = b;
> m_s = s;
> m_d = d;
> m_c = c;
> m_f = f;
> }
> }
>
> class Key
> {
> private int m_hash;
> private Object m_id;
> private StringBuffer m_buffer;
> private static String m_separator = "|";
>
> Key(Object id)
> {
> m_id = id;
> Class cls = id.getClass();
> if (cls == Object.class || isPrimitiveWrapperOrString(cls))
> {
> m_hash = id.hashCode();
> }
> else
> {
> m_buffer = new StringBuffer();
> m_buffer.append(cls.getName());
> computeHash(cls);
> }
> }
> public int hashCode() {return m_hash;}
> public boolean equals(Object o)
> {
> if (this == o) {return true;}
> if (o instanceof Key)
> {
> Key other = (Key)o;
> if (m_buffer != null && other.m_buffer != null)
> {
> return m_buffer.toString().equals(other.m_buffer.toString());
> }
> return m_id.equals(other.m_id);
> }
> return false;
> }
> public String toString()
> {
> return (m_buffer == null ? super.toString() : m_buffer.toString())
> + "#" + m_hash + "|" + m_id.toString();
> }
> private void computeHash(Class cls)
> {
> if (cls == Object.class) {return;}
> Field[] fields = cls.getFields();
> int l = fields.length;
> for (int i = 0; i < l; ++i)
> {
> Field f = fields[i];
> Class c = f.getType();
> if (c.isPrimitive() || isPrimitiveWrapperOrString(c))
> {
> m_buffer.append(m_separator);
> try {m_buffer.append(f.get(m_id));}
> catch (IllegalAccessException ignored) {}
> }
> // N.B. this recursion descends into the field's class but still reads
> // fields from m_id; fine for this PK, whose fields are all primitive/String.
> else {computeHash(c);}
> }
> m_hash = m_buffer.toString().hashCode();
> }
> private boolean isPrimitiveWrapperOrString(Class c)
> {
> // Ordered by most common
> if (c == String.class ||
> c == Integer.class ||
> c == Boolean.class ||
> c == Long.class ||
> c == Double.class ||
> c == Float.class ||
> c == Character.class ||
> c == Byte.class ||
> c == Short.class) {return true;}
> return false;
> }
> }
>