I'm seeing some interesting behavior from Hibernate's second-level cache when 
using native queries in my EJB3/JPA app, and I was wondering if someone can 
verify that "this is the way it is supposed to work" and, more importantly, 
tell me how I can work around it :)

I have a multi-threaded system doing concurrent database access. In one of 
those threads I'm inside a transaction (call it T1), and the database access 
involves executing a native query (a native INSERT, to be exact) via 
entityManager.createNativeQuery(...).executeUpdate().
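
For context, here's a minimal sketch of what T1 does (the class, table, and 
column names are placeholders of mine, not the real statement; in the app this 
runs inside a container-managed transaction):

    import javax.persistence.EntityManager;

    public class T1Work {
        // Placeholder sketch: the real app issues a different INSERT, but the
        // shape is the same - a native statement run through the EntityManager.
        public void doInsert(EntityManager entityManager) {
            entityManager.createNativeQuery(
                    "INSERT INTO some_table (id, name) VALUES (1, 'example')")
                .executeUpdate(); // this is the call that fails after ~15s under load
        }
    }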

At the same time, in another set of threads, I'm hammering the database with 
other queries - nothing fancy, and no native queries.

Those other threads end up slowing the whole system down, including the thread 
running T1 - so much so that more than 15 seconds elapse between the start of 
T1 and the executeUpdate of that native query.

The native query then bombs out with a failure; here's the relevant part of 
the log, stripped of everything unimportant:


  18:52:42 INFO  ...enter T1...
  ....
  18:52:57 INFO  ...Caused by: javax.persistence.PersistenceException:
      org.hibernate.cache.CacheException: org.jboss.cache.lock.TimeoutException:
      failure acquiring lock:
      fqn=/on_ear,jonpu/org/jboss/on/domain/resource/ResourceType,

Look at the timestamps: 18:52:42 plus 15 seconds is 18:52:57 - we are hitting 
a cache timeout before we can complete the native query. The cache timeout is 
15s, based on an exception message I see later in the logs: "Caused by: 
org.jboss.cache.lock.TimeoutException: write lock for 
/on_ear,jonpu/org/jboss/on/domain/resource/ResourceType could not be acquired 
after 15000 ms"

The odd thing is that this native query does not directly involve the entity 
referred to in the error message (ResourceType). So I asked myself, "why is 
Hibernate trying to get a write lock on this entity in its second-level 
cache?"  The ResourceType entity is annotated with 
@org.hibernate.annotations.Cache(usage = CacheConcurrencyStrategy.TRANSACTIONAL), 
and we are using JBossCache as the second-level cache provider.
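
For reference, the caching setup on that entity looks roughly like this (the 
field details are omitted/made up; only the annotations come from the real 
class):

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import org.hibernate.annotations.Cache;
    import org.hibernate.annotations.CacheConcurrencyStrategy;

    @Entity
    @Cache(usage = CacheConcurrencyStrategy.TRANSACTIONAL) // backed by JBossCache
    public class ResourceType {
        @Id
        private int id;
        // ... other fields omitted ...
    }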

Looking at the stack trace of the exception, I then saw this:


  at org.jboss.ejb3.entity.JBCCache.clear(JBCCache.java:207)
  at org.hibernate.cache.TransactionalCache.clear(TransactionalCache.java:124)
  at org.hibernate.impl.SessionFactoryImpl.evictEntity(SessionFactoryImpl.java:832)
  at org.hibernate.action.BulkOperationCleanupAction.evictEntityRegions(BulkOperationCleanupAction.java:137)
  at org.hibernate.action.BulkOperationCleanupAction.init(BulkOperationCleanupAction.java:100)
  at org.hibernate.engine.query.NativeSQLQueryPlan.coordinateSharedCacheCleanup(NativeSQLQueryPlan.java:134)
  at org.hibernate.engine.query.NativeSQLQueryPlan.performExecuteUpdate(NativeSQLQueryPlan.java:144)

It looks like Hibernate performs a shared-cache cleanup as part of executing 
the bulk/native statement (BulkOperationCleanupAction), and since it doesn't 
know which tables the native query actually touches, it evicts entire entity 
regions - invalidating far more than the query affects.

What ways are there to turn off or adjust this behavior? I don't want to just 
raise the 15s lock timeout, since I can't guarantee what would be "long 
enough" versus what should truly be considered a timeout.
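
One idea I've been wondering about (so far untested, and assuming I can get at 
the underlying Hibernate Session): Hibernate's SQLQuery lets you declare which 
query spaces (tables) a native statement touches via addSynchronizedQuerySpace() 
or addSynchronizedEntityClass(), which, as far as I can tell, should let that 
cleanup action evict only the matching regions instead of every cached entity. 
Something like:

    import org.hibernate.SQLQuery;
    import org.hibernate.Session;

    public class ScopedNativeInsert {
        // Untested sketch: declare the query space for the native INSERT so
        // Hibernate's bulk-operation cleanup only evicts regions tied to that
        // table, rather than clearing every second-level cache region (which is
        // what appears to trigger the write lock on ResourceType).
        public void doInsert(Session session) {
            SQLQuery query = session.createSQLQuery(
                    "INSERT INTO some_table (id, name) VALUES (1, 'example')"); // placeholder SQL
            query.addSynchronizedQuerySpace("some_table"); // placeholder table name
            query.executeUpdate();
        }
    }

If someone can confirm that this actually scopes the eviction (or point out a 
cleaner switch to disable the cleanup), that would be great.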

View the original post : 
http://www.jboss.com/index.html?module=bb&op=viewtopic&p=4118709#4118709
