[ https://issues.apache.org/jira/browse/OPENJPA-407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12564820#action_12564820 ]
Patrick Linskey commented on OPENJPA-407:
-----------------------------------------
Generally, the patch looks sound. I agree with Christiaan's concerns; we should
probably change the caches to be configurable data structures.
From a data structure standpoint, I would prefer something like the
following:
- JDBCStoreManager has a private, configurable Map<ClassMapping,SelectImpl>.
This might be a non-LRU CacheMap, and its configuration should be exposed
via a JDBCConfiguration setting (see the rough sketch after this list).
- The Map<SelectImpl,SQLBuffer> is moved into SelectImpl itself, obviating the
need for the second public map.
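To make the first bullet concrete, here is a minimal sketch of a size-bounded,
non-LRU map of roughly the proposed shape. It is illustrative only and not part
of the patch: the class name SelectCacheSketch, the maxEntries bound, and the
use of java.util.LinkedHashMap are assumptions for the example; an actual
implementation would presumably reuse OpenJPA's CacheMap and read its bound
from a JDBCConfiguration property.

import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of a bounded, non-LRU (insertion-ordered) cache with the
// shape of the proposed Map<ClassMapping,SelectImpl>; generic types keep the
// example self-contained without the OpenJPA classes on the classpath.
public class SelectCacheSketch<K, V> {

    private final Map<K, V> cache;

    public SelectCacheSketch(final int maxEntries) {
        // accessOrder = false keeps insertion order, so eviction is FIFO
        // rather than LRU, matching the "non-LRU CacheMap" suggestion above.
        this.cache = Collections.synchronizedMap(
            new LinkedHashMap<K, V>(16, 0.75f, false) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                    // Evict the oldest entry once the configured bound
                    // (hypothetically read from configuration) is exceeded.
                    return size() > maxEntries;
                }
            });
    }

    public V get(K key) {
        return cache.get(key);
    }

    public void put(K key, V value) {
        cache.put(key, value);
    }
}

In the real change the bound would come from a JDBCConfiguration setting rather
than a constructor argument, so deployments could tune or disable the cache
without code changes.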
We have not yet run the patch against our internal performance tests; I'd like
to hold off on that until we get the data structures finalized. (The requisite
configuration changes can be done afterwards; they shouldn't impact runtime
performance behavior.)
> Cache SQL (or closer precursors to SQL) more aggressively
> ---------------------------------------------------------
>
> Key: OPENJPA-407
> URL: https://issues.apache.org/jira/browse/OPENJPA-407
> Project: OpenJPA
> Issue Type: Improvement
> Components: jdbc, kernel, query, sql
> Affects Versions: 0.9.0, 0.9.6, 0.9.7, 1.0.0
> Reporter: Patrick Linskey
> Fix For: 1.1.0
>
> Attachments: findBy.patch, OPENJPA-407.patch
>
>
> When data is not available in the data cache, OpenJPA dynamically creates SQL
> to look up the requested data. OpenJPA should cache this SQL more aggressively
> to shorten the path from a data cache miss to the database.
> The generated SQL takes a number of factors into account, including the
> requested records, transaction status, currently-loaded data, and the current
> fetch configuration. Any caching would need to account for these factors as
> well.
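As a purely illustrative aside on the last paragraph of the description: any
cached entry would need a key that captures those factors. One possible shape
is sketched below; the field names (mappedClass, fetchConfigHash, loadedFields,
inTransaction) are assumptions for the example and do not appear in the
attached patches.

import java.util.BitSet;
import java.util.Objects;

// Hypothetical composite key for a SQL/select cache; equality must cover
// every factor that can change the generated SQL.
final class SelectCacheKey {

    private final Class<?> mappedClass;   // the requested record type
    private final int fetchConfigHash;    // hash of the active fetch configuration
    private final BitSet loadedFields;    // fields already loaded for the instance
    private final boolean inTransaction;  // transactional vs. non-transactional read

    SelectCacheKey(Class<?> mappedClass, int fetchConfigHash,
        BitSet loadedFields, boolean inTransaction) {
        this.mappedClass = mappedClass;
        this.fetchConfigHash = fetchConfigHash;
        this.loadedFields = (BitSet) loadedFields.clone();
        this.inTransaction = inTransaction;
    }

    @Override
    public boolean equals(Object other) {
        if (this == other)
            return true;
        if (!(other instanceof SelectCacheKey))
            return false;
        SelectCacheKey key = (SelectCacheKey) other;
        return Objects.equals(mappedClass, key.mappedClass)
            && fetchConfigHash == key.fetchConfigHash
            && inTransaction == key.inTransaction
            && loadedFields.equals(key.loadedFields);
    }

    @Override
    public int hashCode() {
        return Objects.hash(mappedClass, fetchConfigHash, loadedFields,
            inTransaction);
    }
}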
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.