[
https://issues.apache.org/jira/browse/FALCON-508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Venkatesh Seetharam updated FALCON-508:
---------------------------------------
Fix Version/s: (was: 0.6)
> Cache expiry is not handled in HiveCatalogService
> -------------------------------------------------
>
> Key: FALCON-508
> URL: https://issues.apache.org/jira/browse/FALCON-508
> Project: Falcon
> Issue Type: Bug
> Affects Versions: 0.5
> Reporter: Venkatesh Seetharam
> Labels: Hive
>
> Again, Don Bosco Durai reported this to me in an email:
> We are using a cached HCatClient object. His last success was on June 30th, so the
> connection might have been stale. If there is an exception, we should clear
> the cache and try to get a new one. Since the getProxiedClient() and
> isAlive() methods are different calls, you might have to provide a method to
> clear the cache or pass an additional parameter to fetch a new client connection.
> {code}
> public static synchronized HCatClient getProxiedClient(String catalogUrl,
>         String metaStorePrincipal) throws FalconException {
>     if (!CACHE.containsKey(catalogUrl)) {
> {code}
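> A minimal sketch of the suggested eviction approach (not the actual Falcon code): on a
> catalog error the caller evicts the cached entry, so the next getProxiedClient() call
> builds a fresh connection instead of reusing a stale one. CACHE as a ConcurrentHashMap
> keyed by catalog URL, and the createClient()/invalidate() helpers, are assumptions for
> illustration only.
> {code}
> import java.util.concurrent.ConcurrentHashMap;
>
> import org.apache.falcon.FalconException;
> import org.apache.hadoop.hive.conf.HiveConf;
> import org.apache.hcatalog.api.HCatClient;
> import org.apache.hcatalog.common.HCatException;
>
> public final class CatalogClientCache {
>     private static final ConcurrentHashMap<String, HCatClient> CACHE =
>             new ConcurrentHashMap<String, HCatClient>();
>
>     public static synchronized HCatClient getProxiedClient(String catalogUrl,
>             String metaStorePrincipal) throws FalconException {
>         HCatClient client = CACHE.get(catalogUrl);
>         if (client == null) {
>             client = createClient(catalogUrl, metaStorePrincipal);
>             CACHE.put(catalogUrl, client);  // always keep the latest client
>         }
>         return client;
>     }
>
>     // Called by clients when a catalog operation fails, so the stale
>     // connection is dropped and the next call re-creates it.
>     public static synchronized void invalidate(String catalogUrl) {
>         CACHE.remove(catalogUrl);
>     }
>
>     private static HCatClient createClient(String catalogUrl,
>             String metaStorePrincipal) throws FalconException {
>         try {
>             // The real HiveCatalogService also wires in the metastore URI,
>             // principal and UGI; this sketch only shows the cache handling.
>             HiveConf conf = new HiveConf();
>             return HCatClient.create(conf);
>         } catch (HCatException e) {
>             throw new FalconException("Unable to create HCatClient for " + catalogUrl, e);
>         }
>     }
> }
> {code}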
> Also, the code below seems odd for a cache. We should always be overriding
> with the latest object. This might not be an issue if hcatClient is
> idempotent.
> Line 111: CACHE.putIfAbsent(catalogUrl, hcatClient);
> I do not think the latter is an issue, but maybe we could use
> org.apache.hcatalog.common.HCatUtil#getHiveClient, which handles the cache
> appropriately.
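> To illustrate the putIfAbsent point above (plain strings stand in for HCatClient
> objects; the URL is a made-up example):
> {code}
> import java.util.concurrent.ConcurrentHashMap;
>
> public class PutIfAbsentDemo {
>     public static void main(String[] args) {
>         ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<String, String>();
>
>         cache.putIfAbsent("thrift://metastore:9083", "staleClient");
>         cache.putIfAbsent("thrift://metastore:9083", "freshClient"); // no-op: existing entry wins
>         System.out.println(cache.get("thrift://metastore:9083"));    // prints "staleClient"
>
>         cache.put("thrift://metastore:9083", "freshClient");         // put overrides with the latest object
>         System.out.println(cache.get("thrift://metastore:9083"));    // prints "freshClient"
>     }
> }
> {code}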
> Thanks Bosco.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)