[
https://issues.apache.org/jira/browse/HBASE-29401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Duo Zhang resolved HBASE-29401.
-------------------------------
Fix Version/s: 2.7.0
Hadoop Flags: Reviewed
Resolution: Fixed
Pushed to master, branch-3 and branch-2.
Thanks [~chaijunjie] for contributing!
> Support invalidate meta cache when tables dropped or disabled
> -------------------------------------------------------------
>
> Key: HBASE-29401
> URL: https://issues.apache.org/jira/browse/HBASE-29401
> Project: HBase
> Issue Type: Improvement
> Components: asyncclient, Client
> Affects Versions: 2.6.1
> Reporter: chaijunjie
> Assignee: chaijunjie
> Priority: Major
> Labels: metaCache, pull-request-available
> Fix For: 2.7.0, 3.0.0-beta-2
>
>
> Here is one scenario:
> We have two apps that read/write HBase tables over a long period; call them APP1 and APP2.
> APP1 creates a new HBase table every day, writes some data to it, and then never reads or
> writes it again; it is just a daily (temporary) table.
> APP2 drops old tables (older than 30 days).
> As a result, the meta cache in APP1's connection accumulates the region locations of many
> old tables. These entries are never cleared after the tables are dropped; the memory is
> only released when a new HBase connection is created.
> So when a table is dropped through one connection (app), the stale entries amount to a
> memory leak in every other app's meta cache.
> There is no way to invalidate the meta cache for tables that will never be visited again.
> I added the test below to TestMetaCache.java to reproduce it; anyone can use this UT to
> verify. I am not sure whether this scenario is expected behavior, or whether some form of
> meta-cache invalidation is already supported; if not, let me check whether we could limit
> the meta cache.
> Create a table using conn1 and locate its regions, then drop it via conn2: the meta cache
> of conn1 will keep the table's region locations.
> {code:java}
> @Test
> public void test2Apps() throws Throwable {
>   TableName tableName = TableName.valueOf("test2Apps");
>   ColumnFamilyDescriptor cf =
>     ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf")).build();
>   TableDescriptor tbn =
>     TableDescriptorBuilder.newBuilder(tableName).setColumnFamily(cf).build();
>   try (
>     ConnectionImplementation conn1 = (ConnectionImplementation)
>       ConnectionFactory.createConnection(TEST_UTIL.getConfiguration());
>     ConnectionImplementation conn2 = (ConnectionImplementation)
>       ConnectionFactory.createConnection(TEST_UTIL.getConfiguration())) {
>     try (Admin admin1 = conn1.getAdmin()) {
>       admin1.createTable(tbn);
>       conn1.getRegionLocator(tableName).getAllRegionLocations();
>       Assert.assertEquals(1, conn1.getNumberOfCachedRegionLocations(tableName));
>     }
>     try (Admin admin2 = conn2.getAdmin()) {
>       admin2.disableTable(tableName);
>       admin2.deleteTable(tableName);
>       conn2.getRegionLocator(tableName).getAllRegionLocations();
>       Assert.assertEquals(0, conn2.getNumberOfCachedRegionLocations(tableName));
>     }
>     // conn1 still caches the dropped table's region location
>     Assert.assertEquals(1, conn1.getNumberOfCachedRegionLocations(tableName));
>   }
> }
> {code}
> -------------------------------------------------------------
> There are two possible solutions:
> 1. Use a Caffeine cache instead of the ConcurrentMap in MetaCache. That would bound the
> memory use, but it may cause some performance degradation:
> https://github.com/ben-manes/caffeine/wiki/benchmarks
> 2. Use an async thread in the connection to invalidate meta cache entries for tables that
> are no longer enabled... but this still cannot bound the memory if temporary tables are
> never deleted.
> Is there any other solution?
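To make solution 1 concrete, here is a minimal stand-in sketch of a size-bounded location cache. It is not HBase's actual MetaCache (which uses a ConcurrentNavigableMap keyed by row) and not Caffeine itself; it uses an access-order LinkedHashMap purely to illustrate the eviction behavior that `Caffeine.newBuilder().maximumSize(n).build()` would provide with far better concurrency. The class and key names are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: a bounded LRU cache for region locations, so entries
// for long-dead tables are eventually evicted instead of leaking forever.
// A real implementation would use Caffeine for lock-free concurrent access.
class BoundedMetaCache<K, V> {
  private final Map<K, V> cache;

  BoundedMetaCache(int maxEntries) {
    // accessOrder=true: the least-recently-accessed entry is the eldest,
    // and removeEldestEntry evicts it once the size bound is exceeded.
    this.cache = new LinkedHashMap<K, V>(16, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
      }
    };
  }

  synchronized void put(K key, V value) { cache.put(key, value); }
  synchronized V get(K key) { return cache.get(key); }
  synchronized int size() { return cache.size(); }
}
```

With a bound like this, APP1's cache would top out at `maxEntries` region locations regardless of how many daily tables APP2 drops, at the cost of occasional re-lookups against meta for evicted entries.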
--
This message was sent by Atlassian Jira
(v8.20.10#820010)