[ https://issues.apache.org/jira/browse/HDFS-13212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16389132#comment-16389132 ]

Weiwei Wu commented on HDFS-13212:
----------------------------------

Step 1: Visit a path (PATH-A) that is not covered by any mount point

In MountTableResolver.lookupLocation, if an input source path (PATH-A) does not 
match any mount point, the method returns a default location (DEFAULT-LOCATION) 
whose sourcePath is null (code line 393 below), and that location is added to 
the locationCache.
{code:java}
382    public PathLocation lookupLocation(final String path) {
383      PathLocation ret = null;
384      MountTable entry = findDeepest(path);
385      if (entry != null) {
386        ret = buildLocation(path, entry);
387      } else {
388        // Not found, use default location
389        RemoteLocation remoteLocation =
390            new RemoteLocation(defaultNameService, path);
391        List<RemoteLocation> locations =
392            Collections.singletonList(remoteLocation);
393        ret = new PathLocation(null, locations); // a location with null sourcePath
394      }
395      return ret;
396    }
{code}
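One possible way to avoid caching an entry with a null sourcePath in the first place (a minimal sketch only, not necessarily what the attached patches do) is to record the requested path itself as the source of the default location, so later invalidation has something to match:
{code:java}
// Sketch only: use the requested path as the sourcePath of the default
// location instead of null, so invalidateLocationCache can match it later.
RemoteLocation remoteLocation =
    new RemoteLocation(defaultNameService, path);
List<RemoteLocation> locations =
    Collections.singletonList(remoteLocation);
ret = new PathLocation(path, locations); // non-null sourcePath
{code}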
 

Step 2: Add a mount point for PATH-A

When a mount point for PATH-A is added, the router needs to invalidate the 
previously cached default location; otherwise the newly added mount point will 
never take effect, because the locationCache will always return the 
DEFAULT-LOCATION.

invalidateLocationCache walks the whole locationCache looking for entries whose 
sourcePath matches, so the null sourcePath cached in step 1 triggers a 
NullPointerException at code line 241 below.

 
{code:java}
227    private void invalidateLocationCache(final String path) {
228      LOG.debug("Invalidating {} from {}", path, locationCache);
229      if (locationCache.size() == 0) {
230        return;
231      }
232
233      // Go through the entries and remove the ones from the path to invalidate
234      ConcurrentMap<String, PathLocation> map = locationCache.asMap();
235      Set<Entry<String, PathLocation>> entries = map.entrySet();
236      Iterator<Entry<String, PathLocation>> it = entries.iterator();
237      while (it.hasNext()) {
238        Entry<String, PathLocation> entry = it.next();
239        PathLocation loc = entry.getValue();
240        String src = loc.getSourcePath();
241        if (src.startsWith(path)) {
242          LOG.debug("Removing {}", src);
243          it.remove();
244        }
245      }
246
247      LOG.debug("Location cache after invalidation: {}", locationCache);
248    }
{code}
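A minimal null-safe variant of the loop body (illustration only, assuming we simply drop cached default locations that have no source path):
{code:java}
// Sketch only: guard against the null sourcePath cached by the default
// branch of lookupLocation before calling startsWith().
PathLocation loc = entry.getValue();
String src = loc.getSourcePath();
if (src == null || src.startsWith(path)) {
  LOG.debug("Removing {}", entry.getKey());
  it.remove();
}
{code}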
 

 

This case is covered by the test code below:
{code:java}
+    // Add the default location to location cache
+    mountTable.getDestinationForPath("/testlocationcache");
+
+    // Add the entry again but mount to another ns
+    Map<String, String> map3 = getMountTableEntry("3", "/testlocationcache");
+    MountTable entry3 = MountTable.newInstance("/testlocationcache", map3);
+    entries.add(entry3);
+    mountTable.refreshEntries(entries);
+
+    // Ensure location cache update correctly
+    assertEquals("3->/testlocationcache/",
+            mountTable.getDestinationForPath("/testlocationcache").toString());
{code}
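As the issue description notes, refreshEntries should also invalidate the cache for paths whose mount entries were added. A rough sketch of that idea (the newEntries and oldEntries names here are illustrative, not the actual variables in refreshEntries):
{code:java}
// Sketch only: when the mount table is refreshed, drop cached locations for
// every newly added source path so a stale DEFAULT-LOCATION cannot shadow it.
// Assumes oldEntries maps source path -> existing MountTable entry.
for (MountTable entry : newEntries) {
  String src = entry.getSourcePath();
  if (!oldEntries.containsKey(src)) {
    invalidateLocationCache(src);
  }
}
{code}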
 

 

> RBF: Fix router location cache issue
> ------------------------------------
>
>                 Key: HDFS-13212
>                 URL: https://issues.apache.org/jira/browse/HDFS-13212
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: federation, hdfs
>            Reporter: Weiwei Wu
>            Priority: Major
>         Attachments: HDFS-13212-001.patch, HDFS-13212-002.patch, 
> HDFS-13212-003.patch, HDFS-13212-004.patch, HDFS-13212-005.patch
>
>
> The MountTableResolver refreshEntries function has a bug when adding a new 
> mount table entry that already has a cached location. The old cached location 
> will never be invalidated until the mount point changes again.
> The location cache needs to be invalidated when mount table entries are added.


