joshelser commented on issue #1019: HBASE-23679 Use new FileSystem objects during bulk loads
URL: https://github.com/apache/hbase/pull/1019#issuecomment-573269130
 
 
   Ugh, there are a few of these: down in HStore, HRegion, and even the WAL code (ugh), all invoked via bulk load, we do a `FileSystem.get(conf)` or `path.getFileSystem(conf)`. All of them will leak a FileSystem instance with the SBLM changes in 2.x.
   
   ```
   ctr-e141-1563959304486-133915-01-000008: 2020-01-11 01:42:31,080 WARN  [RpcServer.default.FPBQ.Fifo.handler=97,queue=7,port=16020] fs.FileSystem: Caching new filesystem: -1042984133
   ctr-e141-1563959304486-133915-01-000008: java.lang.Exception
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3365)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:227)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getServerNameFromWALDirectoryName(AbstractFSWALProvider.java:330)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks.reorderBlocks(HFileSystem.java:426)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:378)
   ctr-e141-1563959304486-133915-01-000008:     at com.sun.proxy.$Proxy20.getBlockLocations(Unknown Source)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:862)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:851)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:840)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1004)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:326)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:322)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:334)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:164)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.io.FSDataInputStreamWrapper.<init>(FSDataInputStreamWrapper.java:125)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.io.FSDataInputStreamWrapper.<init>(FSDataInputStreamWrapper.java:102)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:254)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.regionserver.HStoreFile.open(HStoreFile.java:367)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.regionserver.HStoreFile.initReader(HStoreFile.java:475)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:690)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:683)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:854)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:6057)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.regionserver.SecureBulkLoadManager$1.run(SecureBulkLoadManager.java:264)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.regionserver.SecureBulkLoadManager$1.run(SecureBulkLoadManager.java:233)
   ctr-e141-1563959304486-133915-01-000008:     at java.security.AccessController.doPrivileged(Native Method)
   ctr-e141-1563959304486-133915-01-000008:     at javax.security.auth.Subject.doAs(Subject.java:360)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1710)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.regionserver.SecureBulkLoadManager.secureBulkLoadHFiles(SecureBulkLoadManager.java:233)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.regionserver.RSRpcServices.bulkLoadHFile(RSRpcServices.java:2338)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42004)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
   ctr-e141-1563959304486-133915-01-000008:     at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
   ```
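   For what it's worth, the mechanism can be modeled without Hadoop at all: the `FileSystem` cache key includes the calling UGI, so a `FileSystem.get(conf)` made inside a per-request `doAs` always misses the cache and creates a fresh instance. A minimal stand-in in plain Java (all class and method names here are hypothetical stand-ins, not Hadoop code):

   ```java
   import java.util.HashMap;
   import java.util.Map;
   import java.util.Objects;

   // Simplified model of why each bulk load leaks a FileSystem: the cache
   // key includes the UGI, so every per-request doAs identity maps to a
   // distinct cache entry that is created, cached, and never closed.
   public class FsCacheLeakDemo {

       // Stand-in for a cache key of (scheme, authority, ugi).
       static final class Key {
           final String scheme, authority, ugi;
           Key(String scheme, String authority, String ugi) {
               this.scheme = scheme; this.authority = authority; this.ugi = ugi;
           }
           @Override public boolean equals(Object o) {
               if (!(o instanceof Key)) return false;
               Key k = (Key) o;
               return scheme.equals(k.scheme) && authority.equals(k.authority)
                   && ugi.equals(k.ugi);
           }
           @Override public int hashCode() { return Objects.hash(scheme, authority, ugi); }
       }

       static final Map<Key, Object> CACHE = new HashMap<>();

       // Stand-in for FileSystem.get(conf) as called inside ugi.doAs(...).
       static Object get(String scheme, String authority, String ugi) {
           return CACHE.computeIfAbsent(new Key(scheme, authority, ugi), k -> new Object());
       }

       public static void main(String[] args) {
           // Same cluster URI every time, but a fresh UGI per bulk-load RPC:
           for (int i = 0; i < 5; i++) {
               get("hdfs", "namenode:8020", "bulkload-caller-" + i);
           }
           // Five requests, five cached instances nobody ever closes.
           System.out.println("cached instances: " + CACHE.size()); // prints 5
       }
   }
   ```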
   
   The fix is to push down the FileSystem, or reuse one that is already created, but that gets tricky in some places. Will need to step back from all of this and see if there's a better way to do it.
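   As a rough illustration of the "push it down" shape, with stand-in types only (nothing below is actual HBase or Hadoop API): the caller holds one filesystem handle and injects it, instead of the store code re-resolving its own handle per file.

   ```java
   import java.util.ArrayList;
   import java.util.List;

   // Hedged sketch: Fs and StoreOps are hypothetical stand-ins. The point
   // is the shape, i.e. one injected handle reused for every file, rather
   // than a FileSystem.get()-style lookup inside each call.
   public class PushDownFs {

       interface Fs { String open(String path); }

       static final class StoreOps {
           private final Fs fs;                  // injected once, reused
           StoreOps(Fs fs) { this.fs = fs; }
           String bulkLoadHFile(String path) {   // no filesystem lookup in here
               return fs.open(path);
           }
       }

       static int demo() {
           List<String> opened = new ArrayList<>();
           Fs shared = p -> { opened.add(p); return "reader:" + p; };
           StoreOps store = new StoreOps(shared); // one handle for the store's lifetime
           store.bulkLoadHFile("/staging/hfile1");
           store.bulkLoadHFile("/staging/hfile2");
           return opened.size();                  // two opens, still one filesystem
       }

       public static void main(String[] args) {
           System.out.println("opens through the shared handle: " + demo());
       }
   }
   ```

   The tricky part the comment alludes to is exactly this plumbing: every call site between the RPC handler and the code that opens files has to carry the handle through.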

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
