[GitHub] [hbase] joshelser edited a comment on issue #1019: HBASE-23679 Use new FileSystem objects during bulk loads

2020-01-10, via GitBox
URL: https://github.com/apache/hbase/pull/1019#issuecomment-573258755
 
 
   ```
   2020-01-11 00:15:00,797 WARN  [RpcServer.default.FPBQ.Fifo.handler=99,queue=9,port=16020] fs.FileSystem: Caching new filesystem: -279427062
   java.lang.Exception
   	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3365)
   	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
   	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
   	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
   	at org.apache.hadoop.hbase.regionserver.HStore.assertBulkLoadHFileOk(HStore.java:761)
   	at org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:5958)
   	at org.apache.hadoop.hbase.regionserver.SecureBulkLoadManager$1.run(SecureBulkLoadManager.java:264)
   	at org.apache.hadoop.hbase.regionserver.SecureBulkLoadManager$1.run(SecureBulkLoadManager.java:233)
   	at java.security.AccessController.doPrivileged(Native Method)
   	at javax.security.auth.Subject.doAs(Subject.java:360)
   	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1710)
   	at org.apache.hadoop.hbase.regionserver.SecureBulkLoadManager.secureBulkLoadHFiles(SecureBulkLoadManager.java:233)
   	at org.apache.hadoop.hbase.regionserver.RSRpcServices.bulkLoadHFile(RSRpcServices.java:2338)
   	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42004)
   	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
   	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
   	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
   	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
   ```
   
   Looks like this isn't quite sufficient. Another leak (albeit a much slower one) is coming from here. We need to do more to push down the DFS instance we made and use it until we move the files into their final location.
   
   Added some debug logging to FileSystem.java to produce the trace above. Testing is just done via IntegrationTestBulkLoad with a high number of loops but a small chain length.
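For context on why each request caches a brand-new FileSystem: Hadoop's `FileSystem.Cache` keys entries by scheme, authority, and the calling UserGroupInformation, and the secure bulk load path runs each load inside a `doAs` with a fresh UGI, so a lookup never hits an existing entry. A minimal, self-contained sketch of that cache-key behavior (the class names `FsCacheLeakDemo`, `FsKey`, and `FakeUgi` are illustrative stand-ins, not Hadoop's actual classes):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical, simplified model of FileSystem.Cache: the key includes the
// caller's UGI, compared by identity, mirroring how UGI equality effectively
// comes down to Subject identity in Hadoop.
public class FsCacheLeakDemo {
    // Stand-in for UserGroupInformation; two instances are never "equal".
    public static final class FakeUgi {
        public final String user;
        public FakeUgi(String u) { user = u; }
    }

    public static final class FsKey {
        final String scheme, authority; final FakeUgi ugi;
        FsKey(String s, String a, FakeUgi u) { scheme = s; authority = a; ugi = u; }
        @Override public boolean equals(Object o) {
            if (!(o instanceof FsKey)) return false;
            FsKey k = (FsKey) o;
            // Identity comparison on ugi: fresh doAs UGIs never match a cached key.
            return scheme.equals(k.scheme) && authority.equals(k.authority) && ugi == k.ugi;
        }
        @Override public int hashCode() {
            return Objects.hash(scheme, authority, System.identityHashCode(ugi));
        }
    }

    public static final Map<FsKey, Object> CACHE = new HashMap<>();

    // Models FileSystem.get(conf): return a cached instance or cache a new one.
    public static Object get(String scheme, String authority, FakeUgi ugi) {
        return CACHE.computeIfAbsent(new FsKey(scheme, authority, ugi), k -> new Object());
    }

    public static void main(String[] args) {
        // Each bulk-load RPC constructs a fresh UGI, so every FileSystem.get(conf)
        // underneath it caches a brand-new instance: the leak seen in the trace.
        for (int i = 0; i < 3; i++) {
            get("hdfs", "nn:8020", new FakeUgi("loader"));
        }
        System.out.println(CACHE.size()); // grows by one per request
    }
}
```

Because the cache only ever grows under this pattern, each bulk load strands one more FileSystem (and its sockets/threads) until the process exits.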


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] joshelser edited a comment on issue #1019: HBASE-23679 Use new FileSystem objects during bulk loads

2020-01-10, via GitBox
URL: https://github.com/apache/hbase/pull/1019#issuecomment-573269130
 
 
   Ugh, there are a few of these: down in HStore, HRegion, and even the WAL code (ugh), all invoked via bulk load, we do a `FileSystem.get(conf)` or `path.getFileSystem(conf)`. All of them will leak a FileSystem instance with the SBLM changes in 2.x.
   
   ```
   2020-01-11 01:42:31,080 WARN  [RpcServer.default.FPBQ.Fifo.handler=97,queue=7,port=16020] fs.FileSystem: Caching new filesystem: -1042984133
   java.lang.Exception
   	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3365)
   	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3320)
   	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
   	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:227)
   	at org.apache.hadoop.hbase.wal.AbstractFSWALProvider.getServerNameFromWALDirectoryName(AbstractFSWALProvider.java:330)
   	at org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks.reorderBlocks(HFileSystem.java:426)
   	at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:378)
   	at com.sun.proxy.$Proxy20.getBlockLocations(Unknown Source)
   	at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:862)
   	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:851)
   	at org.apache.hadoop.hdfs.DFSClient.getLocatedBlocks(DFSClient.java:840)
   	at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1004)
   	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:326)
   	at org.apache.hadoop.hdfs.DistributedFileSystem$4.doCall(DistributedFileSystem.java:322)
   	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:334)
   	at org.apache.hadoop.fs.FilterFileSystem.open(FilterFileSystem.java:164)
   	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:899)
   	at org.apache.hadoop.hbase.io.FSDataInputStreamWrapper.<init>(FSDataInputStreamWrapper.java:125)
   	at org.apache.hadoop.hbase.io.FSDataInputStreamWrapper.<init>(FSDataInputStreamWrapper.java:102)
   	at org.apache.hadoop.hbase.regionserver.StoreFileInfo.open(StoreFileInfo.java:254)
   	at org.apache.hadoop.hbase.regionserver.HStoreFile.open(HStoreFile.java:367)
   	at org.apache.hadoop.hbase.regionserver.HStoreFile.initReader(HStoreFile.java:475)
   	at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:690)
   	at org.apache.hadoop.hbase.regionserver.HStore.createStoreFileAndReader(HStore.java:683)
   	at org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:854)
   	at org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:6057)
   	at org.apache.hadoop.hbase.regionserver.SecureBulkLoadManager$1.run(SecureBulkLoadManager.java:264)
   	at org.apache.hadoop.hbase.regionserver.SecureBulkLoadManager$1.run(SecureBulkLoadManager.java:233)
   	at java.security.AccessController.doPrivileged(Native Method)
   	at javax.security.auth.Subject.doAs(Subject.java:360)
   	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1710)
   	at org.apache.hadoop.hbase.regionserver.SecureBulkLoadManager.secureBulkLoadHFiles(SecureBulkLoadManager.java:233)
   	at org.apache.hadoop.hbase.regionserver.RSRpcServices.bulkLoadHFile(RSRpcServices.java:2338)
   	at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42004)
   	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
   	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:131)
   	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
   	at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
   ```
   
   The fix is to just push down the FileSystem, or use one that is already created, but this gets tricky in some places. I'll need to step back from all of this and see if there's a better way to do it.
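The "push down" idea can be sketched in the abstract: rather than each deep layer re-deriving a FileSystem from a Path (which consults the cache and, under a fresh UGI, creates and strands a new instance), the per-request FileSystem gets threaded through as an explicit parameter and reused. This is a hypothetical sketch of the shape of that change, not the actual HBase signatures; the names `PushDownFsSketch`, `FileSystemLike`, `assertBulkLoadOk`, and `srcFs` are made up for illustration:

```java
import java.net.URI;

public class PushDownFsSketch {
    // Minimal stand-in for the FileSystem surface this sketch needs.
    public interface FileSystemLike {
        boolean exists(URI path);
    }

    // Before (leaky under doAs with a fresh UGI):
    //   static void assertBulkLoadOk(Path hfile, Configuration conf) {
    //       FileSystem fs = hfile.getFileSystem(conf);  // may cache a new FS
    //       ...
    //   }

    // After: the FileSystem created once per bulk-load request is passed in,
    // so nothing new is looked up or cached on this path.
    public static boolean assertBulkLoadOk(FileSystemLike srcFs, URI hfile) {
        return srcFs.exists(hfile); // reuse the caller's instance
    }

    public static void main(String[] args) {
        // Stub standing in for the per-request DFS instance.
        FileSystemLike fs = path -> true;
        System.out.println(assertBulkLoadOk(fs, URI.create("hdfs://nn:8020/staging/f"))); // prints true
    }
}
```

The awkward part the comment alludes to is that some call sites (e.g. the WAL block-reorder hook) are far from where the per-request FileSystem is created, so threading the parameter through every layer is invasive.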
   

