[ https://issues.apache.org/jira/browse/HADOOP-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113931#comment-16113931 ]
Xiao Chen commented on HADOOP-14727:
------------------------------------

I also looked into the local repro when backing out the 4 mentioned jiras. This is how that same {{BlockReaderRemote}} created from the above is closed:

{noformat}
2017-08-03 21:10:53,052 INFO org.apache.hadoop.hdfs.client.impl.BlockReaderRemote: ____ closing blockreaderremote org.apache.hadoop.hdfs.client.impl.BlockReaderRemote@80ceea3
java.lang.Exception: ____
    at org.apache.hadoop.hdfs.client.impl.BlockReaderRemote.close(BlockReaderRemote.java:310)
    at org.apache.hadoop.hdfs.DFSInputStream.closeCurrentBlockReaders(DFSInputStream.java:1572)
    at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:664)
    at java.io.FilterInputStream.close(FilterInputStream.java:181)
    at org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.close(Unknown Source)
    at org.apache.xerces.impl.io.UTF8Reader.close(Unknown Source)
    at org.apache.xerces.impl.XMLEntityManager.endEntity(Unknown Source)
    at org.apache.xerces.impl.XMLEntityScanner.load(Unknown Source)
    at org.apache.xerces.impl.XMLEntityScanner.skipSpaces(Unknown Source)
    at org.apache.xerces.impl.XMLDocumentScannerImpl$TrailingMiscDispatcher.dispatch(Unknown Source)
    at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
    at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
    at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
    at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
    at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
    at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
    at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:121)
    at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2645)
    at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2713)
    at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2662)
    at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2540)
    at org.apache.hadoop.conf.Configuration.get(Configuration.java:1071)
    at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1121)
    at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1339)
    at org.apache.hadoop.mapreduce.counters.Limits.init(Limits.java:45)
    at org.apache.hadoop.mapreduce.counters.Limits.reset(Limits.java:130)
    at org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:363)
    at org.apache.hadoop.mapreduce.v2.hs.CompletedJob.<init>(CompletedJob.java:105)
    at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:473)
    at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.loadJob(CachedHistoryStorage.java:180)
    at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.access$000(CachedHistoryStorage.java:52)
    at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:103)
    at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:100)
    at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
    at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
    at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
    at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
    at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
    at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
    at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
    at com.google.common.cache.LocalCache$LocalManualCache.getUnchecked(LocalCache.java:4834)
    at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getFullJob(CachedHistoryStorage.java:193)
{noformat}

where the corresponding code around {{Configuration.java:2713}} in {{loadResource}} is:
{code}
2712    } else if (resource instanceof InputStream) {
2713      doc = parse(builder, (InputStream) resource, null);
2714      returnCachedProperties = true;
2715    } else if (resource instanceof Properties) {
{code}

Although I naturally
feel the same way about what patch 1 does, it seems the existing behavior is to close the stream regardless.

> Socket not closed properly when reading Configurations with BlockReaderRemote
> -----------------------------------------------------------------------------
>
>                 Key: HADOOP-14727
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14727
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: conf
>    Affects Versions: 2.9.0, 3.0.0-alpha4
>            Reporter: Xiao Chen
>            Assignee: Jonathan Eagles
>            Priority: Blocker
>         Attachments: HADOOP-14727.001-branch-2.patch, HADOOP-14727.001.patch
>
> This was caught by Cloudera's internal testing over the alpha4 release.
> We got reports that some hosts ran out of FDs. Triaging that, we found that both the oozie server and the Yarn JobHistoryServer had tons of sockets in {{CLOSE_WAIT}} state.
> [~haibochen] helped narrow down to a consistent reproduction by simply visiting the JHS web UI, and clicking through a job and its logs.
> I then looked at the {{BlockReaderRemote}} and related code, and didn't spot any leaks in the implementation.
> After adding a debug log whenever a {{Peer}} is created/closed/in/out {{PeerCache}}, it looks like all the {{CLOSE_WAIT}} sockets are created from this call stack:
> {noformat}
> 2017-08-02 13:58:59,901 INFO org.apache.hadoop.hdfs.client.impl.BlockReaderFactory: ____ associated peer NioInetPeer(Socket[addr=/10.17.196.28,port=20002,localport=42512]) with blockreader org.apache.hadoop.hdfs.client.impl.BlockReaderRemote@717ce109
> java.lang.Exception: test
>     at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:745)
>     at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:385)
>     at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:636)
>     at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:566)
>     at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
>     at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:807)
>     at java.io.DataInputStream.read(DataInputStream.java:149)
>     at com.ctc.wstx.io.StreamBootstrapper.ensureLoaded(StreamBootstrapper.java:482)
>     at com.ctc.wstx.io.StreamBootstrapper.resolveStreamEncoding(StreamBootstrapper.java:306)
>     at com.ctc.wstx.io.StreamBootstrapper.bootstrapInput(StreamBootstrapper.java:167)
>     at com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:573)
>     at com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:633)
>     at com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:647)
>     at com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:366)
>     at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2649)
>     at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2697)
>     at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2662)
>     at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2545)
>     at org.apache.hadoop.conf.Configuration.get(Configuration.java:1076)
>     at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1126)
>     at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1344)
>     at org.apache.hadoop.mapreduce.counters.Limits.init(Limits.java:45)
>     at org.apache.hadoop.mapreduce.counters.Limits.reset(Limits.java:130)
>     at org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:363)
>     at org.apache.hadoop.mapreduce.v2.hs.CompletedJob.<init>(CompletedJob.java:105)
>     at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:473)
>     at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.loadJob(CachedHistoryStorage.java:180)
>     at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.access$000(CachedHistoryStorage.java:52)
>     at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:103)
>     at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:100)
>     at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
>     at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
>     at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
>     at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>     at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>     at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
>     at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
>     at com.google.common.cache.LocalCache$LocalManualCache.getUnchecked(LocalCache.java:4834)
>     at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getFullJob(CachedHistoryStorage.java:193)
>     at org.apache.hadoop.mapreduce.v2.hs.JobHistory.getJob(JobHistory.java:220)
>     at org.apache.hadoop.mapreduce.v2.app.webapp.AppController.requireJob(AppController.java:416)
>     at org.apache.hadoop.mapreduce.v2.app.webapp.AppController.attempts(AppController.java:277)
>     at org.apache.hadoop.mapreduce.v2.hs.webapp.HsController.attempts(HsController.java:152)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:162)
>     at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>     at com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:287)
>     at com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:277)
>     at com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:182)
>     at com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91)
>     at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85)
>     at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:941)
>     at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:875)
>     at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:829)
>     at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
>     at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:119)
>     at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:133)
>     at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:130)
>     at com.google.inject.servlet.GuiceFilter$Context.call(GuiceFilter.java:203)
>     at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:130)
>     at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>     at org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
>     at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>     at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
>     at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>     at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1552)
>     at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>     at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>     at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>     at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>     at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>     at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>     at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>     at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>     at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>     at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>     at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>     at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>     at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>     at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>     at org.eclipse.jetty.server.Server.handle(Server.java:534)
>     at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>     at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>     at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>     at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>     at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>     at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>     at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>     at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>     at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>     at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>     at java.lang.Thread.run(Thread.java:748)
> {noformat}
> I was able to further confirm this theory by backing out the 4 recent commits to {{Configuration}} on alpha3, and no longer seeing {{CLOSE_WAIT}} sockets:
> - HADOOP-14501.
> - HADOOP-14399. (only reverted to make other reverts easier)
> - HADOOP-14216. Addendum
> - HADOOP-14216.
> It's not clear to me who's responsible for closing the InputStream, though.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
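The "debug log whenever a {{Peer}} is created/closed" technique described above can be sketched generically as a close-tracking stream wrapper. This is an illustrative sketch only, not Hadoop code: the class name and methods below are invented for the example, mirroring how instrumenting {{close()}} reveals which consumer does (or does not) close the underlying stream/socket.

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical helper (not part of Hadoop): records whether close() was ever
// called on the wrapped stream, so a leak can be attributed to a caller.
class CloseTrackingInputStream extends FilterInputStream {
    private boolean closed = false;

    CloseTrackingInputStream(InputStream in) {
        super(in);
    }

    @Override
    public void close() throws IOException {
        closed = true;
        // A debug build could also log the call stack here, e.g.
        // new Exception("____").printStackTrace(), as in the log output above.
        super.close();
    }

    boolean isClosed() {
        return closed;
    }
}

public class CloseTrackingDemo {
    public static void main(String[] args) throws IOException {
        CloseTrackingInputStream in =
            new CloseTrackingInputStream(new ByteArrayInputStream(new byte[] {1, 2, 3}));
        try (InputStream s = in) {
            while (s.read() != -1) { /* drain, as an XML parser would */ }
        }
        System.out.println("closed=" + in.isClosed()); // prints closed=true
    }
}
```

Handing such a wrapper to {{Configuration.loadResource}} (instead of the raw {{FSDataInputStream}}) would directly answer the ownership question raised at the end of the description: whether the configuration-parsing path closes the stream it was given.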