[ https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Mark Miller updated SOLR-12290:
-------------------------------
    Priority: Major  (was: Minor)
    Fix Version/s: master (8.0)
                   7.4
    Issue Type: Bug  (was: Task)
    Summary: Do not close servlet streams and improve our servlet stream closing prevention code for users and devs.  (was: Improve our servlet stream closing prevention code for users and devs.)

> Do not close servlet streams and improve our servlet stream closing
> prevention code for users and devs.
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: SOLR-12290
>                 URL: https://issues.apache.org/jira/browse/SOLR-12290
>             Project: Solr
>          Issue Type: Bug
>   Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Mark Miller
>            Assignee: Mark Miller
>            Priority: Major
>             Fix For: 7.4, master (8.0)
>
>         Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch,
> SOLR-12290.patch
>
>
> Original summary:
> When you fetch a file for replication, we close the request output stream
> after writing the file, which ruins the connection for reuse.
> We can't close response output streams; we need to reuse these connections.
> If we do close them, clients are hit with connection problems when they try
> to reuse the connection from their pool.
> New summary:
> At some point the above was addressed during refactoring. We should remove
> these neutered closes and review our close-shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams fully and do not close
> them; the container itself must manage request and response streams. If we
> allow them to be closed, not only do we lose some connection reuse, but we
> can cause spurious client errors that trigger expensive recoveries for no
> reason. The servlet spec allows us to count on the container to manage
> streams. Our job is simply to not close them and to always read them fully,
> on both client and server.
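The close-shield idea described above can be sketched as a small wrapper that turns `close()` into a flush, so code that reflexively closes a response stream cannot take the container-managed connection down with it. This is a hypothetical illustration (the class name `ShieldedOutputStream` is ours, not Solr's actual close-shield implementation):

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical close shield: callers may "close" this wrapper, but the
// underlying container-managed stream is only flushed, never closed, so
// the servlet container can keep the connection alive for reuse.
class ShieldedOutputStream extends FilterOutputStream {

    ShieldedOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void close() throws IOException {
        // Swallow the close: push out buffered bytes and leave the real
        // stream open for the container to manage.
        flush();
    }
}
```

A wrapper like this would be installed around the servlet response stream; a stricter variant for development could throw instead of flushing, to flag the offending close at its source.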
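"Always read them fully" means draining any unread bytes off a request or response stream before the connection goes back to the pool. A minimal sketch of such a drain (the helper name `drainFully` is an assumption for illustration, not a Solr API):

```java
import java.io.IOException;
import java.io.InputStream;

final class StreamUtil {

    // Read and discard everything left on the stream so the underlying
    // HTTP connection can be reused; returns the number of bytes eaten.
    // Deliberately does NOT close the stream - that is the container's job.
    static long drainFully(InputStream in) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;
        }
        return total;
    }
}
```

Calling something like this on any code path that bails out early (errors, partial reads) is what keeps the connection reusable even when the normal logic never consumed the whole body.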
> Java itself can help with always reading streams fully, up to some small
> default amount of unread stream slack, but that is very dangerous to count
> on, so we always manually eat up anything on the streams that our normal
> logic ends up not reading for whatever reason.
> We also cannot call abort or sendError without ruining the connection.
> These should be options of very last resort (requiring a blood sacrifice)
> or used only when shutting down.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org