Markus, you're correct, and I admit my use case is a bit exceptional. The one case where I thought I needed a "shutdown" hook was when using adaptive threading and/or the fetch bandwidth threshold: when crawling large segments (multi-hour fetches), I assumed that if a slow fetcher were killed it would be nice to release the resources held by our app-specific logic early. Anyway, relying on the JVM shutting down after each map/reduce task is working fine for us.
Thanks!

-----Original Message-----
From: Markus Jelsma [mailto:markus.jel...@openindex.io]
Sent: Monday, May 09, 2016 5:33 AM
To: user@nutch.apache.org
Subject: RE: startUp/shutDown methods for plugins

Hello Joseph,

You can initialize things in the plugin's setConf() method, but there is no close(). Why do you want to clear resources? The plugin's lifetime is very short for most mappers and reducers, and Hadoop will kill those JVMs anyway.

Markus

-----Original message-----
> From: Joseph Naegele <jnaeg...@grierforensics.com>
> Sent: Friday 6th May 2016 16:04
> To: user@nutch.apache.org
> Subject: startUp/shutDown methods for plugins
>
> Hi folks,
>
> I'm using Nutch 1.11. Is it possible to implement plugin instance
> startUp/shutDown methods for normal extension points? This would allow
> for cleaning up resources at the end of a plugin instance's lifetime.
>
> Thanks,
>
> Joe
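For readers finding this thread later, the pattern discussed above can be sketched as follows. This is a minimal, hedged illustration, not actual Nutch code: the class name `ShutdownHookPlugin` and the `cleanUp()` method are hypothetical, only the `setConf()` convention comes from the Nutch/Hadoop `Configurable` interface, and the JVM shutdown hook is best-effort (it will not run if the task JVM is killed forcibly).

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: acquire resources in setConf() (the only
// initialization point Nutch plugins get) and register a JVM shutdown
// hook for cleanup, since extension points expose no close()/shutDown().
public class ShutdownHookPlugin {
    private final AtomicBoolean closed = new AtomicBoolean(false);

    // In a real plugin the parameter would be
    // org.apache.hadoop.conf.Configuration.
    public void setConf(Object conf) {
        // ... acquire app-specific resources here ...
        // Release them when the short-lived task JVM exits normally.
        Runtime.getRuntime().addShutdownHook(new Thread(this::cleanUp));
    }

    // Idempotent: safe whether invoked explicitly, by the hook, or both.
    // Returns true only the first time, when cleanup actually runs.
    public boolean cleanUp() {
        if (closed.compareAndSet(false, true)) {
            // ... release resources here ...
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        ShutdownHookPlugin plugin = new ShutdownHookPlugin();
        plugin.setConf(null);
        // At normal JVM exit the registered hook calls cleanUp().
    }
}
```

Making the cleanup method idempotent matters here, since both an explicit call and the shutdown hook may fire on the same instance.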