I don't know the exact cause, but I would say it depends on what OS and what job you are running, if any, when you shut down. AFAIK the hadoop stop scripts send a plain kill (SIGTERM) to the server processes, similar in effect to hitting Ctrl-C (SIGINT). They may or may not have shutdown hooks built in. I know the HBase server processes did a while ago; don't know if the hadoop ones do.
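
For reference, a shutdown hook is just a thread registered with the JVM via Runtime.addShutdownHook(); it runs on normal exit and on SIGINT/SIGTERM, but not on SIGKILL (kill -9) or a hard power-off. A minimal generic sketch (my own illustration, not Hadoop's actual daemon code):

    public class ShutdownHookDemo {
        public static void main(String[] args) throws InterruptedException {
            // Registered hooks run when the JVM shuts down cleanly.
            Runtime.getRuntime().addShutdownHook(new Thread() {
                public void run() {
                    System.out.println("shutdown hook: flushing and closing resources");
                }
            });
            System.out.println("running; hit Ctrl-C to trigger the hook");
            Thread.sleep(60000); // give yourself time to send the signal
        }
    }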

The servers do create pid files and put them in the pid directory specified in hadoop-env.sh. An indexer job may also create a lock file that locks the index it is building until it is finished.

Don't know if any of this helps your problem. Just a guess, but it may be file-locking issues on Windows as well.
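
If it is open file handles keeping things locked, explicitly closing the cached FileSystem instances before the JVM exits is worth a try. A minimal sketch, assuming the standard FileSystem.get() / FileSystem.closeAll() API (the class name and the placeholder work are made up):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class CleanShutdown {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            try {
                // ... run the job / use fs here (placeholder work) ...
                fs.exists(new Path("/tmp/probe"));
            } finally {
                // Close every cached FileSystem instance so Windows
                // releases the underlying file handles.
                FileSystem.closeAll();
            }
        }
    }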

Dennis

Raymond Balmès wrote:
Same problems... even rebooting the PC does not always solve the issue;
files remain locked.

I have gone the brutal way and used unlocker.exe, but I mean to find out
what's going wrong, so I will keep you posted on this one.

-Ray-

2009/4/23 Lukas, Ray <[email protected]>

Question:
What is the proper, accepted, and safe way to shut down Nutch (Hadoop)
after I am done with it?

Hadoop.getFileSystem().closeAll() ??
I did try this with no luck. Is anyone else having this problem?

Thanks, guys. If/when I find the answer, I will post it for everyone.
Ray

