Ting Dai created MAPREDUCE-6991:
-----------------------------------

             Summary: getRogueTaskPID and testProcessTree hang when the file creation fails
                 Key: MAPREDUCE-6991
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6991
             Project: Hadoop Map/Reduce
          Issue Type: Bug
    Affects Versions: 2.0.0-alpha, 0.23.0
            Reporter: Ting Dai
When the write to the pid file fails in the following thread (for example because the disk is full or due to a hardware error):

{code:java}
private class RogueTaskThread extends Thread {
  public void run() {
    try {
      Vector<String> args = new Vector<String>();
      if (isSetsidAvailable()) {
        args.add("setsid");
      }
      args.add("bash");
      args.add("-c");
      args.add(" echo $$ > " + pidFile + "; sh " + shellScript + " " + N + ";");
      shexec = new ShellCommandExecutor(args.toArray(new String[0]));
      shexec.execute();
    } catch (ExitCodeException ee) {
      LOG.info("Shell Command exit with a non-zero exit code. This is" +
               " expected as we are killing the subprocesses of the" +
               " task intentionally. " + ee);
    } catch (IOException ioe) {
      LOG.info("Error executing shell command " + ioe);
    } finally {
      LOG.info("Exit code: " + shexec.getExitCode());
    }
  }
}
{code}

then getRogueTaskPID() and testProcessTree() hang, waiting until the file is created or the thread is interrupted.

{code:java}
private String getRogueTaskPID() {
  File f = new File(pidFile);
  while (!f.exists()) {
    try {
      Thread.sleep(500);
    } catch (InterruptedException ie) {
      break;
    }
  }

  // read from pidFile
  return getPidFromPidFile(pidFile);
}
{code}

{code:java}
public void testProcessTree() throws Exception {
  // create pid file path string
  tempFile = new File(TEST_ROOT_DIR, getClass().getName() +
      "_pidFile_" + rm.nextInt() + ".pid");
  tempFile.deleteOnExit();
  pidFile = TEST_ROOT_DIR + File.separator + tempFile.getName();
  .....
  Thread t = new RogueTaskThread();
  t.start();
  String pid = getRogueTaskPID();
  ......
}
{code}
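A possible mitigation would be to bound the wait instead of looping forever, for example by also checking whether the writer thread has already exited. The sketch below is illustrative only; the 20-second deadline and the extra writer-thread parameter are assumptions, not existing code in the test:

{code:java}
// Sketch of a bounded wait. The "writer" parameter and the 20s deadline are
// hypothetical additions; pidFile and getPidFromPidFile() are the existing ones.
private String getRogueTaskPID(Thread writer) {
  File f = new File(pidFile);
  long deadline = System.currentTimeMillis() + 20 * 1000; // assumed upper bound
  while (!f.exists()) {
    // Give up if the writer thread died without creating the pid file,
    // or if the overall deadline has passed, instead of spinning forever.
    if (!writer.isAlive() || System.currentTimeMillis() > deadline) {
      return null;
    }
    try {
      Thread.sleep(500);
    } catch (InterruptedException ie) {
      break;
    }
  }
  // read from pidFile
  return getPidFromPidFile(pidFile);
}
{code}

With a change along these lines, testProcessTree() would have to fail fast (e.g., assert a non-null pid) rather than block indefinitely when the shell command cannot write the pid file.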