You can try taking a jstack stack trace of the client JVM and see what it's hung on.
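
For example, something along these lines (the pid and output file name are just placeholders):

  jps -l                            # find the pid of the JobClient JVM
  jstack -l <pid> > client.jstack   # dump all thread stacks, with lock info

Then look for the thread blocked inside the DFS client's close()/complete-file path.
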
I've only ever noticed a close() hang when the NN does not accept the
complete-file call (because minimum replication, dfs.replication.min,
is not yet satisfied; the client keeps retrying the complete call until
it is), but given your changes (which I don't know the details of yet)
it could be something else as well. You're essentially trying to make
the same client talk to two different FSes, I think (aside from the JT RPC).
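
If that is the issue, make sure each submission writes its job-submit
files through a FileSystem bound to that cluster's NN, not through the
single fs.default.name in your Configuration. A minimal sketch, with
made-up NN addresses:

  import java.net.URI;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;

  public class TwoClusterFs {
    public static void main(String[] args) throws Exception {
      Configuration conf = new Configuration();
      // FileSystem.get(URI, conf) binds each handle to that URI's
      // authority, so the two instances talk to different NNs.
      FileSystem fsA = FileSystem.get(URI.create("hdfs://nn-site-a:9000/"), conf);
      FileSystem fsB = FileSystem.get(URI.create("hdfs://nn-site-b:9000/"), conf);
      System.out.println(fsA.getUri());
      System.out.println(fsB.getUri());
      // Pass the matching instance into createSplitFiles() for each
      // JT's submission instead of reusing one fs for both.
    }
  }

That way each complete-file call during close() goes to the NN that
actually received the file's blocks.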

On Wed, Mar 27, 2013 at 5:50 PM, Pedro Sá da Costa <psdc1...@gmail.com> wrote:
> Hi,
>
> I'm using the Hadoop 1.0.4 API to try to submit a job to a remote
> JobTracker. I modified the JobClient to submit the same job to
> different JTs. E.g., the JobClient is on my PC and it tries to submit the
> same job to 2 JTs at different sites on Amazon EC2. When I launch the job,
> in the setup phase, the JobClient tries to submit the split file info to
> the remote JT. This is the method of the JobClient where I have the problem:
>
>
>   public static void createSplitFiles(Path jobSubmitDir,
>       Configuration conf, FileSystem fs,
>       org.apache.hadoop.mapred.InputSplit[] splits)
>   throws IOException {
>     FSDataOutputStream out = createFile(fs,
>         JobSubmissionFiles.getJobSplitFile(jobSubmitDir), conf);
>     SplitMetaInfo[] info = writeOldSplits(splits, out, conf);
>     out.close();
>     writeJobSplitMetaInfo(fs,
>         JobSubmissionFiles.getJobSplitMetaFile(jobSubmitDir),
>         new FsPermission(JobSubmissionFiles.JOB_FILE_PERMISSION),
>         splitVersion, info);
>   }
>
> 1 - The FSDataOutputStream hangs on the out.close() call. Why does it
> hang? What should I do to solve this?
>
>
> --
> Best regards,



-- 
Harsh J
