Hi, I'm writing files from my map function. I get the FileOutputFormat.getWorkOutputPath(JobConf) path and create a file in that directory. When the task finishes successfully, the file is copied to the job output directory. All of this works fine when I run with a pseudo-distributed Hadoop configuration on my local machine, but when I run the same job on Amazon Elastic MapReduce, the temporary files are not copied to the job output path; instead, a _temporary directory is left behind containing the files.
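For reference, here is a minimal sketch of what I'm doing (class and file names are illustrative, using the old mapred API):

```java
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Sketch: a mapper that writes a side file into the task's work output path,
// relying on the OutputCommitter to promote it to the job output directory
// when the task attempt succeeds.
public class SideFileMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {

    private FSDataOutputStream sideFile;

    @Override
    public void configure(JobConf job) {
        try {
            // Resolves to <job output>/_temporary/_<task attempt id>
            Path workDir = FileOutputFormat.getWorkOutputPath(job);
            FileSystem fs = workDir.getFileSystem(job);
            // "side-file" is just an example name
            sideFile = fs.create(new Path(workDir, "side-file"));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void map(LongWritable key, Text value,
                    OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        sideFile.writeBytes(value.toString() + "\n");
    }

    @Override
    public void close() throws IOException {
        sideFile.close();
    }
}
```

On my local setup the side file ends up next to the regular part-* files in the job output directory; on Elastic MapReduce it stays under _temporary.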
Is there any additional configuration I have to do to run this job on Amazon Elastic MapReduce? Thanks in advance! Carlos.