Create test scenario for "distributed cache file behaviour, when dfs file is 
modified"
--------------------------------------------------------------------------------------

                 Key: MAPREDUCE-1676
                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1676
             Project: Hadoop Map/Reduce
          Issue Type: Test
    Affects Versions: 0.22.0
            Reporter: Iyappan Srinivasan


 Verify the Distributed Cache functionality. This test scenario covers distributed 
cache file behaviour when the file is modified before and after being accessed by 
at most two jobs. Once a job uses a distributed cache file, that file is stored in 
mapred.local.dir. If the next job uses the same file but with a different timestamp, 
the file is stored again. So, if two jobs choose the same tasktracker for their 
task execution, the distributed cache file should be found twice.

This testcase runs a job with a distributed cache file. The handle of the 
tasktracker for each of the job's tasks is obtained and checked for the presence 
of the distributed cache file, with proper permissions, in the proper directory. 
Next, the job is run again; if any of its tasks hits the same tasktracker that ran 
one of the previous job's tasks, the file should be uploaded again and the task 
should not use the old file.
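A minimal sketch of the per-tasktracker check, assuming the localized path under mapred.local.dir has already been resolved through the tasktracker handle; the class and method names here are illustrative, not the test framework's API. It verifies that the localized copy exists on the tasktracker's local filesystem and is readable.

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;

public class LocalizedCacheCheck {
  /**
   * Returns true if the localized copy of the cache file exists on the
   * local filesystem and is world-readable.
   */
  static boolean isProperlyLocalized(Path localizedFile) throws IOException {
    FileSystem localFs = FileSystem.getLocal(new Configuration());
    if (!localFs.exists(localizedFile)) {
      return false;
    }
    FileStatus status = localFs.getFileStatus(localizedFile);
    return status.getPermission().getOtherAction().implies(FsAction.READ);
  }
}
{code}

Running the first job, applying such a check on each task's tasktracker, modifying the DFS file, and re-running the job should then show a second localized copy on any tasktracker hit by both jobs.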



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.