[ https://issues.apache.org/jira/browse/MAPREDUCE-1672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sharad Agarwal updated MAPREDUCE-1672:
--------------------------------------

    Affects Version/s:     (was: 0.22.0)
        Fix Version/s:     (was: 0.22.0)
           Issue Type: Test  (was: New Feature)

> Create test scenario for "distributed cache file behaviour, when dfs file is not modified"
> ------------------------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-1672
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1672
>             Project: Hadoop Map/Reduce
>          Issue Type: Test
>          Components: test
>            Reporter: Iyappan Srinivasan
>            Assignee: Iyappan Srinivasan
>         Attachments: TestDistributedCacheUnModifiedFile.patch, TestDistributedCacheUnModifiedFile.patch
>
>
> This test scenario covers the behaviour of a distributed cache file that is not modified between being accessed by two jobs. Once a job uses a distributed cache file, that file is localized under mapred.local.dir. If a subsequent job uses the same file, the file is not stored again. So, if two jobs run tasks on the same tasktracker, the distributed cache file should be localized only once.
> This testcase should run a job with a distributed cache file. A handle to each task's corresponding tasktracker is obtained and checked for the presence of the distributed cache file, with proper permissions, in the proper directory. When the job runs again and any of its tasks hits a tasktracker that ran a task of the previous job, the file should not be uploaded again and the task should use the existing copy.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
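[Editor's sketch] The localize-once behaviour the test verifies can be illustrated with a minimal, self-contained simulation. This is not Hadoop's actual TaskTracker code: the `CacheLocalizer` class, its directory layout, and its key scheme (file name plus modification timestamp, standing in for the check the distributed cache performs against the DFS file's timestamp) are all illustrative assumptions.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical stand-in for a tasktracker's cache localization logic.
public class CacheLocalizer {
    private final Path localDir; // stands in for a directory under mapred.local.dir

    public CacheLocalizer(Path localDir) {
        this.localDir = localDir;
    }

    /** Returns true if the file was copied in, false if an up-to-date local copy was reused. */
    public boolean localize(Path dfsFile) throws IOException {
        long stamp = Files.getLastModifiedTime(dfsFile).toMillis();
        // Key the local copy by name + modification time, so an unmodified
        // file maps to the same local path across jobs and is fetched once.
        Path local = localDir.resolve(dfsFile.getFileName() + "_" + stamp);
        if (Files.exists(local)) {
            return false; // second job, same unmodified file: reuse, no re-upload
        }
        Files.createDirectories(localDir);
        Files.copy(dfsFile, local);
        return true; // first job: file is localized
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("cachefile", ".txt");
        Files.writeString(src, "shared side data");
        CacheLocalizer tracker = new CacheLocalizer(Files.createTempDirectory("local"));
        System.out.println(tracker.localize(src)); // first job localizes the file
        System.out.println(tracker.localize(src)); // unmodified file is reused
    }
}
```

Under this sketch, the test scenario amounts to asserting that the first call localizes the file and the second call on the same, unmodified file does not.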