http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/RELEASENOTES.0.17.0.md
----------------------------------------------------------------------
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/RELEASENOTES.0.17.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/RELEASENOTES.0.17.0.md
index c8c2794..4e2fee2 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/RELEASENOTES.0.17.0.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/RELEASENOTES.0.17.0.md
@@ -23,269 +23,265 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-3382](https://issues.apache.org/jira/browse/HADOOP-3382) | *Blocker* 
| **Memory leak when files are not cleanly closed**
+* [HADOOP-1593](https://issues.apache.org/jira/browse/HADOOP-1593) | *Major* | 
**FsShell should work with paths in non-default FileSystem**
 
-Fixed a memory leak associated with 'abandoned' files (i.e. not cleanly 
closed). This held up significant amounts of memory depending on activity and 
how long NameNode has been running.
+This fix allows a non-default path to be specified in FsShell commands.
+
+So, you can now run hadoop dfs -ls hdfs://remotehost1:port/path
+and hadoop dfs -ls hdfs://remotehost2:port/path without changing the config.
 
 
 ---
 
-* [HADOOP-3280](https://issues.apache.org/jira/browse/HADOOP-3280) | *Blocker* 
| **virtual address space limits break streaming apps**
+* [HADOOP-2345](https://issues.apache.org/jira/browse/HADOOP-2345) | *Major* | 
**new transactions to support HDFS Appends**
 
-This patch adds the mapred.child.ulimit to limit the virtual memory for 
children processes to the given value.
+Introduce new namenode transactions to support appending to HDFS files.
 
 
 ---
 
-* [HADOOP-3266](https://issues.apache.org/jira/browse/HADOOP-3266) | *Major* | 
**Remove HOD changes from CHANGES.txt, as they are now inside src/contrib/hod**
+* [HADOOP-2178](https://issues.apache.org/jira/browse/HADOOP-2178) | *Major* | 
**Job history on HDFS**
 
-Moved HOD change items from CHANGES.txt to a new file 
src/contrib/hod/CHANGES.txt.
+This feature provides the facility to store job history on DFS. A cluster 
admin can now provide either a local FS location or a DFS location for job 
history via the configuration property "mapred.job.history.location". History 
is also logged to a user-specified location, set via the configuration 
property "mapred.job.history.user.location".
+The classes org.apache.hadoop.mapred.DefaultJobHistoryParser.MasterIndex and 
org.apache.hadoop.mapred.DefaultJobHistoryParser.MasterIndexParseListener, and 
public method org.apache.hadoop.mapred.DefaultJobHistoryParser.parseMasterIndex 
are not available.
+The signature of public method 
org.apache.hadoop.mapred.DefaultJobHistoryParser.parseJobTasks(File 
jobHistoryFile, JobHistory.JobInfo job) is changed to 
DefaultJobHistoryParser.parseJobTasks(String jobHistoryFile, JobHistory.JobInfo 
job, FileSystem fs).
+The signature of public method 
org.apache.hadoop.mapred.JobHistory.parseHistory(File path, Listener l) is 
changed to JobHistory.parseHistoryFromFS(String path, Listener l, FileSystem fs)
 
 
 ---
 
-* [HADOOP-3239](https://issues.apache.org/jira/browse/HADOOP-3239) | *Major* | 
**exists() calls logs FileNotFoundException in namenode log**
+* [HADOOP-2192](https://issues.apache.org/jira/browse/HADOOP-2192) | *Major* | 
**dfs mv command differs from POSIX standards**
 
-getFileInfo returns null for File not found instead of throwing 
FileNotFoundException
+This patch makes dfs -mv behave more like the Linux mv command: it removes 
unnecessary output from dfs -mv and returns an error message when moving 
non-existent files/directories: mv: cannot stat "filename": No such file or 
directory.
 
 
 ---
 
-* [HADOOP-3223](https://issues.apache.org/jira/browse/HADOOP-3223) | *Blocker* 
| **Hadoop dfs -help for permissions contains a typo**
+* [HADOOP-2873](https://issues.apache.org/jira/browse/HADOOP-2873) | *Major* | 
**Namenode fails to re-start after cluster shutdown - DFSClient: Could not 
obtain blocks even all datanodes were up & live**
 
-Minor typo fix in help message for chmod. impact : none.
+**WARNING: No release note provided for this change.**
 
 
 ---
 
-* [HADOOP-3204](https://issues.apache.org/jira/browse/HADOOP-3204) | *Blocker* 
| **LocalFSMerger needs to catch throwable**
+* [HADOOP-2063](https://issues.apache.org/jira/browse/HADOOP-2063) | *Blocker* 
| **Command to pull corrupted files**
 
-Fixes LocalFSMerger in ReduceTask.java to handle errors/exceptions better. 
Prior to this all exceptions except IOException would be silently ignored.
+Added a new option -ignoreCrc to fs -get, or equivalently, fs -copyToLocal, 
such that the CRC checksum will be ignored for the command. This option is 
used to download corrupted files.
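As an illustrative sketch (not part of the patch), the effect of -ignoreCrc can be modeled as a read path that skips checksum verification on request; the function name and CRC scheme here are hypothetical:

```python
import zlib

def read_block(data: bytes, stored_crc: int, ignore_crc: bool = False) -> bytes:
    # Mirrors the idea behind `fs -get -ignoreCrc`: verify the checksum on
    # read unless the caller explicitly opts out, e.g. to salvage a
    # corrupted file.
    if not ignore_crc and zlib.crc32(data) != stored_crc:
        raise IOError("checksum mismatch")
    return data

payload = b"block-bytes"
crc = zlib.crc32(payload)
assert read_block(payload, crc) == payload
corrupted = payload + b"!"
assert read_block(corrupted, crc, ignore_crc=True) == corrupted
```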
 
 
 ---
 
-* [HADOOP-3168](https://issues.apache.org/jira/browse/HADOOP-3168) | *Major* | 
**reduce amount of logging in hadoop streaming**
+* [HADOOP-1985](https://issues.apache.org/jira/browse/HADOOP-1985) | *Major* | 
**Abstract node to switch mapping into a topology service class used by 
namenode and jobtracker**
 
-Decreases the frequency of logging from streaming from every 100 records to 
every 10,000 records.
+This issue introduces rack awareness for map tasks. It also moves the rack 
resolution logic to the central servers - NameNode & JobTracker. The 
administrator can specify a loadable class given by 
topology.node.switch.mapping.impl to specify the class implementing the logic 
for rack resolution. The class must implement a method - resolve(List\<String\> 
names), where names is the list of DNS-names/IP-addresses that we want 
resolved. The return value is a list of resolved network paths of the form 
/foo/rack, where rack is the rack ID the node belongs to and foo is the 
switch to which multiple racks are connected, and so on. The default 
implementation of this class is packaged along with hadoop and points to 
org.apache.hadoop.net.ScriptBasedMapping, which loads a script that can be 
used for rack resolution. The script location is configurable. It is 
specified by topology.script.file.name and defaults to an empty script. In the 
case where the script name is empty, /default-rack is returned for all 
DNS-names/IP-addresses. The loadable topology.node.switch.mapping.impl gives 
administrators flexibility to define how their site's node resolution should 
happen.
+For mapred, one can also specify the level of the cache w.r.t. the number of 
levels in the resolved network path - defaults to two. This means that the 
JobTracker will cache tasks at the host level and at the rack level.
+Known issue: the task caching will not work with levels greater than 2 (beyond 
racks). This bug is tracked in HADOOP-3296.
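A minimal sketch of such a topology script (the rack table and host addresses are invented for illustration; a real site supplies its own mapping):

```python
#!/usr/bin/env python
import sys

# Hypothetical site-specific table; a real deployment supplies its own.
RACK_MAP = {
    "10.0.1.11": "/switch1/rack1",
    "10.0.1.12": "/switch1/rack1",
    "10.0.2.21": "/switch1/rack2",
}

def resolve(names):
    # One network path per input; unknown hosts fall back to /default-rack,
    # matching the behavior described for an empty script.
    return [RACK_MAP.get(n, "/default-rack") for n in names]

if __name__ == "__main__":
    # The framework invokes the script with DNS names/IPs as arguments.
    print("\n".join(resolve(sys.argv[1:])))
```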
 
 
 ---
 
-* [HADOOP-3162](https://issues.apache.org/jira/browse/HADOOP-3162) | *Blocker* 
| **Map/reduce stops working with comma separated input paths**
+* [HADOOP-1986](https://issues.apache.org/jira/browse/HADOOP-1986) | *Major* | 
**Add support for a general serialization mechanism for Map Reduce**
 
-The public methods org.apache.hadoop.mapred.JobConf.setInputPath(Path) and 
org.apache.hadoop.mapred.JobConf.addInputPath(Path) are deprecated. And the 
methods have the semantics of branch 0.16.
-The following public APIs  are added in 
org.apache.hadoop.mapred.FileInputFormat :
-public static void setInputPaths(JobConf job, Path... paths);
-public static void setInputPaths(JobConf job, String commaSeparatedPaths);
-public static void addInputPath(JobConf job, Path path);
-public static void addInputPaths(JobConf job, String commaSeparatedPaths);
-Earlier code calling JobConf.setInputPath(Path), JobConf.addInputPath(Path) 
should now call FileInputFormat.setInputPaths(JobConf, Path...) and 
FileInputFormat.addInputPath(Path) respectively
+Programs that implement the raw Mapper or Reducer interfaces will need 
modification to compile with this release. For example,
 
+class MyMapper implements Mapper {
+  public void map(WritableComparable key, Writable val,
+    OutputCollector out, Reporter reporter) throws IOException {
+    // ...
+  }
+  // ...
+}
 
----
+will need to be changed to refer to the parameterized type. For example:
 
-* [HADOOP-3152](https://issues.apache.org/jira/browse/HADOOP-3152) | *Minor* | 
**Make index interval configuable when using MapFileOutputFormat for map-reduce 
job**
+class MyMapper implements Mapper\<WritableComparable, Writable, 
WritableComparable, Writable\> {
+  public void map(WritableComparable key, Writable val,
+    OutputCollector\<WritableComparable, Writable\> out, Reporter reporter) 
throws IOException {
+    // ...
+  }
+  // ...
+}
 
-Add a static method MapFile#setIndexInterval(Configuration, int interval) so 
that MapReduce jobs that use MapFileOutputFormat can set the index interval.
+Similarly, implementations of the following raw interfaces will need 
modification: InputFormat, OutputCollector, OutputFormat, Partitioner, 
RecordReader, RecordWriter.
 
 
 ---
 
-* [HADOOP-3140](https://issues.apache.org/jira/browse/HADOOP-3140) | *Major* | 
**JobTracker should not try to promote a (map) task if it does not write to DFS 
at all**
+* [HADOOP-910](https://issues.apache.org/jira/browse/HADOOP-910) | *Major* | 
**Reduces can do merges for the on-disk map output files in parallel with their 
copying**
 
-Tasks that don't generate any output are not inserted in the commit queue of 
the JobTracker. They are marked as SUCCESSFUL by the TaskTracker and the 
JobTracker updates their state short-circuiting the commit queue.
+Reducers now perform merges of shuffle data (both in-memory and on disk) while 
fetching map outputs. Earlier, during shuffle they used to merge only the 
in-memory outputs.
 
 
 ---
 
-* [HADOOP-3137](https://issues.apache.org/jira/browse/HADOOP-3137) | *Major* | 
**[HOD] Update hod version number**
+* [HADOOP-2219](https://issues.apache.org/jira/browse/HADOOP-2219) | *Major* | 
**du like command to count number of files under a given directory**
 
-Build script was changed to make HOD versions follow Hadoop version numbers. 
As a result of this change, the next version of HOD would not be 0.5, but would 
be synchronized to the Hadoop version number. Users who rely on the version 
number of HOD should note the unexpected jump in version numbers.
+Added a new fs command fs -count for counting the number of bytes, files and 
directories under a given path.
+
+Added a new RPC getContentSummary(String path) to ClientProtocol.
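The three totals behind fs -count can be sketched for a local directory tree as follows (a local-filesystem analogy only, not the HDFS implementation):

```python
import os

def content_summary(path):
    # Returns (length_in_bytes, file_count, directory_count) under `path`,
    # the three totals `fs -count` reports; counts `path` itself when it is
    # a directory.
    if os.path.isfile(path):
        return os.path.getsize(path), 1, 0
    nbytes, nfiles, ndirs = 0, 0, 1
    for root, dirs, files in os.walk(path):
        ndirs += len(dirs)
        nfiles += len(files)
        nbytes += sum(os.path.getsize(os.path.join(root, f)) for f in files)
    return nbytes, nfiles, ndirs
```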
 
 
 ---
 
-* [HADOOP-3124](https://issues.apache.org/jira/browse/HADOOP-3124) | *Major* | 
**DFS data node should not use hard coded 10 minutes as write timeout.**
+* [HADOOP-2820](https://issues.apache.org/jira/browse/HADOOP-2820) | *Major* | 
**Remove deprecated classes in streaming**
 
-Makes DataNode socket write timeout configurable. User impact : none.
+The deprecated classes org.apache.hadoop.streaming.StreamLineRecordReader, 
org.apache.hadoop.streaming.StreamOutputFormat and 
org.apache.hadoop.streaming.StreamSequenceRecordReader are removed.
 
 
 ---
 
-* [HADOOP-3099](https://issues.apache.org/jira/browse/HADOOP-3099) | *Blocker* 
| **Need new options in distcp for preserving ower, group and permission**
+* [HADOOP-2819](https://issues.apache.org/jira/browse/HADOOP-2819) | *Major* | 
**Remove deprecated methods in JobConf()**
 
-Added a new option -p to distcp for preserving file/directory status.
--p[rbugp]              Preserve status
-                       r: replication number
-                       b: block size
-                       u: user
-                       g: group
-                       p: permission
-                       -p alone is equivalent to -prbugp
+The following deprecated methods are removed from 
org.apache.hadoop.mapred.JobConf :
+public Class getInputKeyClass()
+public void setInputKeyClass(Class theClass)
+public Class getInputValueClass()
+public void setInputValueClass(Class theClass)
+
+The methods, public boolean 
org.apache.hadoop.mapred.JobConf.getSpeculativeExecution() and
+public void org.apache.hadoop.mapred.JobConf.setSpeculativeExecution(boolean 
speculativeExecution) are undeprecated.
 
 
 ---
 
-* [HADOOP-3093](https://issues.apache.org/jira/browse/HADOOP-3093) | *Major* | 
**ma/reduce throws the following exception if "io.serializations" is not set:**
+* [HADOOP-2817](https://issues.apache.org/jira/browse/HADOOP-2817) | *Major* | 
**Remove deprecated mapred.tasktracker.tasks.maximum and 
clusterStatus.getMaxTasks()**
 
-The following public APIs  are added in org.apache.hadoop.conf.Configuration
- String[] Configuration.getStrings(String name, String... defaultValue)  and
- void Configuration.setStrings(String name, String... values)
+The deprecated method public int 
org.apache.hadoop.mapred.ClusterStatus.getMaxTasks() is removed.
+The deprecated configuration property "mapred.tasktracker.tasks.maximum" is 
removed.
 
 
 ---
 
-* [HADOOP-3091](https://issues.apache.org/jira/browse/HADOOP-3091) | *Major* | 
**hadoop dfs -put should support multiple src**
+* [HADOOP-2821](https://issues.apache.org/jira/browse/HADOOP-2821) | *Major* | 
**Remove deprecated classes in util**
 
-hadoop dfs -put accepts multiple sources when destination is a directory.
+The deprecated classes org.apache.hadoop.util.ShellUtil and 
org.apache.hadoop.util.ToolBase are removed.
 
 
 ---
 
-* [HADOOP-3073](https://issues.apache.org/jira/browse/HADOOP-3073) | *Blocker* 
| **SocketOutputStream.close() should close the channel.**
+* [HADOOP-2758](https://issues.apache.org/jira/browse/HADOOP-2758) | *Major* | 
**Reduce memory copies when data is read from DFS**
 
-SocketOutputStream.close() closes the underlying channel. Increase 
compatibility with java.net.Socket.getOutputStream. User Impact : none.
+DataNode takes 50% less CPU while serving data to clients.
 
 
 ---
 
-* [HADOOP-3060](https://issues.apache.org/jira/browse/HADOOP-3060) | *Major* | 
**MiniMRCluster is ignoring parameter taskTrackerFirst**
+* [HADOOP-771](https://issues.apache.org/jira/browse/HADOOP-771) | *Major* | 
**Namenode should return error when trying to delete non-empty directory**
 
-The parameter boolean taskTrackerFirst is removed from 
org.apache.hadoop.mapred.MiniMRCluster constructors.
-Thus signature of following APIs
-  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, boolean taskTrackerFirst, int numDir)
-  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, boolean taskTrackerFirst, int numDir, 
String[] racks)
-  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, boolean taskTrackerFirst, int numDir, 
String[] racks, String[] hosts)
-  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, boolean taskTrackerFirst, int numDir, 
String[] racks, String[] hosts, UnixUserGroupInformation ugi )
-is changed to
-  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, int numDir)
-  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, int numDir, String[] racks)
-  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, int numDir, String[] racks, String[] hosts)
-  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, int numDir, String[] racks, String[] hosts, 
UnixUserGroupInformation ugi )
-respectively.
-Since the old signatures were not deprecated, any code using the old 
constructors must be changed to use the new constructors.
+This patch adds a new API to FileSystem, delete(path, boolean), deprecating 
the previous delete(path).
+The new API deletes files recursively only if the boolean is set to true.
+If path is a file, the boolean value does not matter. If path is a non-empty 
directory, delete(path, false) will throw an exception and delete(path, true) 
will delete all files recursively.
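The contract of the new call can be sketched on a local tree (an analogy only; the actual change is to the Hadoop FileSystem API):

```python
import os
import shutil

def delete(path, recursive):
    # Files are removed regardless of the flag; a non-empty directory is
    # removed only when recursive=True, otherwise an error is raised.
    if os.path.isfile(path):
        os.remove(path)
        return True
    if os.listdir(path) and not recursive:
        raise IOError("directory is not empty: %s" % path)
    shutil.rmtree(path)
    return True
```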
 
 
 ---
 
-* [HADOOP-3048](https://issues.apache.org/jira/browse/HADOOP-3048) | *Blocker* 
| **Stringifier**
+* [HADOOP-2765](https://issues.apache.org/jira/browse/HADOOP-2765) | *Major* | 
**setting memory limits for tasks**
 
- A new Interface and a default implementation to convert and restore 
serializations of objects to strings.
+This feature enables specifying ulimits for streaming/pipes tasks. Pipes 
and streaming tasks now have the same virtual memory available as the Java 
process that invokes them. The ulimit value will be the same as the -Xmx value 
for Java processes, provided using mapred.child.java.opts.
 
 
 ---
 
-* [HADOOP-3041](https://issues.apache.org/jira/browse/HADOOP-3041) | *Blocker* 
| **Within a task, the value ofJobConf.getOutputPath() method is modified**
+* [HADOOP-2657](https://issues.apache.org/jira/browse/HADOOP-2657) | *Major* | 
**Enhancements to DFSClient to support flushing data at any point in time**
 
-1. Deprecates JobConf.setOutputPath and JobConf.getOutputPath
-JobConf.getOutputPath() still returns the same value that it used to return. 
-2. Deprecates OutputFormatBase. Adds FileOutputFormat. Existing output formats 
extending OutputFormatBase, now extend FileOutputFormat.
-3. Adds the following APIs in FileOutputFormat :
-public static void setOutputPath(JobConf conf, Path outputDir); // sets 
mapred.output.dir
-public static Path getOutputPath(JobConf conf) ; // gets mapred.output.dir
-public static Path getWorkOutputPath(JobConf conf); // gets 
mapred.work.output.dir
-4. static void setWorkOutputPath(JobConf conf, Path outputDir) is also added 
to FileOutputFormat. This is used by the framework to set 
mapred.work.output.dir as task's temporary output dir .
+A new API, DFSOutputStream.flush(), flushes all outstanding data to the 
pipeline of datanodes.
 
 
 ---
 
-* [HADOOP-3040](https://issues.apache.org/jira/browse/HADOOP-3040) | *Major* | 
**Streaming should assume an empty key if the first character on a line is the 
seperator (stream.map.output.field.separator, by default, tab)**
+* [HADOOP-2399](https://issues.apache.org/jira/browse/HADOOP-2399) | *Major* | 
**Input key and value to combiner and reducer should be reused**
 
-If the first character on a line is the separator, empty key is assumed, and 
the whole line is the value (due to a bug this was not the case).
+The key and value objects that are given to the Combiner and Reducer are now 
reused between calls. This is much more efficient, but the user cannot assume 
the objects are constant.
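The practical consequence can be sketched with a toy reduce loop (names invented): holding a reference to the reused container aliases every record to the last one seen, so callers must copy the value instead.

```python
class Holder:
    # Stand-in for a reused Writable container.
    def __init__(self):
        self.value = None

def run_reduce(values, collect):
    holder = Holder()          # one object, refilled for every call
    for v in values:
        holder.value = v
        collect(holder)

refs, copies = [], []
run_reduce([1, 2, 3], lambda h: (refs.append(h), copies.append(h.value)))
assert [h.value for h in refs] == [3, 3, 3]   # stored references all alias the reused object
assert copies == [1, 2, 3]                    # copying at call time preserves each record
```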
 
 
 ---
 
-* [HADOOP-3001](https://issues.apache.org/jira/browse/HADOOP-3001) | *Blocker* 
| **FileSystems should track how many bytes are read and written**
+* [HADOOP-2423](https://issues.apache.org/jira/browse/HADOOP-2423) | *Major* | 
**The codes in FSDirectory.mkdirs(...) is inefficient.**
 
-Adds new framework map/reduce counters that track the number of bytes read and 
written to HDFS, local, KFS, and S3 file systems.
+Improved FSDirectory.mkdirs(...) performance. In NNThroughputBenchmark-create, 
the ops per sec improved ~54%.
 
 
 ---
 
-* [HADOOP-2982](https://issues.apache.org/jira/browse/HADOOP-2982) | *Blocker* 
| **[HOD] checknodes should look for free nodes without the jobs attribute**
+* [HADOOP-2470](https://issues.apache.org/jira/browse/HADOOP-2470) | *Major* | 
**Open and isDir should be removed from ClientProtocol**
 
-The number of free nodes in the cluster is computed using a better algorithm 
that filters out inconsistencies in node status as reported by Torque.
+Open and isDir were removed from ClientProtocol.
 
 
 ---
 
-* [HADOOP-2947](https://issues.apache.org/jira/browse/HADOOP-2947) | *Blocker* 
| **[HOD] Hod should redirect stderr and stdout of Hadoop daemons to assist 
debugging**
+* [HADOOP-2775](https://issues.apache.org/jira/browse/HADOOP-2775) | *Major* | 
**[HOD] Put in place unit test framework for HOD**
 
-The stdout and stderr streams of daemons are redirected to files that are 
created under the hadoop log directory. Users can now send kill 3 signals to 
the daemons to get stack traces and thread dumps for debugging.
+A unit testing framework based on pyunit is added to HOD. Developers 
contributing patches to HOD should now contribute unit tests along with the 
patches where possible.
 
 
 ---
 
-* [HADOOP-2899](https://issues.apache.org/jira/browse/HADOOP-2899) | *Major* | 
**[HOD] hdfs:///mapredsystem directory not cleaned up after deallocation**
+* [HADOOP-2825](https://issues.apache.org/jira/browse/HADOOP-2825) | *Major* | 
**MapOutputLocation.getFile() needs to be removed**
 
-The mapred system directory generated by HOD is cleaned up at cluster 
deallocation time.
+The deprecated method, public long 
org.apache.hadoop.mapred.MapOutputLocation.getFile(FileSystem fileSys, Path 
localFilename, int reduce, Progressable pingee, int timeout) is removed.
 
 
 ---
 
-* [HADOOP-2873](https://issues.apache.org/jira/browse/HADOOP-2873) | *Major* | 
**Namenode fails to re-start after cluster shutdown - DFSClient: Could not 
obtain blocks even all datanodes were up & live**
+* [HADOOP-2822](https://issues.apache.org/jira/browse/HADOOP-2822) | *Major* | 
**Remove deprecated classes in mapred**
 
-**WARNING: No release note provided for this incompatible change.**
+The deprecated classes org.apache.hadoop.mapred.InputFormatBase and 
org.apache.hadoop.mapred.PhasedFileSystem are removed.
 
 
 ---
 
-* [HADOOP-2855](https://issues.apache.org/jira/browse/HADOOP-2855) | *Blocker* 
| **[HOD] HOD fails to allocate a cluster if the tarball specified is a 
relative path**
+* [HADOOP-2559](https://issues.apache.org/jira/browse/HADOOP-2559) | *Major* | 
**DFS should place one replica per rack**
 
-Changes were made to handle relative paths correctly for important HOD options 
such as the cluster directory, tarball option, and script file.
+Change DFS block placement to allocate the first replica locally, the second 
off-rack, and the third on the same rack as the second.
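A schematic of the policy, with invented data structures (the real selection also weighs node load and availability):

```python
def place_replicas(writer_rack, racks):
    # racks: rack id -> list of datanodes. First replica on the writer's
    # rack, second on a different rack, third on the same rack as the second.
    first = racks[writer_rack][0]
    other = next(r for r in racks if r != writer_rack)
    return [first, racks[other][0], racks[other][1]]
```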
 
 
 ---
 
-* [HADOOP-2854](https://issues.apache.org/jira/browse/HADOOP-2854) | *Blocker* 
| **Remove the deprecated ipc.Server.getUserInfo()**
+* [HADOOP-2239](https://issues.apache.org/jira/browse/HADOOP-2239) | *Major* | 
**Security:  Need to be able to encrypt Hadoop socket connections**
 
-Removes deprecated method Server.getUserInfo()
+This patch adds a new FileSystem, HftpsFileSystem, that allows access to HDFS 
data over HTTPS.
 
 
 ---
 
-* [HADOOP-2839](https://issues.apache.org/jira/browse/HADOOP-2839) | *Blocker* 
| **Remove deprecated methods in FileSystem**
+* [HADOOP-2027](https://issues.apache.org/jira/browse/HADOOP-2027) | *Major* | 
**FileSystem should provide byte ranges for file locations**
 
-Removes deprecated API FileSystem#globPaths()
+A new FileSystem API, getFileBlockLocations, returns the number of bytes in 
each block of a file via a single RPC to the namenode, to speed up job 
planning. It deprecates getFileCacheHints.
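The byte ranges involved can be sketched as follows (hypothetical helper; the real API also reports the hosts holding each block):

```python
def block_ranges(file_len, block_size):
    # Each block covers [offset, offset + length) within the file; the last
    # block may be shorter than block_size.
    ranges, offset = [], 0
    while offset < file_len:
        length = min(block_size, file_len - offset)
        ranges.append((offset, length))
        offset += length
    return ranges
```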
 
 
 ---
 
-* [HADOOP-2831](https://issues.apache.org/jira/browse/HADOOP-2831) | *Blocker* 
| **Remove the deprecated INode.getAbsoluteName()**
+* [HADOOP-2899](https://issues.apache.org/jira/browse/HADOOP-2899) | *Major* | 
**[HOD] hdfs:///mapredsystem directory not cleaned up after deallocation**
 
-Removes deprecated method INode#getAbsoluteName()
+The mapred system directory generated by HOD is cleaned up at cluster 
deallocation time.
 
 
 ---
 
-* [HADOOP-2828](https://issues.apache.org/jira/browse/HADOOP-2828) | *Major* | 
**Remove deprecated methods in Configuration.java**
+* [HADOOP-2116](https://issues.apache.org/jira/browse/HADOOP-2116) | *Major* | 
**Job.local.dir to be exposed to tasks**
 
-The following deprecated methods in org.apache.hadoop.conf.Configuration are 
removed.
-public Object getObject(String name)
-public void setObject(String name, Object value)
-public Object get(String name, Object defaultValue)
-public void set(String name, Object value)
-and public Iterator entries()
+This issue restructures the local job directory on the tasktracker.
+Users are provided with a job-specific shared directory 
(mapred-local/taskTracker/jobcache/$jobid/work) for use as scratch space, 
exposed through the configuration property and system property 
"job.local.dir". The directory "../work" is no longer available from the 
task's cwd.
 
 
 ---
 
-* [HADOOP-2826](https://issues.apache.org/jira/browse/HADOOP-2826) | *Major* | 
**FileSplit.getFile(), LineRecordReader. readLine() need to be removed**
+* [HADOOP-2796](https://issues.apache.org/jira/browse/HADOOP-2796) | *Major* | 
**For script option hod should exit with distinguishable exit codes for script 
code and hod exit code.**
 
-The deprecated methods, public File 
org.apache.hadoop.mapred.FileSplit.getFile() and 
-  public static  long 
org.apache.hadoop.mapred.LineRecordReader.readLine(InputStream in,  
OutputStream out)
-are removed.
-The constructor 
org.apache.hadoop.mapred.LineRecordReader.LineReader(InputStream in, 
Configuration conf) 's visibility is made public.
-The signature of the public 
org.apache.hadoop.streaming.UTF8ByteArrayUtils.readLIne(InputStream) method is 
changed to UTF8ByteArrayUtils.readLIne(LineReader, Text).  Since the old 
signature is not deprecated, any code using the old method must be changed to 
use the new method.
+A provision to reliably detect a failing script's exit code was added. In case 
the hod script option returned a non-zero exit code, users can now look for a 
'script.exitcode' file written to the HOD cluster directory. If this file is 
present, it means the script failed with the returned exit code.
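Client-side handling of this convention could look like the following sketch (the helper is hypothetical; only the file name and its location come from the note):

```python
import os

def script_exit_code(cluster_dir):
    # HOD writes the failing script's exit code to 'script.exitcode' in the
    # cluster directory; absence of the file means the script succeeded.
    marker = os.path.join(cluster_dir, "script.exitcode")
    if not os.path.exists(marker):
        return 0
    with open(marker) as f:
        return int(f.read().strip())
```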
 
 
 ---
 
-* [HADOOP-2825](https://issues.apache.org/jira/browse/HADOOP-2825) | *Major* | 
**MapOutputLocation.getFile() needs to be removed**
+* [HADOOP-2828](https://issues.apache.org/jira/browse/HADOOP-2828) | *Major* | 
**Remove deprecated methods in Configuration.java**
 
-The deprecated method, public long 
org.apache.hadoop.mapred.MapOutputLocation.getFile(FileSystem fileSys, Path 
localFilename, int reduce, Progressable pingee, int timeout) is removed.
+The following deprecated methods in org.apache.hadoop.conf.Configuration are 
removed.
+public Object getObject(String name)
+public void setObject(String name, Object value)
+public Object get(String name, Object defaultValue)
+public void set(String name, Object value)
+and public Iterator entries()
 
 
 ---
@@ -306,299 +302,303 @@ and public int getLine() are removed
 
 ---
 
-* [HADOOP-2822](https://issues.apache.org/jira/browse/HADOOP-2822) | *Major* | 
**Remove deprecated classes in mapred**
+* [HADOOP-3040](https://issues.apache.org/jira/browse/HADOOP-3040) | *Major* | 
**Streaming should assume an empty key if the first character on a line is the 
seperator (stream.map.output.field.separator, by default, tab)**
 
-The deprecated classes org.apache.hadoop.mapred.InputFormatBase and 
org.apache.hadoop.mapred.PhasedFileSystem are removed.
+If the first character on a line is the separator, empty key is assumed, and 
the whole line is the value (due to a bug this was not the case).
 
 
 ---
 
-* [HADOOP-2821](https://issues.apache.org/jira/browse/HADOOP-2821) | *Major* | 
**Remove deprecated classes in util**
-
-The deprecated classes org.apache.hadoop.util.ShellUtil and 
org.apache.hadoop.util.ToolBase are removed.
+* [HADOOP-1622](https://issues.apache.org/jira/browse/HADOOP-1622) | *Major* | 
**Hadoop should provide a way to allow the user to specify jar file(s) the user 
job depends on**
 
+This patch adds new command line options to
 
----
+hadoop jar
+which are
 
-* [HADOOP-2820](https://issues.apache.org/jira/browse/HADOOP-2820) | *Major* | 
**Remove deprecated classes in streaming**
+hadoop jar -files \<comma separated list of files\> -libjars \<comma separated 
list of jars\> -archives \<comma separated list of archives\>
 
-The deprecated classes org.apache.hadoop.streaming.StreamLineRecordReader,  
org.apache.hadoop.streaming.StreamOutputFormat and 
org.apache.hadoop.streaming.StreamSequenceRecordReader are removed
+The -files option allows you to specify a comma separated list of paths which 
will be present in the current working directory of your task.
+The -libjars option allows you to add jars to the classpaths of the maps and 
reduces.
+The -archives option allows you to pass archives as arguments; they are 
unzipped/unjarred and a link with the name of the jar/zip is created in the 
current working directory of tasks.
 
 
 ---
 
-* [HADOOP-2819](https://issues.apache.org/jira/browse/HADOOP-2819) | *Major* | 
**Remove deprecated methods in JobConf()**
-
-The following deprecated methods are removed from org.apache.hadoop.JobConf :
-public Class getInputKeyClass()
-public void setInputKeyClass(Class theClass)
-public Class getInputValueClass()
-public void setInputValueClass(Class theClass)
+* [HADOOP-2119](https://issues.apache.org/jira/browse/HADOOP-2119) | 
*Critical* | **JobTracker becomes non-responsive if the task trackers finish 
task too fast**
 
-The methods, public boolean 
org.apache.hadoop.JobConf.getSpeculativeExecution() and 
-public void org.apache.hadoop.JobConf.setSpeculativeExecution(boolean 
speculativeExecution) are undeprecated.
+This removes many inefficiencies in task placement and scheduling logic. The 
JobTracker would perform linear scans of the list of submitted tasks in cases 
where it did not find an obvious candidate task for a node. With better data 
structures for managing job state, all task placement operations now run in 
constant time (in most cases). Also, the task output promotions are batched.
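The data-structure idea can be sketched as per-node queues that replace a linear scan (names and shapes invented for illustration):

```python
from collections import defaultdict, deque

class TaskIndex:
    # Tasks are bucketed by preferred host at submission time, so finding a
    # candidate for a heartbeating node is a dictionary lookup plus a pop,
    # not a scan over every submitted task.
    def __init__(self):
        self.by_host = defaultdict(deque)

    def add(self, task, host):
        self.by_host[host].append(task)

    def next_for(self, host):
        q = self.by_host.get(host)
        return q.popleft() if q else None
```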
 
 
 ---
 
-* [HADOOP-2818](https://issues.apache.org/jira/browse/HADOOP-2818) | *Major* | 
**Remove deprecated Counters.getDisplayName(),  getCounterNames(),   
getCounter(String counterName)**
+* [HADOOP-3091](https://issues.apache.org/jira/browse/HADOOP-3091) | *Major* | 
**hadoop dfs -put should support multiple src**
 
-The deprecated methods public String 
org.apache.hadoop.mapred.Counters.getDisplayName(String counter) and 
-public synchronized Collection\<String\> 
org.apache.hadoop.mapred.Counters.getCounterNames() are removed.
-The deprecated method public synchronized long 
org.apache.hadoop.mapred.Counters.getCounter(String counterName) is 
undeprecated.
+hadoop dfs -put accepts multiple sources when destination is a directory.
 
 
 ---
 
-* [HADOOP-2817](https://issues.apache.org/jira/browse/HADOOP-2817) | *Major* | 
**Remove deprecated mapred.tasktracker.tasks.maximum and 
clusterStatus.getMaxTasks()**
+* [HADOOP-3073](https://issues.apache.org/jira/browse/HADOOP-3073) | *Blocker* 
| **SocketOutputStream.close() should close the channel.**
 
-The deprecated method public int 
org.apache.hadoop.mapred.ClusterStatus.getMaxTasks() is removed.
-The deprecated configuration property "mapred.tasktracker.tasks.maximum" is 
removed.
+SocketOutputStream.close() closes the underlying channel. Increases 
compatibility with java.net.Socket.getOutputStream. User impact: none.
 
 
 ---
 
-* [HADOOP-2796](https://issues.apache.org/jira/browse/HADOOP-2796) | *Major* | 
**For script option hod should exit with distinguishable exit codes for script 
code and hod exit code.**
+* [HADOOP-2982](https://issues.apache.org/jira/browse/HADOOP-2982) | *Blocker* 
| **[HOD] checknodes should look for free nodes without the jobs attribute**
 
-A provision to reliably detect a failing script's exit code was added. In case 
the hod script option returned a non-zero exit code, users can now look for a 
'script.exitcode' file written to the HOD cluster directory. If this file is 
present, it means the script failed with the returned exit code.
+The number of free nodes in the cluster is computed using a better algorithm 
that filters out inconsistencies in node status as reported by Torque.
 
 
 ---
 
-* [HADOOP-2775](https://issues.apache.org/jira/browse/HADOOP-2775) | *Major* | 
**[HOD] Put in place unit test framework for HOD**
+* [HADOOP-3060](https://issues.apache.org/jira/browse/HADOOP-3060) | *Major* | 
**MiniMRCluster is ignoring parameter taskTrackerFirst**
 
-A unit testing framework based on pyunit is added to HOD. Developers 
contributing patches to HOD should now contribute unit tests along with the 
patches where possible.
+The parameter boolean taskTrackerFirst is removed from 
org.apache.hadoop.mapred.MiniMRCluster constructors.
+Thus the signatures of the following APIs
+  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, boolean taskTrackerFirst, int numDir)
+  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, boolean taskTrackerFirst, int numDir, 
String[] racks)
+  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, boolean taskTrackerFirst, int numDir, 
String[] racks, String[] hosts)
+  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, boolean taskTrackerFirst, int numDir, 
String[] racks, String[] hosts, UnixUserGroupInformation ugi )
+are changed to
+  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, int numDir)
+  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, int numDir, String[] racks)
+  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, int numDir, String[] racks, String[] hosts)
+  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, int numDir, String[] racks, String[] hosts, 
UnixUserGroupInformation ugi )
+respectively.
+Since the old signatures were not deprecated, any code using the old 
constructors must be changed to use the new constructors.
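A minimal before/after sketch of the constructor migration in test code (the ports, tracker count, namenode URI, and directory count are placeholders):

```java
// Before 0.17 -- the boolean taskTrackerFirst argument no longer compiles:
// MiniMRCluster mr = new MiniMRCluster(0, 0, 4, "file:///", true, 1);

// From 0.17 on, simply drop the boolean argument:
MiniMRCluster mr = new MiniMRCluster(0, 0, 4, "file:///", 1);
```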
 
 
 ---
 
-* [HADOOP-2765](https://issues.apache.org/jira/browse/HADOOP-2765) | *Major* | 
**setting memory limits for tasks**
+* [HADOOP-2055](https://issues.apache.org/jira/browse/HADOOP-2055) | *Minor* | 
**JobConf should have a setInputPathFilter method**
 
-This feature enables specifying ulimits for streaming/pipes tasks. Now pipes 
and streaming tasks have same virtual memory available as the java process 
which invokes them. Ulimit value will be the same as -Xmx value for java 
processes provided using mapred.child.java.opts.
+This issue provides users the ability to specify what paths to ignore for 
processing in the job input directory (apart from the filenames that start with 
"\_" and "."). Defines two new APIs - 
FileInputFormat.setInputPathFilter(JobConf, PathFilter), and, 
FileInputFormat.getInputPathFilter(JobConf).
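A sketch of a custom filter under these new APIs (the `.tmp` suffix rule is purely illustrative):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobConf;

// Skip *.tmp files in the input directory, on top of the default
// exclusion of names starting with "_" or ".".
public class TmpFilter implements PathFilter {
  public boolean accept(Path path) {
    return !path.getName().endsWith(".tmp");
  }
}

// In job setup:
//   JobConf conf = new JobConf();
//   FileInputFormat.setInputPathFilter(conf, TmpFilter.class);
```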
 
 
 ---
 
-* [HADOOP-2758](https://issues.apache.org/jira/browse/HADOOP-2758) | *Major* | 
**Reduce memory copies when data is read from DFS**
+* [HADOOP-2854](https://issues.apache.org/jira/browse/HADOOP-2854) | *Blocker* 
| **Remove the deprecated ipc.Server.getUserInfo()**
 
-DataNode takes 50% less CPU while serving data to clients.
+Removes deprecated method Server.getUserInfo()
 
 
 ---
 
-* [HADOOP-2657](https://issues.apache.org/jira/browse/HADOOP-2657) | *Major* | 
**Enhancements to DFSClient to support flushing data at any point in time**
+* [HADOOP-2563](https://issues.apache.org/jira/browse/HADOOP-2563) | *Blocker* 
| **Remove deprecated FileSystem#listPaths()**
 
-A new API DFSOututStream.flush() flushes all outstanding data to the pipeline 
of datanodes.
+Removes deprecated method FileSystem#listPaths()
 
 
 ---
 
-* [HADOOP-2634](https://issues.apache.org/jira/browse/HADOOP-2634) | *Blocker* 
| **Deprecate exists() and isDir() to simplify ClientProtocol.**
+* [HADOOP-2855](https://issues.apache.org/jira/browse/HADOOP-2855) | *Blocker* 
| **[HOD] HOD fails to allocate a cluster if the tarball specified is a 
relative path**
 
-Deprecates exists() from ClientProtocol
+Changes were made to handle relative paths correctly for important HOD options 
such as the cluster directory, tarball option, and script file.
 
 
 ---
 
-* [HADOOP-2563](https://issues.apache.org/jira/browse/HADOOP-2563) | *Blocker* 
| **Remove deprecated FileSystem#listPaths()**
+* [HADOOP-2818](https://issues.apache.org/jira/browse/HADOOP-2818) | *Major* | 
**Remove deprecated Counters.getDisplayName(),  getCounterNames(),   
getCounter(String counterName)**
 
-Removes deprecated method FileSystem#listPaths()
+The deprecated methods public String 
org.apache.hadoop.mapred.Counters.getDisplayName(String counter) and
+public synchronized Collection\<String\> 
org.apache.hadoop.mapred.Counters.getCounterNames() are removed.
+The deprecated method public synchronized long 
org.apache.hadoop.mapred.Counters.getCounter(String counterName) is 
undeprecated.
 
 
 ---
 
-* [HADOOP-2559](https://issues.apache.org/jira/browse/HADOOP-2559) | *Major* | 
**DFS should place one replica per rack**
+* [HADOOP-2831](https://issues.apache.org/jira/browse/HADOOP-2831) | *Blocker* 
| **Remove the deprecated INode.getAbsoluteName()**
 
-Change DFS block placement to allocate the first replica locally, the second 
off-rack, and the third intra-rack from the second.
+Removes deprecated method INode#getAbsoluteName()
 
 
 ---
 
-* [HADOOP-2551](https://issues.apache.org/jira/browse/HADOOP-2551) | *Blocker* 
| **hadoop-env.sh needs finer granularity**
+* [HADOOP-2947](https://issues.apache.org/jira/browse/HADOOP-2947) | *Blocker* 
| **[HOD] Hod should redirect stderr and stdout of Hadoop daemons to assist 
debugging**
 
-New environment variables were introduced to allow finer grained control of 
Java options passed to server and client JVMs.  See the new \*\_OPTS variables 
in conf/hadoop-env.sh.
+The stdout and stderr streams of daemons are redirected to files that are 
created under the hadoop log directory. Users can now send kill -3 (SIGQUIT) 
signals to the daemons to get stack traces and thread dumps for debugging.
 
 
 ---
 
-* [HADOOP-2470](https://issues.apache.org/jira/browse/HADOOP-2470) | *Major* | 
**Open and isDir should be removed from ClientProtocol**
+* [HADOOP-3137](https://issues.apache.org/jira/browse/HADOOP-3137) | *Major* | 
**[HOD] Update hod version number**
 
-Open and isDir were removed from ClientProtocol.
+Build script was changed to make HOD versions follow Hadoop version numbers. 
As a result of this change, the next version of HOD would not be 0.5, but would 
be synchronized to the Hadoop version number. Users who rely on the version 
number of HOD should note the unexpected jump in version numbers.
 
 
 ---
 
-* [HADOOP-2423](https://issues.apache.org/jira/browse/HADOOP-2423) | *Major* | 
**The codes in FSDirectory.mkdirs(...) is inefficient.**
+* [HADOOP-3093](https://issues.apache.org/jira/browse/HADOOP-3093) | *Major* | 
**map/reduce throws the following exception if "io.serializations" is not set:**
 
-Improved FSDirectory.mkdirs(...) performance.  In 
NNThroughputBenchmark-create, the ops per sec in  was improved ~54%.
+The following public APIs are added in org.apache.hadoop.conf.Configuration:
+  String[] Configuration.getStrings(String name, String... defaultValue)
+  void Configuration.setStrings(String name, String... values)
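A small usage sketch of the varargs forms (the configuration keys and values are hypothetical):

```java
Configuration conf = new Configuration();
conf.setStrings("example.codecs", "gzip", "lzo");     // stored comma-separated
String[] codecs = conf.getStrings("example.codecs");  // {"gzip", "lzo"}

// The varargs default is returned when the key is unset:
String[] fallback = conf.getStrings("example.unset.key", "a", "b");
```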
 
 
 ---
 
-* [HADOOP-2410](https://issues.apache.org/jira/browse/HADOOP-2410) | *Major* | 
**Make EC2 cluster nodes more independent of each other**
+* [HADOOP-2839](https://issues.apache.org/jira/browse/HADOOP-2839) | *Blocker* 
| **Remove deprecated methods in FileSystem**
 
-The command "hadoop-ec2 run" has been replaced by "hadoop-ec2 launch-cluster 
\<group\> \<number of instances\>", and "hadoop-ec2 start-hadoop" has been 
removed since Hadoop is started on instance start up. See 
http://wiki.apache.org/hadoop/AmazonEC2 for details.
+Removes deprecated API FileSystem#globPaths()
 
 
 ---
 
-* [HADOOP-2399](https://issues.apache.org/jira/browse/HADOOP-2399) | *Major* | 
**Input key and value to combiner and reducer should be reused**
+* [HADOOP-2551](https://issues.apache.org/jira/browse/HADOOP-2551) | *Blocker* 
| **hadoop-env.sh needs finer granularity**
 
-The key and value objects that are given to the Combiner and Reducer are now 
reused between calls. This is much more efficient, but the user can not assume 
the objects are constant.
+New environment variables were introduced to allow finer grained control of 
Java options passed to server and client JVMs.  See the new \*\_OPTS variables 
in conf/hadoop-env.sh.
 
 
 ---
 
-* [HADOOP-2345](https://issues.apache.org/jira/browse/HADOOP-2345) | *Major* | 
**new transactions to support HDFS Appends**
+* [HADOOP-2634](https://issues.apache.org/jira/browse/HADOOP-2634) | *Blocker* 
| **Deprecate exists() and isDir() to simplify ClientProtocol.**
 
-Introduce new namenode transactions to support appending to HDFS files.
+Deprecates exists() from ClientProtocol
 
 
 ---
 
-* [HADOOP-2239](https://issues.apache.org/jira/browse/HADOOP-2239) | *Major* | 
**Security:  Need to be able to encrypt Hadoop socket connections**
+* [HADOOP-3099](https://issues.apache.org/jira/browse/HADOOP-3099) | *Blocker* 
| **Need new options in distcp for preserving owner, group and permission**
 
-This patch adds a new FileSystem, HftpsFileSystem, that allows access to HDFS 
data over HTTPS.
+Added a new option -p to distcp for preserving file/directory status.
+-p[rbugp]              Preserve status
+                       r: replication number
+                       b: block size
+                       u: user
+                       g: group
+                       p: permission
+                       -p alone is equivalent to -prbugp
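Illustrative invocations (host names and paths are placeholders):

```shell
# Preserve only user, group and permission
hadoop distcp -pugp hdfs://nn1:8020/src hdfs://nn2:8020/dst

# -p alone preserves everything (equivalent to -prbugp)
hadoop distcp -p hdfs://nn1:8020/src hdfs://nn2:8020/dst
```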
 
 
 ---
 
-* [HADOOP-2219](https://issues.apache.org/jira/browse/HADOOP-2219) | *Major* | 
**du like command to count number of files under a given directory**
-
-Added a new fs command fs -count for counting the number of bytes, files and 
directories under a given path.
+* [HADOOP-3001](https://issues.apache.org/jira/browse/HADOOP-3001) | *Blocker* 
| **FileSystems should track how many bytes are read and written**
 
-Added a new RPC getContentSummary(String path) to ClientProtocol.
+Adds new framework map/reduce counters that track the number of bytes read and 
written to HDFS, local, KFS, and S3 file systems.
 
 
 ---
 
-* [HADOOP-2192](https://issues.apache.org/jira/browse/HADOOP-2192) | *Major* | 
**dfs mv command differs from POSIX standards**
+* [HADOOP-3048](https://issues.apache.org/jira/browse/HADOOP-3048) | *Blocker* 
| **Stringifier**
 
-this patch makes dfs -mv more like linux mv command getting rid of unnecessary 
output in dfs -mv and returns an error message when moving non existent 
files/directories --- mv: cannot stat "filename": No such file or directory.
+Adds a new interface, Stringifier, and a default implementation for 
converting objects to strings and restoring them from their string form.
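A sketch of the default implementation, DefaultStringifier, round-tripping a Writable (assumes the 0.17 org.apache.hadoop.io API; the value 42 is arbitrary):

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.DefaultStringifier;
import org.apache.hadoop.io.IntWritable;

public class StringifierDemo {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    DefaultStringifier<IntWritable> s =
        new DefaultStringifier<IntWritable>(conf, IntWritable.class);
    String encoded = s.toString(new IntWritable(42)); // object -> string
    IntWritable decoded = s.fromString(encoded);      // string -> object
  }
}
```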
 
 
 ---
 
-* [HADOOP-2178](https://issues.apache.org/jira/browse/HADOOP-2178) | *Major* | 
**Job history on HDFS**
+* [HADOOP-2410](https://issues.apache.org/jira/browse/HADOOP-2410) | *Major* | 
**Make EC2 cluster nodes more independent of each other**
 
-This feature provides facility to store job history on DFS. Now cluster admin 
can provide either localFS location or DFS location using configuration 
property "mapred.job.history.location"  to store job histroy. History will be 
logged in user specified location also. User can specify history location using 
configuration property "mapred.job.history.user.location" .
-The classes org.apache.hadoop.mapred.DefaultJobHistoryParser.MasterIndex and 
org.apache.hadoop.mapred.DefaultJobHistoryParser.MasterIndexParseListener, and 
public method org.apache.hadoop.mapred.DefaultJobHistoryParser.parseMasterIndex 
are not available.
-The signature of public method 
org.apache.hadoop.mapred.DefaultJobHistoryParser.parseJobTasks(File 
jobHistoryFile, JobHistory.JobInfo job) is changed to 
DefaultJobHistoryParser.parseJobTasks(String jobHistoryFile, JobHistory.JobInfo 
job, FileSystem fs).
-The signature of public method 
org.apache.hadoop.mapred.JobHistory.parseHistory(File path, Listener l) is 
changed to JobHistory.parseHistoryFromFS(String path, Listener l, FileSystem fs)
+The command "hadoop-ec2 run" has been replaced by "hadoop-ec2 launch-cluster 
\<group\> \<number of instances\>", and "hadoop-ec2 start-hadoop" has been 
removed since Hadoop is started on instance start up. See 
http://wiki.apache.org/hadoop/AmazonEC2 for details.
 
 
 ---
 
-* [HADOOP-2119](https://issues.apache.org/jira/browse/HADOOP-2119) | 
*Critical* | **JobTracker becomes non-responsive if the task trackers finish 
task too fast**
+* [HADOOP-2826](https://issues.apache.org/jira/browse/HADOOP-2826) | *Major* | 
**FileSplit.getFile(), LineRecordReader. readLine() need to be removed**
 
-This removes many inefficiencies in task placement and scheduling logic. The 
JobTracker would perform linear scans of the list of submitted tasks in cases 
where it did not find an obvious candidate task for a node. With better data 
structures for managing job state, all task placement operations now run in 
constant time (in most cases). Also, the task output promotions are batched.
+The deprecated methods public File org.apache.hadoop.mapred.FileSplit.getFile() and
+  public static long org.apache.hadoop.mapred.LineRecordReader.readLine(InputStream in, OutputStream out)
+are removed.
+The visibility of the constructor 
org.apache.hadoop.mapred.LineRecordReader.LineReader(InputStream in, 
Configuration conf) is made public.
+The signature of the public 
org.apache.hadoop.streaming.UTF8ByteArrayUtils.readLine(InputStream) method is 
changed to UTF8ByteArrayUtils.readLine(LineReader, Text). Since the old 
signature is not deprecated, any code using the old method must be changed to 
use the new method.
 
 
 ---
 
-* [HADOOP-2116](https://issues.apache.org/jira/browse/HADOOP-2116) | *Major* | 
**Job.local.dir to be exposed to tasks**
+* [HADOOP-3140](https://issues.apache.org/jira/browse/HADOOP-3140) | *Major* | 
**JobTracker should not try to promote a (map) task if it does not write to DFS 
at all**
 
-This issue restructures local job directory on the tasktracker.
-Users are provided with a job-specific shared directory  
(mapred-local/taskTracker/jobcache/$jobid/ work) for using it as scratch space, 
through configuration property and system property "job.local.dir". Now, the 
directory "../work" is not available from the task's cwd.
+Tasks that don't generate any output are not inserted in the commit queue of 
the JobTracker. They are marked as SUCCESSFUL by the TaskTracker and the 
JobTracker updates their state short-circuiting the commit queue.
 
 
 ---
 
-* [HADOOP-2063](https://issues.apache.org/jira/browse/HADOOP-2063) | *Blocker* 
| **Command to pull corrupted files**
+* [HADOOP-3041](https://issues.apache.org/jira/browse/HADOOP-3041) | *Blocker* 
| **Within a task, the value of JobConf.getOutputPath() method is modified**
 
-Added a new option -ignoreCrc to fs -get, or equivalently, fs -copyToLocal, 
such that crc checksum will be ignored for the command.  The use of this option 
is to download the corrupted files.
+1. Deprecates JobConf.setOutputPath and JobConf.getOutputPath
+JobConf.getOutputPath() still returns the same value that it used to return.
+2. Deprecates OutputFormatBase. Adds FileOutputFormat. Existing output formats 
extending OutputFormatBase, now extend FileOutputFormat.
+3. Adds the following APIs in FileOutputFormat :
+public static void setOutputPath(JobConf conf, Path outputDir); // sets 
mapred.output.dir
+public static Path getOutputPath(JobConf conf) ; // gets mapred.output.dir
+public static Path getWorkOutputPath(JobConf conf); // gets 
mapred.work.output.dir
+4. Adds static void setWorkOutputPath(JobConf conf, Path outputDir) to 
FileOutputFormat. The framework uses this to set mapred.work.output.dir as the 
task's temporary output dir.
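A migration sketch (the output path is a placeholder):

```java
// Before (deprecated in 0.17):
//   conf.setOutputPath(new Path("/user/alice/out"));

// After:
FileOutputFormat.setOutputPath(conf, new Path("/user/alice/out"));

// Inside a task, side files should go under the task's temporary
// output directory rather than the final output directory:
Path sideFileDir = FileOutputFormat.getWorkOutputPath(conf);
```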
 
 
 ---
 
-* [HADOOP-2055](https://issues.apache.org/jira/browse/HADOOP-2055) | *Minor* | 
**JobConf should have a setInputPathFilter method**
+* [HADOOP-3168](https://issues.apache.org/jira/browse/HADOOP-3168) | *Major* | 
**reduce amount of logging in hadoop streaming**
 
-This issue provides users the ability to specify what paths to ignore for 
processing in the job input directory (apart from the filenames that start with 
"\_" and "."). Defines two new APIs - 
FileInputFormat.setInputPathFilter(JobConf, PathFilter), and, 
FileInputFormat.getInputPathFilter(JobConf).
+Decreases the frequency of logging from streaming from every 100 records to 
every 10,000 records.
 
 
 ---
 
-* [HADOOP-2027](https://issues.apache.org/jira/browse/HADOOP-2027) | *Major* | 
**FileSystem should provide byte ranges for file locations**
+* [HADOOP-3152](https://issues.apache.org/jira/browse/HADOOP-3152) | *Minor* | 
**Make index interval configurable when using MapFileOutputFormat for map-reduce job**
 
-New FileSystem API getFileBlockLocations to return the number of bytes in each 
block in a file via a single rpc to the namenode to speed up job planning. 
Deprecates getFileCacheHints.
+Add a static method MapFile#setIndexInterval(Configuration, int interval) so 
that MapReduce jobs that use MapFileOutputFormat can set the index interval.
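Following the signature given above, a job could set the interval like this (the value 128 is arbitrary):

```java
JobConf conf = new JobConf();
conf.setOutputFormat(MapFileOutputFormat.class);
// Write an index entry for every 128th key instead of the default
MapFile.setIndexInterval(conf, 128);
```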
 
 
 ---
 
-* [HADOOP-1986](https://issues.apache.org/jira/browse/HADOOP-1986) | *Major* | 
**Add support for a general serialization mechanism for Map Reduce**
+* [HADOOP-3223](https://issues.apache.org/jira/browse/HADOOP-3223) | *Blocker* 
| **Hadoop dfs -help for permissions contains a typo**
 
-Programs that implement the raw Mapper or Reducer interfaces will need 
modification to compile with this release. For example, 
+Minor typo fix in the help message for chmod. Impact: none.
 
-class MyMapper implements Mapper {
-  public void map(WritableComparable key, Writable val,
-    OutputCollector out, Reporter reporter) throws IOException {
-    // ...
-  }
-  // ...
-}
 
-will need to be changed to refer to the parameterized type. For example:
+---
 
-class MyMapper implements Mapper\<WritableComparable, Writable, 
WritableComparable, Writable\> {
-  public void map(WritableComparable key, Writable val,
-    OutputCollector\<WritableComparable, Writable\> out, Reporter reporter) 
throws IOException {
-    // ...
-  }
-  // ...
-}
+* [HADOOP-3204](https://issues.apache.org/jira/browse/HADOOP-3204) | *Blocker* 
| **LocalFSMerger needs to catch throwable**
 
-Similarly implementations of the following raw interfaces will need 
modification: InputFormat, OutputCollector, OutputFormat, Partitioner, 
RecordReader, RecordWriter
+Fixes LocalFSMerger in ReduceTask.java to handle errors/exceptions better. 
Prior to this all exceptions except IOException would be silently ignored.
 
 
 ---
 
-* [HADOOP-1985](https://issues.apache.org/jira/browse/HADOOP-1985) | *Major* | 
**Abstract node to switch mapping into a topology service class used by 
namenode and jobtracker**
+* [HADOOP-3239](https://issues.apache.org/jira/browse/HADOOP-3239) | *Major* | 
**exists() calls logs FileNotFoundException in namenode log**
 
-This issue introduces rack awareness for map tasks. It also moves the rack 
resolution logic to the central servers - NameNode & JobTracker. The 
administrator can specify a loadable class given by 
topology.node.switch.mapping.impl to specify the class implementing the logic 
for rack resolution. The class must implement a method - resolve(List\<String\> 
names), where names is the list of DNS-names/IP-addresses that we want 
resolved. The return value is a list of resolved network paths of the form 
/foo/rack, where rack is the rackID where the node belongs to and foo is the 
switch where multiple racks are connected, and so on. The default 
implementation of this class is packaged along with hadoop and points to 
org.apache.hadoop.net.ScriptBasedMapping and this class loads a script that can 
be used for rack resolution. The script location is configurable. It is 
specified by topology.script.file.name and defaults to an empty script. In the 
case where the script name is empty, /default-rack
  is returned for all dns-names/IP-addresses. The loadable 
topology.node.switch.mapping.impl provides administrators fleixibilty to define 
how their site's node resolution should happen.
-For mapred, one can also specify the level of the cache w.r.t the number of 
levels in the resolved network path - defaults to two. This means that the 
JobTracker will cache tasks at the host level and at the rack level. 
-Known issue: the task caching will not work with levels greater than 2 (beyond 
racks). This bug is tracked in HADOOP-3296.
+getFileInfo now returns null when the file is not found, instead of throwing 
FileNotFoundException.
 
 
 ---
 
-* [HADOOP-1622](https://issues.apache.org/jira/browse/HADOOP-1622) | *Major* | 
**Hadoop should provide a way to allow the user to specify jar file(s) the user 
job depends on**
+* [HADOOP-3162](https://issues.apache.org/jira/browse/HADOOP-3162) | *Blocker* 
| **Map/reduce stops working with comma separated input paths**
+
+The public methods org.apache.hadoop.mapred.JobConf.setInputPath(Path) and 
org.apache.hadoop.mapred.JobConf.addInputPath(Path) are deprecated. And the 
methods have the semantics of branch 0.16.
+The following public APIs  are added in 
org.apache.hadoop.mapred.FileInputFormat :
+public static void setInputPaths(JobConf job, Path... paths);
+public static void setInputPaths(JobConf job, String commaSeparatedPaths);
+public static void addInputPath(JobConf job, Path path);
+public static void addInputPaths(JobConf job, String commaSeparatedPaths);
+Earlier code calling JobConf.setInputPath(Path) and JobConf.addInputPath(Path) 
should now call FileInputFormat.setInputPaths(JobConf, Path...) and 
FileInputFormat.addInputPath(JobConf, Path) respectively.
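A migration sketch for the input-path APIs (paths are placeholders):

```java
// Before (deprecated in 0.17):
//   conf.setInputPath(new Path("/data/in1"));
//   conf.addInputPath(new Path("/data/in2"));

// After:
FileInputFormat.setInputPaths(conf, new Path("/data/in1"));
FileInputFormat.addInputPath(conf, new Path("/data/in2"));

// Comma-separated string form:
FileInputFormat.setInputPaths(conf, "/data/in1,/data/in2");
```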
 
-This patch allows new command line options for 
 
-hadoop jar 
-which are 
+---
 
-hadoop jar -files \<comma seperated list of files\> -libjars \<comma seperated 
list of jars\> -archives \<comma seperated list of archives\>
+* [HADOOP-3124](https://issues.apache.org/jira/browse/HADOOP-3124) | *Major* | 
**DFS data node should not use hard coded 10 minutes as write timeout.**
 
--files options allows you to speficy comma seperated list of path which would 
be present in your current working directory of your task
--libjars option allows you to add jars to the classpaths of the maps and 
reduces. 
--archives allows you to pass archives as arguments that are unzipped/unjarred 
and a link with name of the jar/zip are created in the current working 
directory if tasks.
+Makes the DataNode socket write timeout configurable. User impact: none.
 
 
 ---
 
-* [HADOOP-1593](https://issues.apache.org/jira/browse/HADOOP-1593) | *Major* | 
**FsShell should work with paths in non-default FileSystem**
-
-This bug allows non default path to specifeid in fsshell commands.
+* [HADOOP-3266](https://issues.apache.org/jira/browse/HADOOP-3266) | *Major* | 
**Remove HOD changes from CHANGES.txt, as they are now inside src/contrib/hod**
 
-So, you can now specify hadoop dfs -ls hdfs://remotehost1:port/path 
-  and  hadoop dfs -ls hdfs://remotehost2:port/path without changing the config.
+Moved HOD change items from CHANGES.txt to a new file 
src/contrib/hod/CHANGES.txt.
 
 
 ---
 
-* [HADOOP-910](https://issues.apache.org/jira/browse/HADOOP-910) | *Major* | 
**Reduces can do merges for the on-disk map output files in parallel with their 
copying**
+* [HADOOP-3280](https://issues.apache.org/jira/browse/HADOOP-3280) | *Blocker* 
| **virtual address space limits break streaming apps**
 
-Reducers now perform merges of shuffle data (both in-memory and on disk) while 
fetching map outputs. Earlier, during shuffle they used to merge only the 
in-memory outputs.
+This patch adds the mapred.child.ulimit to limit the virtual memory for 
children processes to the given value.
 
 
 ---
 
-* [HADOOP-771](https://issues.apache.org/jira/browse/HADOOP-771) | *Major* | 
**Namenode should return error when trying to delete non-empty directory**
+* [HADOOP-3382](https://issues.apache.org/jira/browse/HADOOP-3382) | *Blocker* 
| **Memory leak when files are not cleanly closed**
 
-This patch adds a new api to file system i.e delete(path, boolean), 
deprecating the previous delete(path). 
-the new api recursively deletes files only if boolean is set to true. 
-If path is a file, the boolean value does not matter, if path is a directory 
and the directory is non empty delete(path, false) will throw an exception and 
delete(path, true) will delete all files recursively.
+Fixed a memory leak associated with 'abandoned' files (i.e. not cleanly 
closed). This held up significant amounts of memory depending on activity and 
how long NameNode has been running.
 
 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.1/CHANGES.0.17.1.md
----------------------------------------------------------------------
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.1/CHANGES.0.17.1.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.1/CHANGES.0.17.1.md
index f69eb78..b372c5d 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.1/CHANGES.0.17.1.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.1/CHANGES.0.17.1.md
@@ -27,54 +27,18 @@
 | [HADOOP-3565](https://issues.apache.org/jira/browse/HADOOP-3565) | 
JavaSerialization can throw java.io.StreamCorruptedException |  Major | . | Tom 
White | Tom White |
 
 
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### NEW FEATURES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### IMPROVEMENTS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
 ### BUG FIXES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |:---- |:---- | :--- |:---- |:---- |:---- |
-| [HADOOP-3550](https://issues.apache.org/jira/browse/HADOOP-3550) | Reduce 
tasks failing with OOM |  Blocker | . | Arun C Murthy | Chris Douglas |
-| [HADOOP-3526](https://issues.apache.org/jira/browse/HADOOP-3526) | 
contrib/data\_join doesn't work |  Blocker | . | Spyros Blanas | Spyros Blanas |
-| [HADOOP-3522](https://issues.apache.org/jira/browse/HADOOP-3522) | 
ValuesIterator.next() doesn't return a new object, thus failing many equals() 
tests. |  Major | . | Spyros Blanas | Owen O'Malley |
-| [HADOOP-3477](https://issues.apache.org/jira/browse/HADOOP-3477) | release 
tar.gz contains duplicate files |  Major | build | Adam Heath | Adam Heath |
-| [HADOOP-3475](https://issues.apache.org/jira/browse/HADOOP-3475) | 
MapOutputBuffer allocates 4x as much space to record capacity as intended |  
Major | . | Chris Douglas | Chris Douglas |
+| [HADOOP-2159](https://issues.apache.org/jira/browse/HADOOP-2159) | Namenode 
stuck in safemode |  Major | . | Christian Kunz | Hairong Kuang |
 | [HADOOP-3472](https://issues.apache.org/jira/browse/HADOOP-3472) | 
MapFile.Reader getClosest() function returns incorrect results when before is 
true |  Major | io | Todd Lipcon | stack |
 | [HADOOP-3442](https://issues.apache.org/jira/browse/HADOOP-3442) | QuickSort 
may get into unbounded recursion |  Blocker | . | Runping Qi | Chris Douglas |
-| [HADOOP-2159](https://issues.apache.org/jira/browse/HADOOP-2159) | Namenode 
stuck in safemode |  Major | . | Christian Kunz | Hairong Kuang |
+| [HADOOP-3477](https://issues.apache.org/jira/browse/HADOOP-3477) | release 
tar.gz contains duplicate files |  Major | build | Adam Heath | Adam Heath |
+| [HADOOP-3475](https://issues.apache.org/jira/browse/HADOOP-3475) | 
MapOutputBuffer allocates 4x as much space to record capacity as intended |  
Major | . | Chris Douglas | Chris Douglas |
+| [HADOOP-3522](https://issues.apache.org/jira/browse/HADOOP-3522) | 
ValuesIterator.next() doesn't return a new object, thus failing many equals() 
tests. |  Major | . | Spyros Blanas | Owen O'Malley |
+| [HADOOP-3550](https://issues.apache.org/jira/browse/HADOOP-3550) | Reduce 
tasks failing with OOM |  Blocker | . | Arun C Murthy | Chris Douglas |
+| [HADOOP-3526](https://issues.apache.org/jira/browse/HADOOP-3526) | 
contrib/data\_join doesn't work |  Blocker | . | Spyros Blanas | Spyros Blanas |
 | [HADOOP-1979](https://issues.apache.org/jira/browse/HADOOP-1979) | fsck on 
namenode without datanodes takes too much time |  Minor | . | Koji Noguchi | 
Lohit Vijayarenu |
 
 
-### TESTS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### SUB-TASKS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### OTHER:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-

http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.2/CHANGES.0.17.2.md
----------------------------------------------------------------------
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.2/CHANGES.0.17.2.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.2/CHANGES.0.17.2.md
index db3eac4..629976c 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.2/CHANGES.0.17.2.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.2/CHANGES.0.17.2.md
@@ -20,64 +20,24 @@
 
 ## Release 0.17.2 - 2008-08-11
 
-### INCOMPATIBLE CHANGES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### NEW FEATURES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### IMPROVEMENTS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
 
 
 ### BUG FIXES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |:---- |:---- | :--- |:---- |:---- |:---- |
-| [HADOOP-4773](https://issues.apache.org/jira/browse/HADOOP-4773) | namenode startup error, hadoop-user-namenode.pid permission denied. |  Critical | . | Focus |  |
-| [HADOOP-3931](https://issues.apache.org/jira/browse/HADOOP-3931) | Bug in MapTask.MapOutputBuffer.collect leads to an unnecessary and harmful 'reset' |  Blocker | . | Arun C Murthy | Chris Douglas |
-| [HADOOP-3859](https://issues.apache.org/jira/browse/HADOOP-3859) | 1000  concurrent read on a single file failing  the task/client |  Blocker | . | Koji Noguchi | Johan Oskarsson |
-| [HADOOP-3813](https://issues.apache.org/jira/browse/HADOOP-3813) | RPC queue overload of JobTracker |  Major | . | Christian Kunz | Amareshwari Sriramadasu |
-| [HADOOP-3760](https://issues.apache.org/jira/browse/HADOOP-3760) | DFS operations fail because of Stream closed error |  Blocker | . | Amar Kamat | Lohit Vijayarenu |
+| [HADOOP-3370](https://issues.apache.org/jira/browse/HADOOP-3370) | failed tasks may stay forever in TaskTracker.runningJobs |  Critical | . | Zheng Shao | Zheng Shao |
+| [HADOOP-3633](https://issues.apache.org/jira/browse/HADOOP-3633) | Uncaught exception in DataXceiveServer |  Blocker | . | Koji Noguchi | Konstantin Shvachko |
+| [HADOOP-3681](https://issues.apache.org/jira/browse/HADOOP-3681) | Infinite loop in dfs close |  Blocker | . | Koji Noguchi | Lohit Vijayarenu |
+| [HADOOP-3002](https://issues.apache.org/jira/browse/HADOOP-3002) | HDFS should not remove blocks while in safemode. |  Blocker | . | Konstantin Shvachko | Konstantin Shvachko |
+| [HADOOP-3685](https://issues.apache.org/jira/browse/HADOOP-3685) | Unbalanced replication target |  Blocker | . | Koji Noguchi | Hairong Kuang |
 | [HADOOP-3758](https://issues.apache.org/jira/browse/HADOOP-3758) | Excessive exceptions in HDFS namenode log file |  Blocker | . | Jim Huang | Lohit Vijayarenu |
+| [HADOOP-3760](https://issues.apache.org/jira/browse/HADOOP-3760) | DFS operations fail because of Stream closed error |  Blocker | . | Amar Kamat | Lohit Vijayarenu |
 | [HADOOP-3707](https://issues.apache.org/jira/browse/HADOOP-3707) | Frequent DiskOutOfSpaceException on almost-full datanodes |  Blocker | . | Koji Noguchi | Raghu Angadi |
-| [HADOOP-3685](https://issues.apache.org/jira/browse/HADOOP-3685) | Unbalanced replication target |  Blocker | . | Koji Noguchi | Hairong Kuang |
-| [HADOOP-3681](https://issues.apache.org/jira/browse/HADOOP-3681) | Infinite loop in dfs close |  Blocker | . | Koji Noguchi | Lohit Vijayarenu |
 | [HADOOP-3678](https://issues.apache.org/jira/browse/HADOOP-3678) | Avoid spurious "DataXceiver: java.io.IOException: Connection reset by peer" errors in DataNode log |  Blocker | . | Raghu Angadi | Raghu Angadi |
-| [HADOOP-3633](https://issues.apache.org/jira/browse/HADOOP-3633) | Uncaught exception in DataXceiveServer |  Blocker | . | Koji Noguchi | Konstantin Shvachko |
-| [HADOOP-3370](https://issues.apache.org/jira/browse/HADOOP-3370) | failed tasks may stay forever in TaskTracker.runningJobs |  Critical | . | Zheng Shao | Zheng Shao |
-| [HADOOP-3002](https://issues.apache.org/jira/browse/HADOOP-3002) | HDFS should not remove blocks while in safemode. |  Blocker | . | Konstantin Shvachko | Konstantin Shvachko |
-
-
-### TESTS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### SUB-TASKS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### OTHER:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-3813](https://issues.apache.org/jira/browse/HADOOP-3813) | RPC queue overload of JobTracker |  Major | . | Christian Kunz | Amareshwari Sriramadasu |
+| [HADOOP-3859](https://issues.apache.org/jira/browse/HADOOP-3859) | 1000  concurrent read on a single file failing  the task/client |  Blocker | . | Koji Noguchi | Johan Oskarsson |
+| [HADOOP-3931](https://issues.apache.org/jira/browse/HADOOP-3931) | Bug in MapTask.MapOutputBuffer.collect leads to an unnecessary and harmful 'reset' |  Blocker | . | Arun C Murthy | Chris Douglas |
+| [HADOOP-4773](https://issues.apache.org/jira/browse/HADOOP-4773) | namenode startup error, hadoop-user-namenode.pid permission denied. |  Critical | . | Focus |  |
 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.2/RELEASENOTES.0.17.2.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.2/RELEASENOTES.0.17.2.md b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.2/RELEASENOTES.0.17.2.md
index 27e4924..ca7c081 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.2/RELEASENOTES.0.17.2.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.2/RELEASENOTES.0.17.2.md
@@ -23,13 +23,6 @@ These release notes cover new developer and user-facing incompatibilities, impor
 
 ---
 
-* [HADOOP-3859](https://issues.apache.org/jira/browse/HADOOP-3859) | *Blocker* | **1000  concurrent read on a single file failing  the task/client**
-
-Allows the user to change the maximum number of xceivers in the datanode.
-
-
----
-
 * [HADOOP-3760](https://issues.apache.org/jira/browse/HADOOP-3760) | *Blocker* | **DFS operations fail because of Stream closed error**
 
 Fix a bug with HDFS file close() mistakenly introduced by HADOOP-3681.
@@ -49,4 +42,9 @@ NameNode keeps a count of number of blocks scheduled to be written to a datanode
 Avoid spurious exceptions logged at DataNode when clients read from DFS.
 
 
+---
+
+* [HADOOP-3859](https://issues.apache.org/jira/browse/HADOOP-3859) | *Blocker* | **1000  concurrent read on a single file failing  the task/client**
+
+Allows the user to change the maximum number of xceivers in the datanode.
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.3/CHANGES.0.17.3.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.3/CHANGES.0.17.3.md b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.3/CHANGES.0.17.3.md
index 4b7b7b1..5a97d7c 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.3/CHANGES.0.17.3.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.3/CHANGES.0.17.3.md
@@ -18,55 +18,21 @@
 -->
 # Apache Hadoop Changelog
 
-## Release 0.17.3 - Unreleased (as of 2016-03-04)
+## Release 0.17.3 - Unreleased (as of 2017-08-28)
 
-### INCOMPATIBLE CHANGES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### NEW FEATURES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### IMPROVEMENTS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
 
 
 ### BUG FIXES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |:---- |:---- | :--- |:---- |:---- |:---- |
-| [HADOOP-4326](https://issues.apache.org/jira/browse/HADOOP-4326) | ChecksumFileSystem does not override all create(...) methods |  Blocker | fs | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
-| [HADOOP-4318](https://issues.apache.org/jira/browse/HADOOP-4318) | distcp fails |  Blocker | . | Christian Kunz | Tsz Wo Nicholas Sze |
 | [HADOOP-4277](https://issues.apache.org/jira/browse/HADOOP-4277) | Checksum verification is disabled for LocalFS |  Blocker | . | Raghu Angadi | Raghu Angadi |
 | [HADOOP-4271](https://issues.apache.org/jira/browse/HADOOP-4271) | Bug in FSInputChecker makes it possible to read from an invalid buffer |  Blocker | fs | Ning Li | Ning Li |
+| [HADOOP-4318](https://issues.apache.org/jira/browse/HADOOP-4318) | distcp fails |  Blocker | . | Christian Kunz | Tsz Wo Nicholas Sze |
+| [HADOOP-4326](https://issues.apache.org/jira/browse/HADOOP-4326) | ChecksumFileSystem does not override all create(...) methods |  Blocker | fs | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
 | [HADOOP-3217](https://issues.apache.org/jira/browse/HADOOP-3217) | [HOD] Be less agressive when querying job status from resource manager. |  Blocker | contrib/hod | Hemanth Yamijala | Hemanth Yamijala |
 
 
-### TESTS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### SUB-TASKS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
 ### OTHER:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |

