http://git-wip-us.apache.org/repos/asf/hadoop/blob/d759b4bd/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/CHANGES.0.17.0.md
----------------------------------------------------------------------
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/CHANGES.0.17.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/CHANGES.0.17.0.md
new file mode 100644
index 0000000..bf0fd32
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/CHANGES.0.17.0.md
@@ -0,0 +1,265 @@
+
+<!---
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+-->
+# Apache Hadoop Changelog
+
+## Release 0.17.0 - 2008-05-20
+
+### INCOMPATIBLE CHANGES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-3280](https://issues.apache.org/jira/browse/HADOOP-3280) | virtual 
address space limits break streaming apps |  Blocker | . | Rick Cox | Arun C 
Murthy |
+| [HADOOP-3266](https://issues.apache.org/jira/browse/HADOOP-3266) | Remove 
HOD changes from CHANGES.txt, as they are now inside src/contrib/hod |  Major | 
contrib/hod | Hemanth Yamijala | Hemanth Yamijala |
+| [HADOOP-3239](https://issues.apache.org/jira/browse/HADOOP-3239) | exists() 
calls logs FileNotFoundException in namenode log |  Major | . | Lohit 
Vijayarenu | Lohit Vijayarenu |
+| [HADOOP-3137](https://issues.apache.org/jira/browse/HADOOP-3137) | [HOD] 
Update hod version number |  Major | contrib/hod | Hemanth Yamijala | Hemanth 
Yamijala |
+| [HADOOP-3091](https://issues.apache.org/jira/browse/HADOOP-3091) | hadoop 
dfs -put should support multiple src |  Major | . | Lohit Vijayarenu | Lohit 
Vijayarenu |
+| [HADOOP-3060](https://issues.apache.org/jira/browse/HADOOP-3060) | 
MiniMRCluster is ignoring parameter taskTrackerFirst |  Major | . | Amareshwari 
Sriramadasu | Amareshwari Sriramadasu |
+| [HADOOP-2873](https://issues.apache.org/jira/browse/HADOOP-2873) | Namenode 
fails to re-start after cluster shutdown - DFSClient: Could not obtain blocks 
even all datanodes were up & live |  Major | . | André Martin | dhruba 
borthakur |
+| [HADOOP-2854](https://issues.apache.org/jira/browse/HADOOP-2854) | Remove 
the deprecated ipc.Server.getUserInfo() |  Blocker | . | Tsz Wo Nicholas Sze | 
Lohit Vijayarenu |
+| [HADOOP-2839](https://issues.apache.org/jira/browse/HADOOP-2839) | Remove 
deprecated methods in FileSystem |  Blocker | fs | Hairong Kuang | Lohit 
Vijayarenu |
+| [HADOOP-2831](https://issues.apache.org/jira/browse/HADOOP-2831) | Remove 
the deprecated INode.getAbsoluteName() |  Blocker | . | Tsz Wo Nicholas Sze | 
Lohit Vijayarenu |
+| [HADOOP-2828](https://issues.apache.org/jira/browse/HADOOP-2828) | Remove 
deprecated methods in Configuration.java |  Major | conf | Amareshwari 
Sriramadasu | Amareshwari Sriramadasu |
+| [HADOOP-2826](https://issues.apache.org/jira/browse/HADOOP-2826) | 
FileSplit.getFile(), LineRecordReader. readLine() need to be removed |  Major | 
. | Amareshwari Sriramadasu | Amareshwari Sriramadasu |
+| [HADOOP-2825](https://issues.apache.org/jira/browse/HADOOP-2825) | 
MapOutputLocation.getFile() needs to be removed |  Major | . | Amareshwari 
Sriramadasu | Amareshwari Sriramadasu |
+| [HADOOP-2824](https://issues.apache.org/jira/browse/HADOOP-2824) | One of 
MiniMRCluster constructors needs tobe removed |  Major | . | Amareshwari 
Sriramadasu | Amareshwari Sriramadasu |
+| [HADOOP-2823](https://issues.apache.org/jira/browse/HADOOP-2823) | 
SimpleCharStream.getColumn(),  getLine() methods to be removed. |  Major | 
record | Amareshwari Sriramadasu | Amareshwari Sriramadasu |
+| [HADOOP-2822](https://issues.apache.org/jira/browse/HADOOP-2822) | Remove 
deprecated classes in mapred |  Major | . | Amareshwari Sriramadasu | 
Amareshwari Sriramadasu |
+| [HADOOP-2821](https://issues.apache.org/jira/browse/HADOOP-2821) | Remove 
deprecated classes in util |  Major | util | Amareshwari Sriramadasu | 
Amareshwari Sriramadasu |
+| [HADOOP-2820](https://issues.apache.org/jira/browse/HADOOP-2820) | Remove 
deprecated classes in streaming |  Major | . | Amareshwari Sriramadasu | 
Amareshwari Sriramadasu |
+| [HADOOP-2819](https://issues.apache.org/jira/browse/HADOOP-2819) | Remove 
deprecated methods in JobConf() |  Major | . | Amareshwari Sriramadasu | 
Amareshwari Sriramadasu |
+| [HADOOP-2818](https://issues.apache.org/jira/browse/HADOOP-2818) | Remove 
deprecated Counters.getDisplayName(),  getCounterNames(),   getCounter(String 
counterName) |  Major | . | Amareshwari Sriramadasu | Amareshwari Sriramadasu |
+| [HADOOP-2765](https://issues.apache.org/jira/browse/HADOOP-2765) | setting 
memory limits for tasks |  Major | . | Joydeep Sen Sarma | Amareshwari 
Sriramadasu |
+| [HADOOP-2634](https://issues.apache.org/jira/browse/HADOOP-2634) | Deprecate 
exists() and isDir() to simplify ClientProtocol. |  Blocker | . | Konstantin 
Shvachko | Lohit Vijayarenu |
+| [HADOOP-2563](https://issues.apache.org/jira/browse/HADOOP-2563) | Remove 
deprecated FileSystem#listPaths() |  Blocker | fs | Doug Cutting | Lohit 
Vijayarenu |
+| [HADOOP-2470](https://issues.apache.org/jira/browse/HADOOP-2470) | Open and 
isDir should be removed from ClientProtocol |  Major | . | Hairong Kuang | Tsz 
Wo Nicholas Sze |
+| [HADOOP-2410](https://issues.apache.org/jira/browse/HADOOP-2410) | Make EC2 
cluster nodes more independent of each other |  Major | contrib/cloud | Tom 
White | Chris K Wensel |
+| [HADOOP-2399](https://issues.apache.org/jira/browse/HADOOP-2399) | Input key 
and value to combiner and reducer should be reused |  Major | . | Owen O'Malley 
| Owen O'Malley |
+| [HADOOP-2345](https://issues.apache.org/jira/browse/HADOOP-2345) | new 
transactions to support HDFS Appends |  Major | . | dhruba borthakur | dhruba 
borthakur |
+| [HADOOP-2219](https://issues.apache.org/jira/browse/HADOOP-2219) | du like 
command to count number of files under a given directory |  Major | . | Koji 
Noguchi | Tsz Wo Nicholas Sze |
+| [HADOOP-2192](https://issues.apache.org/jira/browse/HADOOP-2192) | dfs mv 
command differs from POSIX standards |  Major | . | Mukund Madhugiri | Mahadev 
konar |
+| [HADOOP-2178](https://issues.apache.org/jira/browse/HADOOP-2178) | Job 
history on HDFS |  Major | . | Amareshwari Sriramadasu | Amareshwari 
Sriramadasu |
+| [HADOOP-2116](https://issues.apache.org/jira/browse/HADOOP-2116) | 
Job.local.dir to be exposed to tasks |  Major | . | Milind Bhandarkar | 
Amareshwari Sriramadasu |
+| [HADOOP-2027](https://issues.apache.org/jira/browse/HADOOP-2027) | 
FileSystem should provide byte ranges for file locations |  Major | fs | Owen 
O'Malley | Lohit Vijayarenu |
+| [HADOOP-1986](https://issues.apache.org/jira/browse/HADOOP-1986) | Add 
support for a general serialization mechanism for Map Reduce |  Major | . | Tom 
White | Tom White |
+| [HADOOP-1985](https://issues.apache.org/jira/browse/HADOOP-1985) | Abstract 
node to switch mapping into a topology service class used by namenode and 
jobtracker |  Major | . | eric baldeschwieler | Devaraj Das |
+| [HADOOP-771](https://issues.apache.org/jira/browse/HADOOP-771) | Namenode 
should return error when trying to delete non-empty directory |  Major | . | 
Milind Bhandarkar | Mahadev konar |
+
+
+### NEW FEATURES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-3152](https://issues.apache.org/jira/browse/HADOOP-3152) | Make 
index interval configuable when using MapFileOutputFormat for map-reduce job |  
Minor | io | Rong-En Fan | Doug Cutting |
+| [HADOOP-3048](https://issues.apache.org/jira/browse/HADOOP-3048) | 
Stringifier |  Blocker | io | Enis Soztutar | Enis Soztutar |
+| [HADOOP-3001](https://issues.apache.org/jira/browse/HADOOP-3001) | 
FileSystems should track how many bytes are read and written |  Blocker | fs | 
Owen O'Malley | Owen O'Malley |
+| [HADOOP-2951](https://issues.apache.org/jira/browse/HADOOP-2951) | contrib package provides a utility to build or update an index using Map/Reduce |  Major | . | Ning Li | Doug Cutting |
+| [HADOOP-2906](https://issues.apache.org/jira/browse/HADOOP-2906) | output 
format classes that can write to different files depending on  keys and/or 
config variable |  Major | . | Runping Qi | Runping Qi |
+| [HADOOP-2657](https://issues.apache.org/jira/browse/HADOOP-2657) | 
Enhancements to DFSClient to support flushing data at any point in time |  
Major | . | dhruba borthakur | dhruba borthakur |
+| [HADOOP-2063](https://issues.apache.org/jira/browse/HADOOP-2063) | Command 
to pull corrupted files |  Blocker | fs | Koji Noguchi | Tsz Wo Nicholas Sze |
+| [HADOOP-2055](https://issues.apache.org/jira/browse/HADOOP-2055) | JobConf 
should have a setInputPathFilter method |  Minor | . | Alejandro Abdelnur | 
Alejandro Abdelnur |
+| [HADOOP-1593](https://issues.apache.org/jira/browse/HADOOP-1593) | FsShell 
should work with paths in non-default FileSystem |  Major | fs | Doug Cutting | 
Mahadev konar |
+
+
+### IMPROVEMENTS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-3174](https://issues.apache.org/jira/browse/HADOOP-3174) | Improve 
documentation and supply an example for MultiFileInputFormat |  Major | 
documentation | Enis Soztutar | Enis Soztutar |
+| [HADOOP-3143](https://issues.apache.org/jira/browse/HADOOP-3143) | Decrease 
the number of slaves in TestMiniMRDFSSort to 3. |  Major | test | Owen O'Malley 
| Nigel Daley |
+| [HADOOP-3123](https://issues.apache.org/jira/browse/HADOOP-3123) | Build 
native libraries on Solaris |  Major | build | Tom White | Tom White |
+| [HADOOP-3099](https://issues.apache.org/jira/browse/HADOOP-3099) | Need new 
options in distcp for preserving ower, group and permission |  Blocker | util | 
Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
+| [HADOOP-3092](https://issues.apache.org/jira/browse/HADOOP-3092) | Show 
counter values from "job -status" command |  Major | scripts | Tom White | Tom 
White |
+| [HADOOP-3046](https://issues.apache.org/jira/browse/HADOOP-3046) | Text and 
BytesWritable's raw comparators should use the lengths provided instead of 
rebuilding them from scratch using readInt |  Blocker | . | Owen O'Malley | 
Owen O'Malley |
+| [HADOOP-2996](https://issues.apache.org/jira/browse/HADOOP-2996) | 
StreamUtils abuses StringBuffers |  Trivial | . | Dave Brosius | Dave Brosius |
+| [HADOOP-2994](https://issues.apache.org/jira/browse/HADOOP-2994) | DFSClient 
calls toString on strings. |  Trivial | . | Dave Brosius | Dave Brosius |
+| [HADOOP-2993](https://issues.apache.org/jira/browse/HADOOP-2993) | Specify 
which JAVA\_HOME should be set |  Major | documentation | Jason Rennie | Arun C 
Murthy |
+| [HADOOP-2947](https://issues.apache.org/jira/browse/HADOOP-2947) | [HOD] Hod 
should redirect stderr and stdout of Hadoop daemons to assist debugging |  
Blocker | contrib/hod | Hemanth Yamijala | Vinod Kumar Vavilapalli |
+| [HADOOP-2939](https://issues.apache.org/jira/browse/HADOOP-2939) | Make the 
Hudson patch process an executable ant target |  Minor | test | Nigel Daley | 
Nigel Daley |
+| [HADOOP-2919](https://issues.apache.org/jira/browse/HADOOP-2919) | Create 
fewer copies of buffer data during sort/spill |  Blocker | . | Chris Douglas | 
Chris Douglas |
+| [HADOOP-2902](https://issues.apache.org/jira/browse/HADOOP-2902) | replace 
accesss of "fs.default.name" with FileSystem accessor methods |  Major | fs | 
Doug Cutting | Doug Cutting |
+| [HADOOP-2895](https://issues.apache.org/jira/browse/HADOOP-2895) | String 
for configuring profiling should be customizable |  Major | . | Martin Traverso 
| Martin Traverso |
+| [HADOOP-2888](https://issues.apache.org/jira/browse/HADOOP-2888) | 
Enhancements to gridmix scripts |  Major | test | Mukund Madhugiri | Mukund 
Madhugiri |
+| [HADOOP-2886](https://issues.apache.org/jira/browse/HADOOP-2886) | Track 
individual RPC metrics. |  Major | metrics | girish vaitheeswaran | dhruba 
borthakur |
+| [HADOOP-2841](https://issues.apache.org/jira/browse/HADOOP-2841) | Dfs 
methods should not throw RemoteException |  Major | . | Hairong Kuang | 
Konstantin Shvachko |
+| [HADOOP-2810](https://issues.apache.org/jira/browse/HADOOP-2810) | Need new 
Hadoop Core logo |  Minor | documentation | Nigel Daley | Nigel Daley |
+| [HADOOP-2804](https://issues.apache.org/jira/browse/HADOOP-2804) | 
Formatable changes log as html |  Minor | documentation | Nigel Daley | Nigel 
Daley |
+| [HADOOP-2796](https://issues.apache.org/jira/browse/HADOOP-2796) | For 
script option hod should exit with distinguishable exit codes for script code 
and hod exit code. |  Major | contrib/hod | Karam Singh | Hemanth Yamijala |
+| [HADOOP-2758](https://issues.apache.org/jira/browse/HADOOP-2758) | Reduce 
memory copies when data is read from DFS |  Major | . | Raghu Angadi | Raghu 
Angadi |
+| [HADOOP-2690](https://issues.apache.org/jira/browse/HADOOP-2690) | Adding 
support into build.xml to build a special hadoop jar file that has the 
MiniDFSCluster and MiniMRCluster classes among others necessary for building 
and running the unit tests of Pig on the local mini cluster |  Major | build | 
Xu Zhang | Enis Soztutar |
+| [HADOOP-2559](https://issues.apache.org/jira/browse/HADOOP-2559) | DFS 
should place one replica per rack |  Major | . | Runping Qi | Lohit Vijayarenu |
+| [HADOOP-2555](https://issues.apache.org/jira/browse/HADOOP-2555) | Refactor 
the HTable#get and HTable#getRow methods to avoid repetition of 
retry-on-failure logic |  Minor | . | Peter Dolan | Bryan Duxbury |
+| [HADOOP-2551](https://issues.apache.org/jira/browse/HADOOP-2551) | 
hadoop-env.sh needs finer granularity |  Blocker | scripts | Allen Wittenauer | 
Raghu Angadi |
+| [HADOOP-2473](https://issues.apache.org/jira/browse/HADOOP-2473) | EC2 
termination script should support termination by group |  Major | contrib/cloud 
| Tom White | Chris K Wensel |
+| [HADOOP-2423](https://issues.apache.org/jira/browse/HADOOP-2423) | The codes 
in FSDirectory.mkdirs(...) is inefficient. |  Major | . | Tsz Wo Nicholas Sze | 
Tsz Wo Nicholas Sze |
+| [HADOOP-2239](https://issues.apache.org/jira/browse/HADOOP-2239) | Security: 
 Need to be able to encrypt Hadoop socket connections |  Major | . | Allen 
Wittenauer | Chris Douglas |
+| [HADOOP-2148](https://issues.apache.org/jira/browse/HADOOP-2148) | 
Inefficient FSDataset.getBlockFile() |  Major | . | Konstantin Shvachko | 
Konstantin Shvachko |
+| [HADOOP-2057](https://issues.apache.org/jira/browse/HADOOP-2057) | streaming 
should optionally treat a non-zero exit status of a child process as a failed 
task |  Major | . | Rick Cox | Rick Cox |
+| [HADOOP-1677](https://issues.apache.org/jira/browse/HADOOP-1677) | improve 
semantics of the hadoop dfs command |  Minor | . | Nigel Daley | Mahadev konar |
+| [HADOOP-1622](https://issues.apache.org/jira/browse/HADOOP-1622) | Hadoop 
should provide a way to allow the user to specify jar file(s) the user job 
depends on |  Major | . | Runping Qi | Mahadev konar |
+| [HADOOP-1228](https://issues.apache.org/jira/browse/HADOOP-1228) | Eclipse 
project files |  Minor | build | Albert Strasheim | Tom White |
+| [HADOOP-910](https://issues.apache.org/jira/browse/HADOOP-910) | Reduces can 
do merges for the on-disk map output files in parallel with their copying |  
Major | . | Devaraj Das | Amar Kamat |
+| [HADOOP-730](https://issues.apache.org/jira/browse/HADOOP-730) | Local file 
system uses copy to implement rename |  Major | fs | Owen O'Malley | Chris 
Douglas |
+
+
+### BUG FIXES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-3701](https://issues.apache.org/jira/browse/HADOOP-3701) | Too many 
trash sockets and trash pipes opened |  Major | . | He Yongqiang |  |
+| [HADOOP-3382](https://issues.apache.org/jira/browse/HADOOP-3382) | Memory 
leak when files are not cleanly closed |  Blocker | . | Raghu Angadi | Raghu 
Angadi |
+| [HADOOP-3372](https://issues.apache.org/jira/browse/HADOOP-3372) | 
TestUlimit fails on LINUX |  Blocker | . | Lohit Vijayarenu | Arun C Murthy |
+| [HADOOP-3322](https://issues.apache.org/jira/browse/HADOOP-3322) | Hadoop 
rpc metrics do not get pushed to the MetricsRecord |  Blocker | metrics | 
girish vaitheeswaran | girish vaitheeswaran |
+| [HADOOP-3286](https://issues.apache.org/jira/browse/HADOOP-3286) | Gridmix 
jobs'  output dir names may collide |  Major | test | Runping Qi | Runping Qi |
+| [HADOOP-3285](https://issues.apache.org/jira/browse/HADOOP-3285) | map tasks 
with node local splits do not always read from local nodes |  Blocker | . | 
Runping Qi | Owen O'Malley |
+| [HADOOP-3279](https://issues.apache.org/jira/browse/HADOOP-3279) | 
TaskTracker should check for SUCCEEDED task status in addition to 
COMMIT\_PENDING status when it fails maps due to lost map outputs |  Blocker | 
. | Devaraj Das | Devaraj Das |
+| [HADOOP-3263](https://issues.apache.org/jira/browse/HADOOP-3263) | job 
history browser throws exception if job name or user name is null. |  Blocker | 
. | Amareshwari Sriramadasu | Arun C Murthy |
+| [HADOOP-3256](https://issues.apache.org/jira/browse/HADOOP-3256) | 
JobHistory file on HDFS should not use the 'job name' |  Blocker | . | Arun C 
Murthy | Arun C Murthy |
+| [HADOOP-3251](https://issues.apache.org/jira/browse/HADOOP-3251) | WARN 
message on command line when a hadoop jar command is executed |  Blocker | . | 
Mukund Madhugiri | Arun C Murthy |
+| [HADOOP-3247](https://issues.apache.org/jira/browse/HADOOP-3247) | gridmix 
scripts have a few bugs |  Major | test | Runping Qi | Runping Qi |
+| [HADOOP-3242](https://issues.apache.org/jira/browse/HADOOP-3242) | 
SequenceFileAsBinaryRecordReader seems always to read from the start of a file, 
not the start of the split. |  Major | . | Runping Qi | Chris Douglas |
+| [HADOOP-3237](https://issues.apache.org/jira/browse/HADOOP-3237) | Unit test 
failed on windows: TestDFSShell.testErrOutPut |  Blocker | . | Mukund Madhugiri 
| Mahadev konar |
+| [HADOOP-3229](https://issues.apache.org/jira/browse/HADOOP-3229) | Map 
OutputCollector does not report progress on writes |  Major | . | Alejandro 
Abdelnur | Doug Cutting |
+| [HADOOP-3225](https://issues.apache.org/jira/browse/HADOOP-3225) | FsShell 
showing null instead of a error message |  Blocker | . | Tsz Wo Nicholas Sze | 
Mahadev konar |
+| [HADOOP-3224](https://issues.apache.org/jira/browse/HADOOP-3224) | hadoop 
dfs -du /dirPath does not work with hadoop-0.17 branch |  Blocker | . | Runping 
Qi | Lohit Vijayarenu |
+| [HADOOP-3223](https://issues.apache.org/jira/browse/HADOOP-3223) | Hadoop 
dfs -help for permissions contains a typo |  Blocker | . | Milind Bhandarkar | 
Raghu Angadi |
+| [HADOOP-3220](https://issues.apache.org/jira/browse/HADOOP-3220) | Safemode 
log message need to be corrected. |  Major | . | Konstantin Shvachko | 
Konstantin Shvachko |
+| [HADOOP-3208](https://issues.apache.org/jira/browse/HADOOP-3208) | 
WritableDeserializer does not pass the Configuration to deserialized Writables 
|  Blocker | . | Enis Soztutar | Enis Soztutar |
+| [HADOOP-3204](https://issues.apache.org/jira/browse/HADOOP-3204) | 
LocalFSMerger needs to catch throwable |  Blocker | . | Koji Noguchi | Amar 
Kamat |
+| [HADOOP-3183](https://issues.apache.org/jira/browse/HADOOP-3183) | Unit test 
fails on Windows: TestJobShell.testJobShell |  Blocker | . | Mukund Madhugiri | 
Mahadev konar |
+| [HADOOP-3178](https://issues.apache.org/jira/browse/HADOOP-3178) | gridmix 
scripts for small and medium jobs need to be changed to handle input paths 
differently |  Blocker | test | Mukund Madhugiri | Mukund Madhugiri |
+| [HADOOP-3175](https://issues.apache.org/jira/browse/HADOOP-3175) | "-get 
file -" does not work |  Blocker | fs | Raghu Angadi | Edward J. Yoon |
+| [HADOOP-3168](https://issues.apache.org/jira/browse/HADOOP-3168) | reduce 
amount of logging in hadoop streaming |  Major | . | Joydeep Sen Sarma | Zheng 
Shao |
+| [HADOOP-3166](https://issues.apache.org/jira/browse/HADOOP-3166) | 
SpillThread throws ArrayIndexOutOfBoundsException, which is ignored by MapTask 
|  Blocker | . | Chris Douglas | Chris Douglas |
+| [HADOOP-3165](https://issues.apache.org/jira/browse/HADOOP-3165) | FsShell 
no longer accepts stdin as a source for -put/-copyFromLocal |  Blocker | . | 
Chris Douglas | Lohit Vijayarenu |
+| [HADOOP-3162](https://issues.apache.org/jira/browse/HADOOP-3162) | 
Map/reduce stops working with comma separated input paths |  Blocker | . | 
Runping Qi | Amareshwari Sriramadasu |
+| [HADOOP-3161](https://issues.apache.org/jira/browse/HADOOP-3161) | 
TestFileAppend fails on Mac since HADOOP-2655 was committed |  Minor | test | 
Nigel Daley | Nigel Daley |
+| [HADOOP-3157](https://issues.apache.org/jira/browse/HADOOP-3157) | 
TestMiniMRLocalFS fails in trunk on Windows |  Blocker | test | Lohit 
Vijayarenu | Doug Cutting |
+| [HADOOP-3153](https://issues.apache.org/jira/browse/HADOOP-3153) | [HOD] Hod 
should deallocate cluster if there's a problem in writing information to the 
state file |  Major | contrib/hod | Hemanth Yamijala | Vinod Kumar Vavilapalli |
+| [HADOOP-3146](https://issues.apache.org/jira/browse/HADOOP-3146) | 
DFSOutputStream.flush should be renamed as DFSOutputStream.fsync |  Blocker | . 
| Runping Qi | dhruba borthakur |
+| [HADOOP-3140](https://issues.apache.org/jira/browse/HADOOP-3140) | 
JobTracker should not try to promote a (map) task if it does not write to DFS 
at all |  Major | . | Runping Qi | Amar Kamat |
+| [HADOOP-3124](https://issues.apache.org/jira/browse/HADOOP-3124) | DFS data 
node should not use hard coded 10 minutes as write timeout. |  Major | . | 
Runping Qi | Raghu Angadi |
+| [HADOOP-3118](https://issues.apache.org/jira/browse/HADOOP-3118) | Namenode 
NPE while loading fsimage after a cluster upgrade from older disk format |  
Blocker | . | dhruba borthakur | dhruba borthakur |
+| [HADOOP-3114](https://issues.apache.org/jira/browse/HADOOP-3114) | 
TestDFSShell fails on Windows. |  Major | fs | Konstantin Shvachko | Lohit 
Vijayarenu |
+| [HADOOP-3106](https://issues.apache.org/jira/browse/HADOOP-3106) | Update 
documentation in mapred\_tutorial to add Debugging |  Major | documentation | 
Amareshwari Sriramadasu | Amareshwari Sriramadasu |
+| [HADOOP-3094](https://issues.apache.org/jira/browse/HADOOP-3094) | 
BytesWritable.toString prints bytes above 0x80 as FFFFFF80 |  Major | io | Owen 
O'Malley | Owen O'Malley |
+| [HADOOP-3093](https://issues.apache.org/jira/browse/HADOOP-3093) | ma/reduce 
throws the following exception if "io.serializations" is not set: |  Major | . 
| Runping Qi | Amareshwari Sriramadasu |
+| [HADOOP-3089](https://issues.apache.org/jira/browse/HADOOP-3089) | streaming 
should accept stderr from task before first key arrives |  Major | . | Rick Cox 
| Rick Cox |
+| [HADOOP-3087](https://issues.apache.org/jira/browse/HADOOP-3087) | JobInfo 
session object is not refreshed in loadHistory.jsp  if same job is accessed 
again. |  Major | . | Amareshwari Sriramadasu | Amareshwari Sriramadasu |
+| [HADOOP-3086](https://issues.apache.org/jira/browse/HADOOP-3086) | Test case 
was missed in commit of HADOOP-3040 |  Major | . | Amareshwari Sriramadasu | 
Amareshwari Sriramadasu |
+| [HADOOP-3083](https://issues.apache.org/jira/browse/HADOOP-3083) | Remove 
lease when file is renamed |  Blocker | . | dhruba borthakur | dhruba borthakur 
|
+| [HADOOP-3080](https://issues.apache.org/jira/browse/HADOOP-3080) | Remove 
flush calls from JobHistory |  Blocker | . | Devaraj Das | Amareshwari 
Sriramadasu |
+| [HADOOP-3073](https://issues.apache.org/jira/browse/HADOOP-3073) | 
SocketOutputStream.close() should close the channel. |  Blocker | ipc | Raghu 
Angadi | Raghu Angadi |
+| [HADOOP-3067](https://issues.apache.org/jira/browse/HADOOP-3067) | 
DFSInputStream 'pread' does not close its sockets |  Blocker | . | Raghu Angadi 
| Raghu Angadi |
+| [HADOOP-3066](https://issues.apache.org/jira/browse/HADOOP-3066) | Should 
not require superuser privilege to query if hdfs is in safe mode |  Major | . | 
Jim Kellerman | Jim Kellerman |
+| [HADOOP-3065](https://issues.apache.org/jira/browse/HADOOP-3065) | Namenode 
does not process block report if the rack-location script is not provided on 
namenode |  Blocker | . | dhruba borthakur | Devaraj Das |
+| [HADOOP-3064](https://issues.apache.org/jira/browse/HADOOP-3064) | Exception 
with file globbing closures |  Major | . | Tom White | Hairong Kuang |
+| [HADOOP-3050](https://issues.apache.org/jira/browse/HADOOP-3050) | Cluster 
fall into infinite loop trying to replicate a block to a target that aready has 
this replica. |  Blocker | . | Konstantin Shvachko | Hairong Kuang |
+| [HADOOP-3044](https://issues.apache.org/jira/browse/HADOOP-3044) | NNBench 
does not use the right configuration for the mapper |  Major | test | Hairong 
Kuang | Hairong Kuang |
+| [HADOOP-3041](https://issues.apache.org/jira/browse/HADOOP-3041) | Within a 
task, the value ofJobConf.getOutputPath() method is modified |  Blocker | . | 
Alejandro Abdelnur | Amareshwari Sriramadasu |
+| [HADOOP-3040](https://issues.apache.org/jira/browse/HADOOP-3040) | Streaming 
should assume an empty key if the first character on a line is the seperator 
(stream.map.output.field.separator, by default, tab) |  Major | . | Amareshwari 
Sriramadasu | Amareshwari Sriramadasu |
+| [HADOOP-3036](https://issues.apache.org/jira/browse/HADOOP-3036) | Fix 
findBugs warnings in UpgradeUtilities. |  Major | test | Konstantin Shvachko | 
Konstantin Shvachko |
+| [HADOOP-3031](https://issues.apache.org/jira/browse/HADOOP-3031) | Remove 
compiler warnings for ant test |  Minor | . | Amareshwari Sriramadasu | Chris 
Douglas |
+| [HADOOP-3030](https://issues.apache.org/jira/browse/HADOOP-3030) | 
InMemoryFileSystem.reserveSpaceWithChecksum does not look at failures while 
reserving space for the file in question |  Major | fs | Devaraj Das | Devaraj 
Das |
+| [HADOOP-3029](https://issues.apache.org/jira/browse/HADOOP-3029) | 
Misleading log message "firstbadlink" printed by datanodes |  Major | . | 
dhruba borthakur | dhruba borthakur |
+| [HADOOP-3025](https://issues.apache.org/jira/browse/HADOOP-3025) | 
ChecksumFileSystem needs to support the new delete method |  Blocker | fs | 
Devaraj Das | Mahadev konar |
+| [HADOOP-3018](https://issues.apache.org/jira/browse/HADOOP-3018) | Eclipse 
plugin fails to compile due to missing RPC.stopClient() method |  Blocker | 
contrib/eclipse-plugin | Tom White | Christophe Taton |
+| [HADOOP-3012](https://issues.apache.org/jira/browse/HADOOP-3012) | dfs -mv 
file to user home directory fails silently if the user home directory does not 
exist |  Blocker | fs | Mukund Madhugiri | Mahadev konar |
+| [HADOOP-3009](https://issues.apache.org/jira/browse/HADOOP-3009) | 
TestFileCreation fails while restarting cluster |  Major | . | dhruba borthakur 
| dhruba borthakur |
+| [HADOOP-3008](https://issues.apache.org/jira/browse/HADOOP-3008) | 
SocketIOWithTimeout does not handle thread interruption |  Major | . | Raghu 
Angadi | Raghu Angadi |
+| [HADOOP-3006](https://issues.apache.org/jira/browse/HADOOP-3006) | DataNode 
sends wrong length in header while pipelining. |  Major | . | Raghu Angadi | 
Raghu Angadi |
+| [HADOOP-2995](https://issues.apache.org/jira/browse/HADOOP-2995) | 
StreamBaseRecordReader's getProgress returns just 0 or 1 |  Minor | . | Dave 
Brosius | Dave Brosius |
+| [HADOOP-2992](https://issues.apache.org/jira/browse/HADOOP-2992) | 
Sequential distributed upgrades. |  Major | test | Konstantin Shvachko | 
Konstantin Shvachko |
+| [HADOOP-2983](https://issues.apache.org/jira/browse/HADOOP-2983) | [HOD] 
local\_fqdn() returns None when gethostbyname\_ex doesnt return any FQDNs. |  
Blocker | contrib/hod | Craig Macdonald | Hemanth Yamijala |
+| [HADOOP-2982](https://issues.apache.org/jira/browse/HADOOP-2982) | [HOD] 
checknodes should look for free nodes without the jobs attribute |  Blocker | 
contrib/hod | Hemanth Yamijala | Hemanth Yamijala |
+| [HADOOP-2976](https://issues.apache.org/jira/browse/HADOOP-2976) | Blocks 
staying underreplicated (for unclosed file) |  Minor | . | Koji Noguchi | 
dhruba borthakur |
+| [HADOOP-2974](https://issues.apache.org/jira/browse/HADOOP-2974) | ipc unit 
tests fail due to connection errors |  Blocker | ipc | Mukund Madhugiri | Raghu 
Angadi |
+| [HADOOP-2973](https://issues.apache.org/jira/browse/HADOOP-2973) | Unit test 
fails on Windows: org.apache.hadoop.dfs.TestLocalDFS.testWorkingDirectory |  
Blocker | . | Mukund Madhugiri | Tsz Wo Nicholas Sze |
+| [HADOOP-2972](https://issues.apache.org/jira/browse/HADOOP-2972) | 
org.apache.hadoop.dfs.TestDFSShell.testErrOutPut fails on Windows with 
NullPointerException |  Blocker | . | Mukund Madhugiri | Mahadev konar |
+| [HADOOP-2971](https://issues.apache.org/jira/browse/HADOOP-2971) | 
SocketTimeoutException in unit tests |  Major | io | Raghu Angadi | Raghu 
Angadi |
+| [HADOOP-2970](https://issues.apache.org/jira/browse/HADOOP-2970) | Wrong 
class definition for hodlib/Hod/hod.py for Python \< 2.5.1 |  Major | 
contrib/hod | Luca Telloli | Vinod Kumar Vavilapalli |
+| [HADOOP-2955](https://issues.apache.org/jira/browse/HADOOP-2955) | ant test 
fail for TestCrcCorruption with OutofMemory. |  Blocker | . | Mahadev konar | 
Raghu Angadi |
+| [HADOOP-2943](https://issues.apache.org/jira/browse/HADOOP-2943) | 
Compression for intermediate map output is broken |  Major | . | Chris Douglas 
| Chris Douglas |
+| [HADOOP-2938](https://issues.apache.org/jira/browse/HADOOP-2938) | some of 
the fs commands don't globPaths. |  Major | fs | Raghu Angadi | Tsz Wo Nicholas 
Sze |
+| [HADOOP-2936](https://issues.apache.org/jira/browse/HADOOP-2936) | HOD 
should generate hdfs://host:port on the client side configs. |  Major | 
contrib/hod | Mahadev konar | Vinod Kumar Vavilapalli |
+| [HADOOP-2934](https://issues.apache.org/jira/browse/HADOOP-2934) | NPE while 
loading  FSImage |  Major | . | Raghu Angadi | dhruba borthakur |
+| [HADOOP-2932](https://issues.apache.org/jira/browse/HADOOP-2932) | Trash 
initialization generates "deprecated filesystem name" warning even if the name 
is correct. |  Blocker | conf, fs | Konstantin Shvachko | Mahadev konar |
+| [HADOOP-2927](https://issues.apache.org/jira/browse/HADOOP-2927) | Unit test 
fails on Windows: org.apache.hadoop.fs.TestDU.testDU |  Blocker | fs | Mukund 
Madhugiri | Konstantin Shvachko |
+| [HADOOP-2924](https://issues.apache.org/jira/browse/HADOOP-2924) | HOD is 
trying to bring up task tracker on  port which is already in close\_wait state 
|  Critical | contrib/hod | Aroop Maliakkal | Vinod Kumar Vavilapalli |
+| [HADOOP-2912](https://issues.apache.org/jira/browse/HADOOP-2912) | Unit test 
fails: org.apache.hadoop.dfs.TestFsck.testFsck. This is a regression |  Blocker 
| . | Mukund Madhugiri | Mahadev konar |
+| [HADOOP-2908](https://issues.apache.org/jira/browse/HADOOP-2908) | forrest 
docs for dfs shell commands and semantics. |  Major | documentation | Mahadev 
konar | Mahadev konar |
+| [HADOOP-2901](https://issues.apache.org/jira/browse/HADOOP-2901) | the job 
tracker should not start 2 info servers |  Blocker | . | Owen O'Malley | 
Amareshwari Sriramadasu |
+| [HADOOP-2899](https://issues.apache.org/jira/browse/HADOOP-2899) | [HOD] 
hdfs:///mapredsystem directory not cleaned up after deallocation |  Major | 
contrib/hod | Luca Telloli | Hemanth Yamijala |
+| [HADOOP-2891](https://issues.apache.org/jira/browse/HADOOP-2891) | The 
dfsclient on exit deletes files that are open and not closed. |  Major | . | 
Mahadev konar | dhruba borthakur |
+| [HADOOP-2890](https://issues.apache.org/jira/browse/HADOOP-2890) | HDFS 
should recover when  replicas of block have different sizes (due to corrupted 
block) |  Major | . | Lohit Vijayarenu | dhruba borthakur |
+| [HADOOP-2871](https://issues.apache.org/jira/browse/HADOOP-2871) | Unit 
tests (16) fail on Windows due to java.lang.IllegalArgumentException causing 
MiniMRCluster to not start up |  Blocker | . | Mukund Madhugiri | Amareshwari 
Sriramadasu |
+| [HADOOP-2870](https://issues.apache.org/jira/browse/HADOOP-2870) | 
Datanode.shutdown() and Namenode.stop() should close all rpc connections |  
Major | ipc | Hairong Kuang | Hairong Kuang |
+| [HADOOP-2863](https://issues.apache.org/jira/browse/HADOOP-2863) | 
FSDataOutputStream should not flush() inside close(). |  Major | fs | Raghu 
Angadi | Raghu Angadi |
+| [HADOOP-2855](https://issues.apache.org/jira/browse/HADOOP-2855) | [HOD] HOD 
fails to allocate a cluster if the tarball specified is a relative path |  
Blocker | contrib/hod | Hemanth Yamijala | Vinod Kumar Vavilapalli |
+| [HADOOP-2848](https://issues.apache.org/jira/browse/HADOOP-2848) | [HOD] If 
a cluster directory is deleted, hod -o list must show it, and deallocate should 
work. |  Major | contrib/hod | Hemanth Yamijala | Hemanth Yamijala |
+| [HADOOP-2845](https://issues.apache.org/jira/browse/HADOOP-2845) | dfsadmin 
disk utilization report on Solaris is wrong |  Major | fs | Martin Traverso | 
Martin Traverso |
+| [HADOOP-2844](https://issues.apache.org/jira/browse/HADOOP-2844) | A 
SequenceFile.Reader object is not closed properly in CopyFiles |  Major | util 
| Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
+| [HADOOP-2832](https://issues.apache.org/jira/browse/HADOOP-2832) | bad code 
indentation in DFSClient |  Major | . | dhruba borthakur | dhruba borthakur |
+| [HADOOP-2817](https://issues.apache.org/jira/browse/HADOOP-2817) | Remove 
deprecated mapred.tasktracker.tasks.maximum and clusterStatus.getMaxTasks() |  
Major | . | Amareshwari Sriramadasu | Amareshwari Sriramadasu |
+| [HADOOP-2806](https://issues.apache.org/jira/browse/HADOOP-2806) | Streaming 
has no way to force entire record (or null) as key |  Minor | . | Marco Nicosia 
| Amareshwari Sriramadasu |
+| [HADOOP-2800](https://issues.apache.org/jira/browse/HADOOP-2800) | 
SetFile.Writer deprecated by mistake? |  Trivial | io | Johan Oskarsson | Johan 
Oskarsson |
+| [HADOOP-2790](https://issues.apache.org/jira/browse/HADOOP-2790) | 
TaskInProgress.hasSpeculativeTask is very inefficient |  Major | . | Owen 
O'Malley | Owen O'Malley |
+| [HADOOP-2783](https://issues.apache.org/jira/browse/HADOOP-2783) | 
hod/hodlib/Common/xmlrpc.py uses HodInterruptException without importing it |  
Minor | contrib/hod | Vinod Kumar Vavilapalli | Vinod Kumar Vavilapalli |
+| [HADOOP-2779](https://issues.apache.org/jira/browse/HADOOP-2779) | build 
scripts broken by moving hbase to subproject |  Major | build | Owen O'Malley | 
Owen O'Malley |
+| [HADOOP-2767](https://issues.apache.org/jira/browse/HADOOP-2767) | 
org.apache.hadoop.net.NetworkTopology.InnerNode#getLeaf does not return the 
last node on a rack when used with an excluded node |  Minor | . | Mark Butler 
| Hairong Kuang |
+| [HADOOP-2738](https://issues.apache.org/jira/browse/HADOOP-2738) | Text is 
not subclassable because set(Text) and compareTo(Object) access the other 
instance's private members directly |  Minor | io | Jim Kellerman | Jim 
Kellerman |
+| [HADOOP-2727](https://issues.apache.org/jira/browse/HADOOP-2727) | Web UI 
links to Hadoop homepage has to change to new hadoop homepage |  Blocker | . | 
Amareshwari Sriramadasu | Amareshwari Sriramadasu |
+| [HADOOP-2679](https://issues.apache.org/jira/browse/HADOOP-2679) | There is 
a small typeo in hdfs\_test.c when testing the success of the local hadoop 
initialization |  Trivial | . | Jason | dhruba borthakur |
+| [HADOOP-2655](https://issues.apache.org/jira/browse/HADOOP-2655) | Copy on 
write for data and metadata files in the presence of snapshots |  Major | . | 
dhruba borthakur | dhruba borthakur |
+| [HADOOP-2606](https://issues.apache.org/jira/browse/HADOOP-2606) | Namenode 
unstable when replicating 500k blocks at once |  Major | . | Koji Noguchi | 
Konstantin Shvachko |
+| [HADOOP-2373](https://issues.apache.org/jira/browse/HADOOP-2373) | Name node 
silently changes state |  Major | . | Robert Chansler | Konstantin Shvachko |
+| [HADOOP-2346](https://issues.apache.org/jira/browse/HADOOP-2346) | DataNode 
should have timeout on socket writes. |  Major | . | Raghu Angadi | Raghu 
Angadi |
+| [HADOOP-2195](https://issues.apache.org/jira/browse/HADOOP-2195) | dfs mkdir 
command differs from POSIX standards |  Major | . | Mukund Madhugiri | Mahadev 
konar |
+| [HADOOP-2194](https://issues.apache.org/jira/browse/HADOOP-2194) | dfs cat 
on a file that does not exist throws a java IOException |  Major | . | Mukund 
Madhugiri | Mahadev konar |
+| [HADOOP-2193](https://issues.apache.org/jira/browse/HADOOP-2193) | dfs rm 
and rmr commands differ from POSIX standards |  Major | . | Mukund Madhugiri | 
Mahadev konar |
+| [HADOOP-2191](https://issues.apache.org/jira/browse/HADOOP-2191) | dfs du 
and dus commands differ from POSIX standards |  Major | . | Mukund Madhugiri | 
Mahadev konar |
+| [HADOOP-2190](https://issues.apache.org/jira/browse/HADOOP-2190) | dfs ls 
and lsr commands differ from POSIX standards |  Major | . | Mukund Madhugiri | 
Mahadev konar |
+| [HADOOP-2119](https://issues.apache.org/jira/browse/HADOOP-2119) | 
JobTracker becomes non-responsive if the task trackers finish task too fast |  
Critical | . | Runping Qi | Amar Kamat |
+| [HADOOP-1967](https://issues.apache.org/jira/browse/HADOOP-1967) | hadoop 
dfs -ls, -get, -mv command's source/destination URI are inconsistent |  Major | 
. | Lohit Vijayarenu | Doug Cutting |
+| [HADOOP-1911](https://issues.apache.org/jira/browse/HADOOP-1911) | infinite 
loop in dfs -cat command. |  Blocker | . | Koji Noguchi | Chris Douglas |
+| [HADOOP-1902](https://issues.apache.org/jira/browse/HADOOP-1902) | du 
command throws an exception when the directory is not specified |  Major | . | 
Mukund Madhugiri | Mahadev konar |
+| [HADOOP-1373](https://issues.apache.org/jira/browse/HADOOP-1373) | 
checkPath() throws IllegalArgumentException |  Blocker | fs | Konstantin 
Shvachko | Edward J. Yoon |
+
+
+### TESTS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-2997](https://issues.apache.org/jira/browse/HADOOP-2997) | Add test 
for non-writable serializer |  Blocker | . | Tom White | Tom White |
+| [HADOOP-2775](https://issues.apache.org/jira/browse/HADOOP-2775) | [HOD] Put 
in place unit test framework for HOD |  Major | contrib/hod | Hemanth Yamijala 
| Vinod Kumar Vavilapalli |
+
+
+### SUB-TASKS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+
+
+### OTHER:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-2981](https://issues.apache.org/jira/browse/HADOOP-2981) | Follow 
Apache process for getting ready to put crypto code in to project |  Major | . 
| Owen O'Malley | Owen O'Malley |
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d759b4bd/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/RELEASENOTES.0.17.0.md
----------------------------------------------------------------------
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/RELEASENOTES.0.17.0.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/RELEASENOTES.0.17.0.md
new file mode 100644
index 0000000..467f2ac
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.0/RELEASENOTES.0.17.0.md
@@ -0,0 +1,604 @@
+
+<!---
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+-->
+# Apache Hadoop  0.17.0 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, 
features, and major improvements.
+
+
+---
+
+* [HADOOP-3382](https://issues.apache.org/jira/browse/HADOOP-3382) | *Blocker* 
| **Memory leak when files are not cleanly closed**
+
+Fixed a memory leak associated with 'abandoned' files (i.e. files that were not cleanly closed). These held up significant amounts of memory, depending on activity and on how long the NameNode had been running.
+
+
+---
+
+* [HADOOP-3280](https://issues.apache.org/jira/browse/HADOOP-3280) | *Blocker* 
| **virtual address space limits break streaming apps**
+
+This patch adds mapred.child.ulimit, which limits the virtual memory of child processes to the given value.
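+
+A hedged configuration sketch: the property name comes from this note, while the job class and the limit value below are hypothetical (the value is the maximum virtual memory, commonly documented in kilobytes):
+
+```java
+import org.apache.hadoop.mapred.JobConf;
+
+public class UlimitExample {
+  public static void main(String[] args) {
+    JobConf conf = new JobConf(UlimitExample.class);
+    // Hypothetical limit for child (streaming/pipes) processes,
+    // expressed as maximum virtual memory in KB.
+    conf.set("mapred.child.ulimit", "2097152");
+  }
+}
+```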
+
+
+---
+
+* [HADOOP-3266](https://issues.apache.org/jira/browse/HADOOP-3266) | *Major* | 
**Remove HOD changes from CHANGES.txt, as they are now inside src/contrib/hod**
+
+Moved HOD change items from CHANGES.txt to a new file 
src/contrib/hod/CHANGES.txt.
+
+
+---
+
+* [HADOOP-3239](https://issues.apache.org/jira/browse/HADOOP-3239) | *Major* | 
**exists() calls logs FileNotFoundException in namenode log**
+
+getFileInfo() now returns null when a file is not found, instead of throwing FileNotFoundException.
+
+
+---
+
+* [HADOOP-3223](https://issues.apache.org/jira/browse/HADOOP-3223) | *Blocker* 
| **Hadoop dfs -help for permissions contains a typo**
+
+Minor typo fix in the help message for chmod. Impact: none.
+
+
+---
+
+* [HADOOP-3204](https://issues.apache.org/jira/browse/HADOOP-3204) | *Blocker* 
| **LocalFSMerger needs to catch throwable**
+
+Fixes LocalFSMerger in ReduceTask.java to handle errors and exceptions better. Previously, all exceptions except IOException were silently ignored.
+
+
+---
+
+* [HADOOP-3168](https://issues.apache.org/jira/browse/HADOOP-3168) | *Major* | 
**reduce amount of logging in hadoop streaming**
+
+Decreases the frequency of logging from streaming from every 100 records to 
every 10,000 records.
+
+
+---
+
+* [HADOOP-3162](https://issues.apache.org/jira/browse/HADOOP-3162) | *Blocker* 
| **Map/reduce stops working with comma separated input paths**
+
+The public methods org.apache.hadoop.mapred.JobConf.setInputPath(Path) and org.apache.hadoop.mapred.JobConf.addInputPath(Path) are deprecated; they retain the semantics of branch 0.16.
+The following public APIs are added in org.apache.hadoop.mapred.FileInputFormat:
+public static void setInputPaths(JobConf job, Path... paths);
+public static void setInputPaths(JobConf job, String commaSeparatedPaths);
+public static void addInputPath(JobConf job, Path path);
+public static void addInputPaths(JobConf job, String commaSeparatedPaths);
+Code that called JobConf.setInputPath(Path) or JobConf.addInputPath(Path) should now call FileInputFormat.setInputPaths(JobConf, Path...) or FileInputFormat.addInputPath(JobConf, Path) respectively, as sketched below.
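+
+A minimal migration sketch using the signatures listed above; the job class and input paths are illustrative assumptions:
+
+```java
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapred.FileInputFormat;
+import org.apache.hadoop.mapred.JobConf;
+
+public class InputPathsExample {
+  public static void main(String[] args) {
+    JobConf conf = new JobConf(InputPathsExample.class);
+    // 0.16 style (now deprecated): conf.setInputPath(new Path("/data/in1"));
+    FileInputFormat.setInputPaths(conf, new Path("/data/in1"), new Path("/data/in2"));
+    FileInputFormat.addInputPath(conf, new Path("/data/in3"));
+    // The comma-separated form is also available:
+    FileInputFormat.addInputPaths(conf, "/data/in4,/data/in5");
+  }
+}
+```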
+
+
+---
+
+* [HADOOP-3152](https://issues.apache.org/jira/browse/HADOOP-3152) | *Minor* | 
**Make index interval configuable when using MapFileOutputFormat for map-reduce 
job**
+
+Add a static method MapFile#setIndexInterval(Configuration, int interval) so 
that MapReduce jobs that use MapFileOutputFormat can set the index interval.
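+
+A sketch following the signature quoted in this note; the interval of 128 and the output-format wiring are illustrative assumptions, not part of the original note:
+
+```java
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.MapFile;
+import org.apache.hadoop.mapred.JobConf;
+import org.apache.hadoop.mapred.MapFileOutputFormat;
+
+public class IndexIntervalExample {
+  public static void main(String[] args) {
+    Configuration conf = new Configuration();
+    // Index every 128th key instead of the default (interval value is hypothetical).
+    MapFile.setIndexInterval(conf, 128);
+    JobConf job = new JobConf(conf, IndexIntervalExample.class);
+    job.setOutputFormat(MapFileOutputFormat.class);
+  }
+}
+```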
+
+
+---
+
+* [HADOOP-3140](https://issues.apache.org/jira/browse/HADOOP-3140) | *Major* | 
**JobTracker should not try to promote a (map) task if it does not write to DFS 
at all**
+
+Tasks that don't generate any output are not inserted in the commit queue of 
the JobTracker. They are marked as SUCCESSFUL by the TaskTracker and the 
JobTracker updates their state short-circuiting the commit queue.
+
+
+---
+
+* [HADOOP-3137](https://issues.apache.org/jira/browse/HADOOP-3137) | *Major* | 
**[HOD] Update hod version number**
+
+Build script was changed to make HOD versions follow Hadoop version numbers. 
As a result of this change, the next version of HOD would not be 0.5, but would 
be synchronized to the Hadoop version number. Users who rely on the version 
number of HOD should note the unexpected jump in version numbers.
+
+
+---
+
+* [HADOOP-3124](https://issues.apache.org/jira/browse/HADOOP-3124) | *Major* | 
**DFS data node should not use hard coded 10 minutes as write timeout.**
+
+Makes the DataNode socket write timeout configurable. User impact: none.
+
+
+---
+
+* [HADOOP-3099](https://issues.apache.org/jira/browse/HADOOP-3099) | *Blocker* 
| **Need new options in distcp for preserving ower, group and permission**
+
+Added a new option -p to distcp for preserving file/directory status.
+-p[rbugp]              Preserve status
+                       r: replication number
+                       b: block size
+                       u: user
+                       g: group
+                       p: permission
+                       -p alone is equivalent to -prbugp
+
+
+---
+
+* [HADOOP-3093](https://issues.apache.org/jira/browse/HADOOP-3093) | *Major* | 
**ma/reduce throws the following exception if "io.serializations" is not set:**
+
+The following public APIs are added in org.apache.hadoop.conf.Configuration:
+String[] Configuration.getStrings(String name, String... defaultValue)
+void Configuration.setStrings(String name, String... values)
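+
+A small sketch of the new varargs accessors; the key matches the issue title, and the serializer class used as the default value is an illustrative assumption:
+
+```java
+import org.apache.hadoop.conf.Configuration;
+
+public class StringsExample {
+  public static void main(String[] args) {
+    Configuration conf = new Configuration();
+    conf.setStrings("io.serializations",
+        "org.apache.hadoop.io.serializer.WritableSerialization");
+    // Returns the configured values, or the supplied defaults if the key is unset.
+    String[] serializations = conf.getStrings("io.serializations",
+        "org.apache.hadoop.io.serializer.WritableSerialization");
+    for (String s : serializations) {
+      System.out.println(s);
+    }
+  }
+}
+```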
+
+
+---
+
+* [HADOOP-3091](https://issues.apache.org/jira/browse/HADOOP-3091) | *Major* | 
**hadoop dfs -put should support multiple src**
+
+hadoop dfs -put now accepts multiple sources when the destination is a directory.
+
+
+---
+
+* [HADOOP-3073](https://issues.apache.org/jira/browse/HADOOP-3073) | *Blocker* 
| **SocketOutputStream.close() should close the channel.**
+
+SocketOutputStream.close() now closes the underlying channel. This increases compatibility with java.net.Socket.getOutputStream(). User impact: none.
+
+
+---
+
+* [HADOOP-3060](https://issues.apache.org/jira/browse/HADOOP-3060) | *Major* | 
**MiniMRCluster is ignoring parameter taskTrackerFirst**
+
+The parameter boolean taskTrackerFirst is removed from 
org.apache.hadoop.mapred.MiniMRCluster constructors.
+Thus signature of following APIs
+  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, boolean taskTrackerFirst, int numDir)
+  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, boolean taskTrackerFirst, int numDir, 
String[] racks)
+  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, boolean taskTrackerFirst, int numDir, 
String[] racks, String[] hosts)
+  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, boolean taskTrackerFirst, int numDir, 
String[] racks, String[] hosts, UnixUserGroupInformation ugi )
+is changed to
+  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, int numDir)
+  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, int numDir, String[] racks)
+  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, int numDir, String[] racks, String[] hosts)
+  public MiniMRCluster(int jobTrackerPort, int taskTrackerPort, int 
numTaskTrackers, String namenode, int numDir, String[] racks, String[] hosts, 
UnixUserGroupInformation ugi )
+respectively.
+Since the old signatures were not deprecated, any code using the old 
constructors must be changed to use the new constructors.
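+
+A migration sketch for test code based on the signatures listed above; the port numbers, tracker count, and namenode URI are hypothetical:
+
+```java
+import org.apache.hadoop.mapred.MiniMRCluster;
+
+public class MiniMRClusterExample {
+  public static void main(String[] args) throws Exception {
+    String namenode = "hdfs://localhost:9000";  // hypothetical
+    // 0.16 style: new MiniMRCluster(50030, 50060, 4, namenode, true, 3);
+    MiniMRCluster mr = new MiniMRCluster(50030, 50060, 4, namenode, 3);
+    mr.shutdown();
+  }
+}
+```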
+
+
+---
+
+* [HADOOP-3048](https://issues.apache.org/jira/browse/HADOOP-3048) | *Blocker* 
| **Stringifier**
+
+A new interface and a default implementation for converting objects to strings and restoring them from strings.
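+
+The note does not name the new types; the sketch below assumes the default implementation is org.apache.hadoop.io.DefaultStringifier and that the stored object is a Writable. Treat the class and method names as assumptions:
+
+```java
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.io.DefaultStringifier;
+import org.apache.hadoop.io.Text;
+
+public class StringifierExample {
+  public static void main(String[] args) throws Exception {
+    Configuration conf = new Configuration();
+    // Assumed helpers: serialize an object into a configuration string and restore it.
+    DefaultStringifier.store(conf, new Text("hello"), "example.key");
+    Text restored = DefaultStringifier.load(conf, "example.key", Text.class);
+    System.out.println(restored);
+  }
+}
+```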
+
+
+---
+
+* [HADOOP-3041](https://issues.apache.org/jira/browse/HADOOP-3041) | *Blocker* 
| **Within a task, the value ofJobConf.getOutputPath() method is modified**
+
+1. Deprecates JobConf.setOutputPath and JobConf.getOutputPath. JobConf.getOutputPath() still returns the same value that it used to return.
+2. Deprecates OutputFormatBase and adds FileOutputFormat. Existing output formats extending OutputFormatBase now extend FileOutputFormat.
+3. Adds the following APIs in FileOutputFormat:
+public static void setOutputPath(JobConf conf, Path outputDir); // sets mapred.output.dir
+public static Path getOutputPath(JobConf conf); // gets mapred.output.dir
+public static Path getWorkOutputPath(JobConf conf); // gets mapred.work.output.dir
+4. static void setWorkOutputPath(JobConf conf, Path outputDir) is also added to FileOutputFormat. The framework uses it to set mapred.work.output.dir as the task's temporary output directory.
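+
+A migration sketch using the signatures listed above; the output directory is a hypothetical example:
+
+```java
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.mapred.FileOutputFormat;
+import org.apache.hadoop.mapred.JobConf;
+
+public class OutputPathExample {
+  public static void main(String[] args) throws Exception {
+    JobConf conf = new JobConf(OutputPathExample.class);
+    // 0.16 style (now deprecated): conf.setOutputPath(new Path("/data/out"));
+    FileOutputFormat.setOutputPath(conf, new Path("/data/out"));
+    Path outputDir = FileOutputFormat.getOutputPath(conf);    // mapred.output.dir
+    Path workDir = FileOutputFormat.getWorkOutputPath(conf);  // mapred.work.output.dir (task side)
+    System.out.println(outputDir + " " + workDir);
+  }
+}
+```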
+
+
+---
+
+* [HADOOP-3040](https://issues.apache.org/jira/browse/HADOOP-3040) | *Major* | 
**Streaming should assume an empty key if the first character on a line is the 
seperator (stream.map.output.field.separator, by default, tab)**
+
+If the first character on a line is the separator, an empty key is assumed and the whole line is the value (a bug previously prevented this behavior).
+
+
+---
+
+* [HADOOP-3001](https://issues.apache.org/jira/browse/HADOOP-3001) | *Blocker* 
| **FileSystems should track how many bytes are read and written**
+
+Adds new framework map/reduce counters that track the number of bytes read and 
written to HDFS, local, KFS, and S3 file systems.
+
+
+---
+
+* [HADOOP-2982](https://issues.apache.org/jira/browse/HADOOP-2982) | *Blocker* 
| **[HOD] checknodes should look for free nodes without the jobs attribute**
+
+The number of free nodes in the cluster is computed using a better algorithm 
that filters out inconsistencies in node status as reported by Torque.
+
+
+---
+
+* [HADOOP-2947](https://issues.apache.org/jira/browse/HADOOP-2947) | *Blocker* 
| **[HOD] Hod should redirect stderr and stdout of Hadoop daemons to assist 
debugging**
+
+The stdout and stderr streams of daemons are redirected to files created under the hadoop log directory. Users can now send a SIGQUIT (kill -3) to the daemons to get stack traces and thread dumps for debugging.
+
+
+---
+
+* [HADOOP-2899](https://issues.apache.org/jira/browse/HADOOP-2899) | *Major* | 
**[HOD] hdfs:///mapredsystem directory not cleaned up after deallocation**
+
+The mapred system directory generated by HOD is cleaned up at cluster 
deallocation time.
+
+
+---
+
+* [HADOOP-2873](https://issues.apache.org/jira/browse/HADOOP-2873) | *Major* | 
**Namenode fails to re-start after cluster shutdown - DFSClient: Could not 
obtain blocks even all datanodes were up & live**
+
+**WARNING: No release note provided for this incompatible change.**
+
+
+---
+
+* [HADOOP-2855](https://issues.apache.org/jira/browse/HADOOP-2855) | *Blocker* 
| **[HOD] HOD fails to allocate a cluster if the tarball specified is a 
relative path**
+
+Changes were made to handle relative paths correctly for important HOD options 
such as the cluster directory, tarball option, and script file.
+
+
+---
+
+* [HADOOP-2854](https://issues.apache.org/jira/browse/HADOOP-2854) | *Blocker* 
| **Remove the deprecated ipc.Server.getUserInfo()**
+
+Removes deprecated method Server.getUserInfo()
+
+
+---
+
+* [HADOOP-2839](https://issues.apache.org/jira/browse/HADOOP-2839) | *Blocker* 
| **Remove deprecated methods in FileSystem**
+
+Removes deprecated API FileSystem#globPaths()
+
+
+---
+
+* [HADOOP-2831](https://issues.apache.org/jira/browse/HADOOP-2831) | *Blocker* 
| **Remove the deprecated INode.getAbsoluteName()**
+
+Removes deprecated method INode#getAbsoluteName()
+
+
+---
+
+* [HADOOP-2828](https://issues.apache.org/jira/browse/HADOOP-2828) | *Major* | 
**Remove deprecated methods in Configuration.java**
+
+The following deprecated methods in org.apache.hadoop.conf.Configuration are removed:
+public Object getObject(String name)
+public void setObject(String name, Object value)
+public Object get(String name, Object defaultValue)
+public void set(String name, Object value)
+public Iterator entries()
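+
+A migration sketch for code that used the removed Object-valued accessors; switching to the String-valued accessors is one possible replacement, and the key and value shown are illustrative:
+
+```java
+import org.apache.hadoop.conf.Configuration;
+
+public class ConfMigrationExample {
+  public static void main(String[] args) {
+    Configuration conf = new Configuration();
+    // Previously: conf.setObject("example.key", someObject);
+    //             Object v = conf.getObject("example.key");
+    conf.set("example.key", "example-value");
+    String value = conf.get("example.key", "default-value");
+    System.out.println(value);
+  }
+}
+```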
+
+
+---
+
+* [HADOOP-2826](https://issues.apache.org/jira/browse/HADOOP-2826) | *Major* | 
**FileSplit.getFile(), LineRecordReader. readLine() need to be removed**
+
+The deprecated methods public File org.apache.hadoop.mapred.FileSplit.getFile() and public static long org.apache.hadoop.mapred.LineRecordReader.readLine(InputStream in, OutputStream out) are removed.
+The constructor org.apache.hadoop.mapred.LineRecordReader.LineReader(InputStream in, Configuration conf) is made public.
+The signature of the public org.apache.hadoop.streaming.UTF8ByteArrayUtils.readLine(InputStream) method is changed to UTF8ByteArrayUtils.readLine(LineReader, Text). Since the old signature was not deprecated, any code using the old method must be changed to use the new method.
+
+
+---
+
+* [HADOOP-2825](https://issues.apache.org/jira/browse/HADOOP-2825) | *Major* | 
**MapOutputLocation.getFile() needs to be removed**
+
+The deprecated method, public long 
org.apache.hadoop.mapred.MapOutputLocation.getFile(FileSystem fileSys, Path 
localFilename, int reduce, Progressable pingee, int timeout) is removed.
+
+
+---
+
+* [HADOOP-2824](https://issues.apache.org/jira/browse/HADOOP-2824) | *Major* | 
**One of MiniMRCluster constructors needs tobe removed**
+
+The deprecated constructor 
org.apache.hadoop.mapred.MiniMRCluster.MiniMRCluster(int jobTrackerPort, int 
taskTrackerPort, int numTaskTrackers, String namenode, boolean 
taskTrackerFirst) is removed.
+
+
+---
+
+* [HADOOP-2823](https://issues.apache.org/jira/browse/HADOOP-2823) | *Major* | 
**SimpleCharStream.getColumn(),  getLine() methods to be removed.**
+
+The deprecated methods public int getColumn() and public int getLine() in org.apache.hadoop.record.compiler.generated.SimpleCharStream are removed.
+
+
+---
+
+* [HADOOP-2822](https://issues.apache.org/jira/browse/HADOOP-2822) | *Major* | 
**Remove deprecated classes in mapred**
+
+The deprecated classes org.apache.hadoop.mapred.InputFormatBase and 
org.apache.hadoop.mapred.PhasedFileSystem are removed.
+
+
+---
+
+* [HADOOP-2821](https://issues.apache.org/jira/browse/HADOOP-2821) | *Major* | 
**Remove deprecated classes in util**
+
+The deprecated classes org.apache.hadoop.util.ShellUtil and 
org.apache.hadoop.util.ToolBase are removed.
+
+
+---
+
+* [HADOOP-2820](https://issues.apache.org/jira/browse/HADOOP-2820) | *Major* | 
**Remove deprecated classes in streaming**
+
+The deprecated classes org.apache.hadoop.streaming.StreamLineRecordReader,  
org.apache.hadoop.streaming.StreamOutputFormat and 
org.apache.hadoop.streaming.StreamSequenceRecordReader are removed
+
+
+---
+
+* [HADOOP-2819](https://issues.apache.org/jira/browse/HADOOP-2819) | *Major* | 
**Remove deprecated methods in JobConf()**
+
+The following deprecated methods are removed from org.apache.hadoop.mapred.JobConf:
+public Class getInputKeyClass()
+public void setInputKeyClass(Class theClass)
+public Class getInputValueClass()
+public void setInputValueClass(Class theClass)
+
+The methods public boolean org.apache.hadoop.mapred.JobConf.getSpeculativeExecution() and public void org.apache.hadoop.mapred.JobConf.setSpeculativeExecution(boolean speculativeExecution) are undeprecated.
+
+
+---
+
+* [HADOOP-2818](https://issues.apache.org/jira/browse/HADOOP-2818) | *Major* | 
**Remove deprecated Counters.getDisplayName(),  getCounterNames(),   
getCounter(String counterName)**
+
+The deprecated methods public String 
org.apache.hadoop.mapred.Counters.getDisplayName(String counter) and 
+public synchronized Collection\<String\> 
org.apache.hadoop.mapred.Counters.getCounterNames() are removed.
+The previously deprecated method public synchronized long org.apache.hadoop.mapred.Counters.getCounter(String counterName) is undeprecated.
+
+
+---
+
+* [HADOOP-2817](https://issues.apache.org/jira/browse/HADOOP-2817) | *Major* | 
**Remove deprecated mapred.tasktracker.tasks.maximum and 
clusterStatus.getMaxTasks()**
+
+The deprecated method public int 
org.apache.hadoop.mapred.ClusterStatus.getMaxTasks() is removed.
+The deprecated configuration property "mapred.tasktracker.tasks.maximum" is 
removed.
+
+
+---
+
+* [HADOOP-2796](https://issues.apache.org/jira/browse/HADOOP-2796) | *Major* | 
**For script option hod should exit with distinguishable exit codes for script 
code and hod exit code.**
+
+A provision to reliably detect a failing script's exit code was added. If the script run through the hod script option returns a non-zero exit code, users can now look for a 'script.exitcode' file written to the HOD cluster directory. If this file is present, the script failed with the exit code recorded in it.
+
+
+---
+
+* [HADOOP-2775](https://issues.apache.org/jira/browse/HADOOP-2775) | *Major* | 
**[HOD] Put in place unit test framework for HOD**
+
+A unit testing framework based on pyunit is added to HOD. Developers 
contributing patches to HOD should now contribute unit tests along with the 
patches where possible.
+
+
+---
+
+* [HADOOP-2765](https://issues.apache.org/jira/browse/HADOOP-2765) | *Major* | 
**setting memory limits for tasks**
+
+This feature enables specifying ulimits for streaming/pipes tasks. Pipes and streaming tasks now have the same virtual memory available as the Java process that invokes them. The ulimit value will be the same as the -Xmx value provided for Java processes via mapred.child.java.opts.
+
+
+---
+
+* [HADOOP-2758](https://issues.apache.org/jira/browse/HADOOP-2758) | *Major* | 
**Reduce memory copies when data is read from DFS**
+
+DataNode takes 50% less CPU while serving data to clients.
+
+
+---
+
+* [HADOOP-2657](https://issues.apache.org/jira/browse/HADOOP-2657) | *Major* | 
**Enhancements to DFSClient to support flushing data at any point in time**
+
+A new API DFSOutputStream.flush() flushes all outstanding data to the pipeline of datanodes.
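+
+For illustration only, a minimal client-side sketch (not part of the original note; the file path is made up) of writing data and flushing it to the pipeline of datanodes:
+
+FileSystem fs = FileSystem.get(conf);
+FSDataOutputStream out = fs.create(new Path("/tmp/flush-example.txt"));
+out.writeBytes("partial record");
+// Push all outstanding data to the datanodes without closing the stream.
+out.flush();
+out.writeBytes("rest of the record");
+out.close();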
+
+
+---
+
+* [HADOOP-2634](https://issues.apache.org/jira/browse/HADOOP-2634) | *Blocker* 
| **Deprecate exists() and isDir() to simplify ClientProtocol.**
+
+Deprecates exists() from ClientProtocol
+
+
+---
+
+* [HADOOP-2563](https://issues.apache.org/jira/browse/HADOOP-2563) | *Blocker* 
| **Remove deprecated FileSystem#listPaths()**
+
+Removes deprecated method FileSystem#listPaths()
+
+
+---
+
+* [HADOOP-2559](https://issues.apache.org/jira/browse/HADOOP-2559) | *Major* | 
**DFS should place one replica per rack**
+
+Change DFS block placement to allocate the first replica locally, the second 
off-rack, and the third intra-rack from the second.
+
+
+---
+
+* [HADOOP-2551](https://issues.apache.org/jira/browse/HADOOP-2551) | *Blocker* 
| **hadoop-env.sh needs finer granularity**
+
+New environment variables were introduced to allow finer grained control of 
Java options passed to server and client JVMs.  See the new *\_OPTS variables 
in conf/hadoop-env.sh.
+
+
+---
+
+* [HADOOP-2470](https://issues.apache.org/jira/browse/HADOOP-2470) | *Major* | 
**Open and isDir should be removed from ClientProtocol**
+
+Open and isDir were removed from ClientProtocol.
+
+
+---
+
+* [HADOOP-2423](https://issues.apache.org/jira/browse/HADOOP-2423) | *Major* | 
**The codes in FSDirectory.mkdirs(...) is inefficient.**
+
+Improved FSDirectory.mkdirs(...) performance. In NNThroughputBenchmark-create, the operations per second improved by ~54%.
+
+
+---
+
+* [HADOOP-2410](https://issues.apache.org/jira/browse/HADOOP-2410) | *Major* | 
**Make EC2 cluster nodes more independent of each other**
+
+The command "hadoop-ec2 run" has been replaced by "hadoop-ec2 launch-cluster 
\<group\> \<number of instances\>", and "hadoop-ec2 start-hadoop" has been 
removed since Hadoop is started on instance start up. See 
http://wiki.apache.org/hadoop/AmazonEC2 for details.
+
+
+---
+
+* [HADOOP-2399](https://issues.apache.org/jira/browse/HADOOP-2399) | *Major* | 
**Input key and value to combiner and reducer should be reused**
+
+The key and value objects that are given to the Combiner and Reducer are now reused between calls. This is much more efficient, but the user can no longer assume the objects are constant between calls.
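+
+As an illustrative sketch (not from the original note; the Text types and the copy constructor are assumptions for the example), a reducer that wants to keep values beyond the current call must copy them:
+
+public void reduce(Text key, Iterator\<Text\> values,
+    OutputCollector\<Text, Text\> output, Reporter reporter) throws IOException {
+  List\<Text\> kept = new ArrayList\<Text\>();
+  while (values.hasNext()) {
+    // The framework may hand back the same Text instance on every call to
+    // next(), so copy the value before holding a reference to it.
+    kept.add(new Text(values.next()));
+  }
+  // ... use 'kept' safely here ...
+}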
+
+
+---
+
+* [HADOOP-2345](https://issues.apache.org/jira/browse/HADOOP-2345) | *Major* | 
**new transactions to support HDFS Appends**
+
+Introduce new namenode transactions to support appending to HDFS files.
+
+
+---
+
+* [HADOOP-2239](https://issues.apache.org/jira/browse/HADOOP-2239) | *Major* | 
**Security:  Need to be able to encrypt Hadoop socket connections**
+
+This patch adds a new FileSystem, HftpsFileSystem, that allows access to HDFS 
data over HTTPS.
+
+
+---
+
+* [HADOOP-2219](https://issues.apache.org/jira/browse/HADOOP-2219) | *Major* | 
**du like command to count number of files under a given directory**
+
+Added a new fs command fs -count for counting the number of bytes, files and 
directories under a given path.
+
+Added a new RPC getContentSummary(String path) to ClientProtocol.
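+
+A minimal programmatic sketch (the client-side method and the path are assumptions for illustration) of retrieving the same counts through FileSystem:
+
+FileSystem fs = FileSystem.get(conf);
+ContentSummary summary = fs.getContentSummary(new Path("/user/alice/data"));
+// Prints the directory, file and byte counts for the given path.
+System.out.println(summary.getDirectoryCount() + " "
+    + summary.getFileCount() + " " + summary.getLength());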
+
+
+---
+
+* [HADOOP-2192](https://issues.apache.org/jira/browse/HADOOP-2192) | *Major* | 
**dfs mv command differs from POSIX standards**
+
+This patch makes dfs -mv behave more like the Linux mv command: it removes unnecessary output from dfs -mv and returns an error message when moving non-existent files/directories, e.g. mv: cannot stat "filename": No such file or directory.
+
+
+---
+
+* [HADOOP-2178](https://issues.apache.org/jira/browse/HADOOP-2178) | *Major* | 
**Job history on HDFS**
+
+This feature provides the facility to store job history on DFS. A cluster admin can now provide either a local FS location or a DFS location via the configuration property "mapred.job.history.location" to store job history. History is also logged in a user-specified location, which can be set via the configuration property "mapred.job.history.user.location".
+The classes org.apache.hadoop.mapred.DefaultJobHistoryParser.MasterIndex and org.apache.hadoop.mapred.DefaultJobHistoryParser.MasterIndexParseListener, and the public method org.apache.hadoop.mapred.DefaultJobHistoryParser.parseMasterIndex, are no longer available.
+The signature of the public method org.apache.hadoop.mapred.DefaultJobHistoryParser.parseJobTasks(File jobHistoryFile, JobHistory.JobInfo job) is changed to DefaultJobHistoryParser.parseJobTasks(String jobHistoryFile, JobHistory.JobInfo job, FileSystem fs).
+The signature of the public method org.apache.hadoop.mapred.JobHistory.parseHistory(File path, Listener l) is changed to JobHistory.parseHistoryFromFS(String path, Listener l, FileSystem fs).
+
+
+---
+
+* [HADOOP-2119](https://issues.apache.org/jira/browse/HADOOP-2119) | 
*Critical* | **JobTracker becomes non-responsive if the task trackers finish 
task too fast**
+
+This removes many inefficiencies in task placement and scheduling logic. The 
JobTracker would perform linear scans of the list of submitted tasks in cases 
where it did not find an obvious candidate task for a node. With better data 
structures for managing job state, all task placement operations now run in 
constant time (in most cases). Also, the task output promotions are batched.
+
+
+---
+
+* [HADOOP-2116](https://issues.apache.org/jira/browse/HADOOP-2116) | *Major* | 
**Job.local.dir to be exposed to tasks**
+
+This issue restructures the local job directory on the tasktracker.
+Users are provided with a job-specific shared directory (mapred-local/taskTracker/jobcache/$jobid/work) to use as scratch space, exposed through the configuration property and system property "job.local.dir". The directory "../work" is no longer available from the task's cwd.
+
+
+---
+
+* [HADOOP-2063](https://issues.apache.org/jira/browse/HADOOP-2063) | *Blocker* 
| **Command to pull corrupted files**
+
+Added a new option -ignoreCrc to fs -get (or equivalently, fs -copyToLocal) so that the CRC checksum is ignored for the command. This option is useful for downloading corrupted files.
+
+
+---
+
+* [HADOOP-2055](https://issues.apache.org/jira/browse/HADOOP-2055) | *Minor* | 
**JobConf should have a setInputPathFilter method**
+
+This issue gives users the ability to specify which paths in the job input directory to ignore for processing (apart from filenames that start with "\_" and "."). It defines two new APIs: FileInputFormat.setInputPathFilter(JobConf, PathFilter) and FileInputFormat.getInputPathFilter(JobConf).
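+
+A hedged sketch of such a filter (the TmpFileFilter name and the ".tmp" convention are made up for the example):
+
+public static class TmpFileFilter implements PathFilter {
+  public boolean accept(Path p) {
+    // Skip any input path ending in ".tmp"; everything else is processed.
+    return !p.getName().endsWith(".tmp");
+  }
+}
+// The filter is then registered on the job through
+// FileInputFormat.setInputPathFilter(...) as described above.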
+
+
+---
+
+* [HADOOP-2027](https://issues.apache.org/jira/browse/HADOOP-2027) | *Major* | 
**FileSystem should provide byte ranges for file locations**
+
+A new FileSystem API, getFileBlockLocations, returns the byte range of each block in a file via a single RPC to the namenode, to speed up job planning. It deprecates getFileCacheHints.
+
+
+---
+
+* [HADOOP-1986](https://issues.apache.org/jira/browse/HADOOP-1986) | *Major* | 
**Add support for a general serialization mechanism for Map Reduce**
+
+Programs that implement the raw Mapper or Reducer interfaces will need 
modification to compile with this release. For example, 
+
+class MyMapper implements Mapper {
+  public void map(WritableComparable key, Writable val,
+    OutputCollector out, Reporter reporter) throws IOException {
+    // ...
+  }
+  // ...
+}
+
+will need to be changed to refer to the parameterized type. For example:
+
+class MyMapper implements Mapper\<WritableComparable, Writable, 
WritableComparable, Writable\> {
+  public void map(WritableComparable key, Writable val,
+    OutputCollector\<WritableComparable, Writable\> out, Reporter reporter) 
throws IOException {
+    // ...
+  }
+  // ...
+}
+
+Similarly, implementations of the following raw interfaces will need modification: InputFormat, OutputCollector, OutputFormat, Partitioner, RecordReader, RecordWriter.
+
+
+---
+
+* [HADOOP-1985](https://issues.apache.org/jira/browse/HADOOP-1985) | *Major* | 
**Abstract node to switch mapping into a topology service class used by 
namenode and jobtracker**
+
+This issue introduces rack awareness for map tasks. It also moves the rack resolution logic to the central servers - NameNode & JobTracker. The administrator can specify a loadable class, given by topology.node.switch.mapping.impl, implementing the logic for rack resolution. The class must implement a method - resolve(List\<String\> names), where names is the list of DNS names/IP addresses to be resolved. The return value is a list of resolved network paths of the form /foo/rack, where rack is the rack ID to which the node belongs and foo is the switch where multiple racks are connected, and so on. The default implementation of this class, packaged along with Hadoop, is org.apache.hadoop.net.ScriptBasedMapping; it loads a script that can be used for rack resolution. The script location is configurable: it is specified by topology.script.file.name and defaults to an empty script. When the script name is empty, /default-rack is returned for all DNS names/IP addresses. The loadable topology.node.switch.mapping.impl gives administrators the flexibility to define how their site's node resolution should happen.
+For mapred, one can also specify the level of the cache with respect to the number of levels in the resolved network path; it defaults to two, meaning the JobTracker will cache tasks at the host level and at the rack level.
+Known issue: task caching will not work with levels greater than 2 (beyond racks). This bug is tracked in HADOOP-3296.
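+
+As an illustrative sketch (the DNSToSwitchMapping interface name and the host-naming convention are assumptions here; only the resolve(List\<String\>) contract is stated above), a simple static mapping might look like:
+
+public class StaticRackMapping implements DNSToSwitchMapping {
+  public List\<String\> resolve(List\<String\> names) {
+    List\<String\> paths = new ArrayList\<String\>(names.size());
+    for (String name : names) {
+      // Hosts whose names start with "dn1" live in rack1 behind switch sw1;
+      // everything else falls back to the default rack.
+      paths.add(name.startsWith("dn1") ? "/sw1/rack1" : "/default-rack");
+    }
+    return paths;
+  }
+}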
+
+
+---
+
+* [HADOOP-1622](https://issues.apache.org/jira/browse/HADOOP-1622) | *Major* | 
**Hadoop should provide a way to allow the user to specify jar file(s) the user 
job depends on**
+
+This patch adds new command line options for
+
+hadoop jar
+
+which are
+
+hadoop jar -files \<comma separated list of files\> -libjars \<comma separated list of jars\> -archives \<comma separated list of archives\>
+
+The -files option allows you to specify a comma-separated list of paths which will be present in the current working directory of your task.
+The -libjars option allows you to add jars to the classpaths of the maps and reduces.
+The -archives option allows you to pass archives as arguments; they are unzipped/unjarred and a link with the name of the jar/zip is created in the current working directory of the tasks.
+
+
+---
+
+* [HADOOP-1593](https://issues.apache.org/jira/browse/HADOOP-1593) | *Major* | 
**FsShell should work with paths in non-default FileSystem**
+
+This fix allows a non-default path to be specified in FsShell commands.
+
+So, you can now run hadoop dfs -ls hdfs://remotehost1:port/path and hadoop dfs -ls hdfs://remotehost2:port/path without changing the config.
+
+
+---
+
+* [HADOOP-910](https://issues.apache.org/jira/browse/HADOOP-910) | *Major* | 
**Reduces can do merges for the on-disk map output files in parallel with their 
copying**
+
+Reducers now perform merges of shuffle data (both in-memory and on disk) while 
fetching map outputs. Earlier, during shuffle they used to merge only the 
in-memory outputs.
+
+
+---
+
+* [HADOOP-771](https://issues.apache.org/jira/browse/HADOOP-771) | *Major* | 
**Namenode should return error when trying to delete non-empty directory**
+
+This patch adds a new API to FileSystem, delete(path, boolean), deprecating the previous delete(path).
+The new API recursively deletes files only if the boolean is set to true.
+If path is a file, the boolean value does not matter; if path is a directory and the directory is non-empty, delete(path, false) will throw an exception and delete(path, true) will delete all files recursively.
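+
+A minimal usage sketch (paths are made up for the example) of the new call:
+
+FileSystem fs = FileSystem.get(conf);
+// Recursive delete: removes the directory and everything underneath it.
+fs.delete(new Path("/user/alice/output"), true);
+// Non-recursive delete: succeeds for files and empty directories only;
+// a non-empty directory makes it throw an exception.
+fs.delete(new Path("/user/alice/empty-dir"), false);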
+
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d759b4bd/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.1/CHANGES.0.17.1.md
----------------------------------------------------------------------
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.1/CHANGES.0.17.1.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.1/CHANGES.0.17.1.md
new file mode 100644
index 0000000..991cbd7
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.1/CHANGES.0.17.1.md
@@ -0,0 +1,74 @@
+
+<!---
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+-->
+# Apache Hadoop Changelog
+
+## Release 0.17.1 - 2008-06-23
+
+### INCOMPATIBLE CHANGES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-3565](https://issues.apache.org/jira/browse/HADOOP-3565) | 
JavaSerialization can throw java.io.StreamCorruptedException |  Major | . | Tom 
White | Tom White |
+
+
+### NEW FEATURES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+
+
+### IMPROVEMENTS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+
+
+### BUG FIXES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-3550](https://issues.apache.org/jira/browse/HADOOP-3550) | Reduce 
tasks failing with OOM |  Blocker | . | Arun C Murthy | Chris Douglas |
+| [HADOOP-3526](https://issues.apache.org/jira/browse/HADOOP-3526) | 
contrib/data\_join doesn't work |  Blocker | . | Spyros Blanas | Spyros Blanas |
+| [HADOOP-3522](https://issues.apache.org/jira/browse/HADOOP-3522) | 
ValuesIterator.next() doesn't return a new object, thus failing many equals() 
tests. |  Major | . | Spyros Blanas | Owen O'Malley |
+| [HADOOP-3477](https://issues.apache.org/jira/browse/HADOOP-3477) | release 
tar.gz contains duplicate files |  Major | build | Adam Heath | Adam Heath |
+| [HADOOP-3475](https://issues.apache.org/jira/browse/HADOOP-3475) | 
MapOutputBuffer allocates 4x as much space to record capacity as intended |  
Major | . | Chris Douglas | Chris Douglas |
+| [HADOOP-3472](https://issues.apache.org/jira/browse/HADOOP-3472) | 
MapFile.Reader getClosest() function returns incorrect results when before is 
true |  Major | io | Todd Lipcon | stack |
+| [HADOOP-3442](https://issues.apache.org/jira/browse/HADOOP-3442) | QuickSort 
may get into unbounded recursion |  Blocker | . | Runping Qi | Chris Douglas |
+| [HADOOP-2159](https://issues.apache.org/jira/browse/HADOOP-2159) | Namenode 
stuck in safemode |  Major | . | Christian Kunz | Hairong Kuang |
+| [HADOOP-1979](https://issues.apache.org/jira/browse/HADOOP-1979) | fsck on 
namenode without datanodes takes too much time |  Minor | . | Koji Noguchi | 
Lohit Vijayarenu |
+
+
+### TESTS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+
+
+### SUB-TASKS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+
+
+### OTHER:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d759b4bd/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.1/RELEASENOTES.0.17.1.md
----------------------------------------------------------------------
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.1/RELEASENOTES.0.17.1.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.1/RELEASENOTES.0.17.1.md
new file mode 100644
index 0000000..7cc43a3
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.1/RELEASENOTES.0.17.1.md
@@ -0,0 +1,38 @@
+
+<!---
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+-->
+# Apache Hadoop  0.17.1 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, 
features, and major improvements.
+
+
+---
+
+* [HADOOP-3565](https://issues.apache.org/jira/browse/HADOOP-3565) | *Major* | 
**JavaSerialization can throw java.io.StreamCorruptedException**
+
+Change the Java serialization framework, which is not enabled by default, to 
correctly make the objects independent of the previous objects.
+
+
+---
+
+* [HADOOP-1979](https://issues.apache.org/jira/browse/HADOOP-1979) | *Minor* | 
**fsck on namenode without datanodes takes too much time**
+
+Improved performance of fsck by better management of the data stream on the client side.
+
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d759b4bd/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.2/CHANGES.0.17.2.md
----------------------------------------------------------------------
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.2/CHANGES.0.17.2.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.2/CHANGES.0.17.2.md
new file mode 100644
index 0000000..2ee5df6
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.2/CHANGES.0.17.2.md
@@ -0,0 +1,77 @@
+
+<!---
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+-->
+# Apache Hadoop Changelog
+
+## Release 0.17.2 - 2008-08-11
+
+### INCOMPATIBLE CHANGES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+
+
+### NEW FEATURES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+
+
+### IMPROVEMENTS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+
+
+### BUG FIXES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-4773](https://issues.apache.org/jira/browse/HADOOP-4773) | namenode 
startup error, hadoop-user-namenode.pid permission denied. |  Critical | . | 
Focus |  |
+| [HADOOP-3931](https://issues.apache.org/jira/browse/HADOOP-3931) | Bug in 
MapTask.MapOutputBuffer.collect leads to an unnecessary and harmful 'reset' |  
Blocker | . | Arun C Murthy | Chris Douglas |
+| [HADOOP-3859](https://issues.apache.org/jira/browse/HADOOP-3859) | 1000  
concurrent read on a single file failing  the task/client |  Blocker | . | Koji 
Noguchi | Johan Oskarsson |
+| [HADOOP-3813](https://issues.apache.org/jira/browse/HADOOP-3813) | RPC queue 
overload of JobTracker |  Major | . | Christian Kunz | Amareshwari Sriramadasu |
+| [HADOOP-3760](https://issues.apache.org/jira/browse/HADOOP-3760) | DFS 
operations fail because of Stream closed error |  Blocker | . | Amar Kamat | 
Lohit Vijayarenu |
+| [HADOOP-3758](https://issues.apache.org/jira/browse/HADOOP-3758) | Excessive 
exceptions in HDFS namenode log file |  Blocker | . | Jim Huang | Lohit 
Vijayarenu |
+| [HADOOP-3707](https://issues.apache.org/jira/browse/HADOOP-3707) | Frequent 
DiskOutOfSpaceException on almost-full datanodes |  Blocker | . | Koji Noguchi 
| Raghu Angadi |
+| [HADOOP-3685](https://issues.apache.org/jira/browse/HADOOP-3685) | 
Unbalanced replication target |  Blocker | . | Koji Noguchi | Hairong Kuang |
+| [HADOOP-3681](https://issues.apache.org/jira/browse/HADOOP-3681) | Infinite 
loop in dfs close |  Blocker | . | Koji Noguchi | Lohit Vijayarenu |
+| [HADOOP-3678](https://issues.apache.org/jira/browse/HADOOP-3678) | Avoid 
spurious "DataXceiver: java.io.IOException: Connection reset by peer" errors in 
DataNode log |  Blocker | . | Raghu Angadi | Raghu Angadi |
+| [HADOOP-3633](https://issues.apache.org/jira/browse/HADOOP-3633) | Uncaught 
exception in DataXceiveServer |  Blocker | . | Koji Noguchi | Konstantin 
Shvachko |
+| [HADOOP-3370](https://issues.apache.org/jira/browse/HADOOP-3370) | failed 
tasks may stay forever in TaskTracker.runningJobs |  Critical | . | Zheng Shao 
| Zheng Shao |
+| [HADOOP-3002](https://issues.apache.org/jira/browse/HADOOP-3002) | HDFS 
should not remove blocks while in safemode. |  Blocker | . | Konstantin 
Shvachko | Konstantin Shvachko |
+
+
+### TESTS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+
+
+### SUB-TASKS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+
+
+### OTHER:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d759b4bd/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.2/RELEASENOTES.0.17.2.md
----------------------------------------------------------------------
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.2/RELEASENOTES.0.17.2.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.2/RELEASENOTES.0.17.2.md
new file mode 100644
index 0000000..22c90b8
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.2/RELEASENOTES.0.17.2.md
@@ -0,0 +1,52 @@
+
+<!---
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+-->
+# Apache Hadoop  0.17.2 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, 
features, and major improvements.
+
+
+---
+
+* [HADOOP-3859](https://issues.apache.org/jira/browse/HADOOP-3859) | *Blocker* 
| **1000  concurrent read on a single file failing  the task/client**
+
+Allows the user to change the maximum number of xceivers in the datanode.
+
+
+---
+
+* [HADOOP-3760](https://issues.apache.org/jira/browse/HADOOP-3760) | *Blocker* 
| **DFS operations fail because of Stream closed error**
+
+Fix a bug with HDFS file close() mistakenly introduced by HADOOP-3681.
+
+
+---
+
+* [HADOOP-3707](https://issues.apache.org/jira/browse/HADOOP-3707) | *Blocker* 
| **Frequent DiskOutOfSpaceException on almost-full datanodes**
+
+NameNode keeps a count of number of blocks scheduled to be written to a 
datanode and uses it to avoid allocating more blocks than a datanode can hold.
+
+
+---
+
+* [HADOOP-3678](https://issues.apache.org/jira/browse/HADOOP-3678) | *Blocker* 
| **Avoid spurious "DataXceiver: java.io.IOException: Connection reset by peer" 
errors in DataNode log**
+
+Avoid spurious exceptions logged at DataNode when clients read from DFS.
+
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d759b4bd/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.3/CHANGES.0.17.3.md
----------------------------------------------------------------------
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.3/CHANGES.0.17.3.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.3/CHANGES.0.17.3.md
new file mode 100644
index 0000000..1b8a3ed
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.3/CHANGES.0.17.3.md
@@ -0,0 +1,70 @@
+
+<!---
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+-->
+# Apache Hadoop Changelog
+
+## Release 0.17.3 - Unreleased
+
+### INCOMPATIBLE CHANGES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+
+
+### NEW FEATURES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+
+
+### IMPROVEMENTS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+
+
+### BUG FIXES:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-4326](https://issues.apache.org/jira/browse/HADOOP-4326) | 
ChecksumFileSystem does not override all create(...) methods |  Blocker | fs | 
Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
+| [HADOOP-4318](https://issues.apache.org/jira/browse/HADOOP-4318) | distcp 
fails |  Blocker | . | Christian Kunz | Tsz Wo Nicholas Sze |
+| [HADOOP-4277](https://issues.apache.org/jira/browse/HADOOP-4277) | Checksum 
verification is disabled for LocalFS |  Blocker | . | Raghu Angadi | Raghu 
Angadi |
+| [HADOOP-4271](https://issues.apache.org/jira/browse/HADOOP-4271) | Bug in 
FSInputChecker makes it possible to read from an invalid buffer |  Blocker | fs 
| Ning Li | Ning Li |
+| [HADOOP-3217](https://issues.apache.org/jira/browse/HADOOP-3217) | [HOD] Be 
less agressive when querying job status from resource manager. |  Blocker | 
contrib/hod | Hemanth Yamijala | Hemanth Yamijala |
+
+
+### TESTS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+
+
+### SUB-TASKS:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+
+
+### OTHER:
+
+| JIRA | Summary | Priority | Component | Reporter | Contributor |
+|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-4164](https://issues.apache.org/jira/browse/HADOOP-4164) | Chinese 
translation of core docs |  Major | documentation | Xuebing Yan | Xuebing Yan |
+
+

http://git-wip-us.apache.org/repos/asf/hadoop/blob/d759b4bd/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.3/RELEASENOTES.0.17.3.md
----------------------------------------------------------------------
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.3/RELEASENOTES.0.17.3.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.3/RELEASENOTES.0.17.3.md
new file mode 100644
index 0000000..dd01926
--- /dev/null
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.17.3/RELEASENOTES.0.17.3.md
@@ -0,0 +1,45 @@
+
+<!---
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+-->
+# Apache Hadoop  0.17.3 Release Notes
+
+These release notes cover new developer and user-facing incompatibilities, 
features, and major improvements.
+
+
+---
+
+* [HADOOP-4277](https://issues.apache.org/jira/browse/HADOOP-4277) | *Blocker* 
| **Checksum verification is disabled for LocalFS**
+
+Checksum verification was mistakenly disabled for LocalFileSystem.
+
+
+---
+
+* [HADOOP-4271](https://issues.apache.org/jira/browse/HADOOP-4271) | *Blocker* 
| **Bug in FSInputChecker makes it possible to read from an invalid buffer**
+
+Checksum input stream can sometimes return invalid data to the user.
+
+
+---
+
+* [HADOOP-4164](https://issues.apache.org/jira/browse/HADOOP-4164) | *Major* | 
**Chinese translation of core docs**
+
+Chinese translation for hadoop 0.17.x core docs.
+
+
+
