http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/RELEASENOTES.2.3.0.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/RELEASENOTES.2.3.0.md b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/RELEASENOTES.2.3.0.md
index 43dc922..ad29c29 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/RELEASENOTES.2.3.0.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/2.3.0/RELEASENOTES.2.3.0.md
@@ -23,13 +23,6 @@ These release notes cover new developer and user-facing incompatibilities, impor
 
 ---
 
-* [HADOOP-10047](https://issues.apache.org/jira/browse/HADOOP-10047) | *Major* | **Add a directbuffer Decompressor API to hadoop**
-
-Direct ByteBuffer decompressors for Zlib (Deflate & Gzip) and Snappy.
-
-
----
-
 * [HADOOP-9241](https://issues.apache.org/jira/browse/HADOOP-9241) | *Trivial* | **DU refresh interval is not configurable**
 
 The 'du' (disk usage command from Unix) script refresh monitor is now configurable in the same way as its 'df' counterpart, via the property 'fs.du.interval', the default of which is 10 minutes (in ms).
@@ -73,21 +66,32 @@ Additional information specified on github: https://github.com/DmitryMezhensky/H
 
 ---
 
-* [HDFS-5704](https://issues.apache.org/jira/browse/HDFS-5704) | *Major* | **Change OP\_UPDATE\_BLOCKS  with a new OP\_ADD\_BLOCK**
+* [MAPREDUCE-1176](https://issues.apache.org/jira/browse/MAPREDUCE-1176) | *Major* | **FixedLengthInputFormat and FixedLengthRecordReader**
 
-Add a new editlog record (OP\_ADD\_BLOCK) that records only the allocation of the new block, instead of the entire block list, on every block allocation.
+Addition of FixedLengthInputFormat and FixedLengthRecordReader in the org.apache.hadoop.mapreduce.lib.input package. These two classes can be used when you need to read data from files containing fixed-length (fixed-width) records. Such files have no CR/LF (or any combination thereof), no delimiters, etc., but each record is a fixed length, and extra data is padded with spaces. The data is one gigantic line within a file. When creating a job that specifies this input format, the job must have the "mapreduce.input.fixedlengthinputformat.record.length" property set as follows: myJobConf.setInt("mapreduce.input.fixedlengthinputformat.record.length", [myFixedRecordLength]);
+
+Please see javadoc for more details.
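
Purely as an illustrative sketch (not part of the release note): a map-only driver wired up for this input format. The 20-byte record length, the input/output paths, and the pass-through RecordMapper are assumptions; the format presents each record as a BytesWritable keyed by its position in the file (a LongWritable).

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FixedLengthInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FixedWidthJob {

  // Pass-through mapper: each value holds the raw bytes of one fixed-width record.
  public static class RecordMapper
      extends Mapper<LongWritable, BytesWritable, LongWritable, BytesWritable> {
    @Override
    protected void map(LongWritable key, BytesWritable value, Context context)
        throws IOException, InterruptedException {
      context.write(key, value);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Record length as described in the note above; 20 bytes is an assumed value.
    conf.setInt("mapreduce.input.fixedlengthinputformat.record.length", 20);

    Job job = Job.getInstance(conf, "fixed-width-example");
    job.setJarByClass(FixedWidthJob.class);
    job.setInputFormatClass(FixedLengthInputFormat.class);
    job.setMapperClass(RecordMapper.class);
    job.setNumReduceTasks(0);               // map-only for simplicity
    job.setOutputKeyClass(LongWritable.class);
    job.setOutputValueClass(BytesWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```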
 
 
 ---
 
-* [HDFS-5663](https://issues.apache.org/jira/browse/HDFS-5663) | *Major* | **make the retry time and interval value configurable in openInfo()**
+* [HDFS-5502](https://issues.apache.org/jira/browse/HDFS-5502) | *Major* | **Fix HTTPS support in HsftpFileSystem**
 
-Makes the number of retries and the time between retries for getting the length of the last block on a file configurable. Below are the new configuration properties.
+Fix the HTTPS support in HsftpFileSystem. With this change the client now verifies the server certificate. In particular, the client side will verify the Common Name of the certificate using a strategy specified by the configuration property "hadoop.ssl.hostname.verifier".
 
-dfs.client.retry.times.get-last-block-length
-dfs.client.retry.interval-ms.get-last-block-length
 
-They default to 3 and 4000 respectively, these being the values that were previously hardcoded.
+---
+
+* [HADOOP-10047](https://issues.apache.org/jira/browse/HADOOP-10047) | *Major* | **Add a directbuffer Decompressor API to hadoop**
+
+Direct ByteBuffer decompressors for Zlib (Deflate & Gzip) and Snappy.
+
+
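
A hedged sketch of how the direct-buffer path might be used with Snappy. The DirectDecompressor name and the ByteBuffer-to-ByteBuffer decompress call follow the description in HADOOP-10047, but the exact signatures and buffer-position semantics should be checked against the 2.3.0 javadoc; native Snappy support on the machine is assumed.

```java
import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.DirectDecompressor;
import org.apache.hadoop.io.compress.SnappyCodec;

public class DirectDecompressExample {

  // Decompress one Snappy-compressed block held in a direct buffer into
  // another direct buffer, without copying through an intermediate byte[].
  static void decompressBlock(ByteBuffer compressed, ByteBuffer uncompressed)
      throws IOException {
    SnappyCodec codec = new SnappyCodec();
    codec.setConf(new Configuration());
    // May return null when the native Snappy library is not loaded.
    DirectDecompressor decompressor = codec.createDirectDecompressor();
    if (decompressor == null) {
      throw new IOException("native Snappy support is not available");
    }
    decompressor.decompress(compressed, uncompressed);
    // 'uncompressed' now holds the decompressed bytes.
  }
}
```
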
+---
+
+* [HDFS-4997](https://issues.apache.org/jira/browse/HDFS-4997) | *Major* | **libhdfs doesn't return correct error codes in most cases**
+
+libhdfs now returns correct codes in errno. Previously, due to a bug, many functions set errno to 255 instead of the more specific error code.
 
 
 ---
@@ -108,32 +112,28 @@ hadoop.ssl.enabled and dfs.https.enabled are deprecated. When the deprecated con
 
 ---
 
-* [HDFS-5502](https://issues.apache.org/jira/browse/HDFS-5502) | *Major* | **Fix HTTPS support in HsftpFileSystem**
+* [HDFS-4983](https://issues.apache.org/jira/browse/HDFS-4983) | *Major* | **Numeric usernames do not work with WebHDFS FS**
 
-Fix the HTTPS support in HsftpFileSystem. With this change the client now verifies the server certificate. In particular, the client side will verify the Common Name of the certificate using a strategy specified by the configuration property "hadoop.ssl.hostname.verifier".
+Add a new configuration property "dfs.webhdfs.user.provider.user.pattern" for specifying user name filters for WebHDFS.
 
 
 ---
 
-* [HDFS-4997](https://issues.apache.org/jira/browse/HDFS-4997) | *Major* | **libhdfs doesn't return correct error codes in most cases**
-
-libhdfs now returns correct codes in errno. Previously, due to a bug, many functions set errno to 255 instead of the more specific error code.
-
+* [HDFS-5663](https://issues.apache.org/jira/browse/HDFS-5663) | *Major* | **make the retry time and interval value configurable in openInfo()**
 
----
+Makes the number of retries and the time between retries for getting the length of the last block on a file configurable. Below are the new configuration properties.
 
-* [HDFS-4983](https://issues.apache.org/jira/browse/HDFS-4983) | *Major* | **Numeric usernames do not work with WebHDFS FS**
+dfs.client.retry.times.get-last-block-length
+dfs.client.retry.interval-ms.get-last-block-length
 
-Add a new configuration property "dfs.webhdfs.user.provider.user.pattern" for specifying user name filters for WebHDFS.
+They default to 3 and 4000 respectively, these being the values that were previously hardcoded.
 
 
 ---
 
-* [MAPREDUCE-1176](https://issues.apache.org/jira/browse/MAPREDUCE-1176) | *Major* | **FixedLengthInputFormat and FixedLengthRecordReader**
-
-Addition of FixedLengthInputFormat and FixedLengthRecordReader in the org.apache.hadoop.mapreduce.lib.input package. These two classes can be used when you need to read data from files containing fixed-length (fixed-width) records. Such files have no CR/LF (or any combination thereof), no delimiters, etc., but each record is a fixed length, and extra data is padded with spaces. The data is one gigantic line within a file. When creating a job that specifies this input format, the job must have the "mapreduce.input.fixedlengthinputformat.record.length" property set as follows: myJobConf.setInt("mapreduce.input.fixedlengthinputformat.record.length", [myFixedRecordLength]);
 
+* [HDFS-5704](https://issues.apache.org/jira/browse/HDFS-5704) | *Major* | **Change OP\_UPDATE\_BLOCKS  with a new OP\_ADD\_BLOCK**
 
-Please see javadoc for more details.
+Add a new editlog record (OP\_ADD\_BLOCK) that records only the allocation of the new block, instead of the entire block list, on every block allocation.
 
 
 

