http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.0/RELEASENOTES.0.19.0.md
----------------------------------------------------------------------
diff --git a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.0/RELEASENOTES.0.19.0.md b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.0/RELEASENOTES.0.19.0.md
index 187b087..a7c0fb2 100644
--- a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.0/RELEASENOTES.0.19.0.md
+++ b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.0/RELEASENOTES.0.19.0.md
@@ -23,281 +23,284 @@ These release notes cover new developer and user-facing incompatibilities, impor
 
 ---
 
-* [HADOOP-4466](https://issues.apache.org/jira/browse/HADOOP-4466) | *Blocker* 
| **SequenceFileOutputFormat is coupled to WritableComparable and Writable**
+* [HADOOP-3595](https://issues.apache.org/jira/browse/HADOOP-3595) | *Major* | 
**Remove deprecated mapred.combine.once functionality**
 
-Ensure that SequenceFileOutputFormat isn't tied to Writables and can be used 
with other Serialization frameworks.
+ Removed deprecated methods for mapred.combine.once functionality.
 
 
 ---
 
-* [HADOOP-4433](https://issues.apache.org/jira/browse/HADOOP-4433) | *Major* | 
**Improve data loader for collecting metrics and log files from hadoop and 
system**
+* [HADOOP-2664](https://issues.apache.org/jira/browse/HADOOP-2664) | *Major* | 
**lzop-compatible CompresionCodec**
 
-- Added startup and shutdown script
-- Added torque metrics data loader
-- Improve handling of Exec Plugin
-- Added Test cases for File Tailing Adaptors
-- Added Test cases for Start streaming at specific offset
+Introduced LZOP codec.
 
 
 ---
 
-* [HADOOP-4430](https://issues.apache.org/jira/browse/HADOOP-4430) | *Blocker* 
| **Namenode Web UI capacity report is inconsistent with Balancer**
+* [HADOOP-3667](https://issues.apache.org/jira/browse/HADOOP-3667) | *Major* | 
**Remove deprecated methods in JobConf**
 
-Changed reporting in the NameNode Web UI to more closely reflect the behavior 
of the re-balancer. Removed no longer used config parameter dfs.datanode.du.pct 
from hadoop-default.xml.
+Removed the following deprecated methods from JobConf:
+      addInputPath(Path)
+      getInputPaths()
+      getMapOutputCompressionType()
+      getOutputPath()
+      getSystemDir()
+      setInputPath(Path)
+      setMapOutputCompressionType(CompressionType style)
+      setOutputPath(Path)
 
 
 ---
 
-* [HADOOP-4293](https://issues.apache.org/jira/browse/HADOOP-4293) | *Major* | 
**Remove WritableJobConf**
+* [HADOOP-3652](https://issues.apache.org/jira/browse/HADOOP-3652) | *Major* | 
**Remove deprecated class OutputFormatBase**
 
-Made Configuration Writable and rename the old write method to writeXml.
+Removed deprecated org.apache.hadoop.mapred.OutputFormatBase.
 
 
 ---
 
-* [HADOOP-4281](https://issues.apache.org/jira/browse/HADOOP-4281) | *Blocker* 
| **Capacity reported in some of the commands is not consistent with the Web UI 
reported data**
+* [HADOOP-2325](https://issues.apache.org/jira/browse/HADOOP-2325) | *Major* | 
**Require Java 6**
 
-Changed command "hadoop dfsadmin -report" to be consistent with Web UI for 
both Namenode and Datanode reports. "Total raw bytes" is changed to "Configured 
Capacity". "Present Capacity" is newly added to indicate the present capacity 
of the DFS. "Remaining raw bytes" is changed to "DFS Remaining". "Used raw 
bytes" is changed to "DFS Used". "% used" is changed to "DFS Used%". 
Applications that parse command output should be reviewed.
+Hadoop now requires Java 6.
 
 
 ---
 
-* [HADOOP-4227](https://issues.apache.org/jira/browse/HADOOP-4227) | *Minor* | 
**Remove the deprecated, unused class ShellCommand.**
+* [HADOOP-3695](https://issues.apache.org/jira/browse/HADOOP-3695) | *Major* | 
**[HOD] Have an ability to run multiple slaves per node**
 
-Removed the deprecated class org.apache.hadoop.fs.ShellCommand.
+Added an ability in HOD to start multiple workers (TaskTrackers and/or 
DataNodes) per node to assist testing and simulation of scale. A configuration 
variable ringmaster.workers\_per\_ring was added to specify the number of 
workers to start.
 
 
 ---
 
-* [HADOOP-4205](https://issues.apache.org/jira/browse/HADOOP-4205) | *Major* | 
**[Hive] metastore and ql to use the refactored SerDe library**
+* [HADOOP-3149](https://issues.apache.org/jira/browse/HADOOP-3149) | *Major* | 
**supporting multiple outputs for M/R jobs**
 
-Improved Hive metastore and ql to use the refactored SerDe library.
+Introduced the MultipleOutputs class so Map/Reduce jobs can write data to 
different output files. Each output can use a different OutputFormat. Output 
files are created within the job output directory. 
FileOutputFormat.getPathForCustomFile() creates a filename under the output 
directory that is named with the task ID and task type (e.g. myfile-r-00001).
 
 
 ---
 
-* [HADOOP-4190](https://issues.apache.org/jira/browse/HADOOP-4190) | *Blocker* 
| **Changes to JobHistory makes it backward incompatible**
+* [HADOOP-3684](https://issues.apache.org/jira/browse/HADOOP-3684) | *Major* | 
**The data\_join should allow the user to implement a customer cloning 
function**
 
-Changed job history format to add a dot at end of each line.
+Allowed users to override the clone function in a subclass of the 
TaggedMapOutput class.
 
 
 ---
 
-* [HADOOP-4176](https://issues.apache.org/jira/browse/HADOOP-4176) | *Major* | 
**Implement getFileChecksum(Path) in HftpFileSystem**
+* [HADOOP-3478](https://issues.apache.org/jira/browse/HADOOP-3478) | *Major* | 
**The algorithm to decide map re-execution on fetch failures can be improved**
 
-Implemented getFileChecksum(Path) in HftpFileSystem for distcp support.
+Changed reducers to fetch maps in the same order for a given host to speed up 
identification of the faulty maps; reducers still randomize the host selection 
to distribute load.
 
 
 ---
 
-* [HADOOP-4138](https://issues.apache.org/jira/browse/HADOOP-4138) | *Major* | 
**[Hive] refactor the SerDe library**
+* [HADOOP-3714](https://issues.apache.org/jira/browse/HADOOP-3714) | *Trivial* 
| **Bash tab completion support**
 
-Introduced new SerDe library for src/contrib/hive.
+Added a new contrib, bash-tab-completion, which enables bash tab completion 
for the bin/hadoop script. See the README file in the contrib directory for 
installation instructions.
 
 
 ---
 
-* [HADOOP-4117](https://issues.apache.org/jira/browse/HADOOP-4117) | *Major* | 
**Improve configurability of Hadoop EC2 instances**
+* [HADOOP-3730](https://issues.apache.org/jira/browse/HADOOP-3730) | *Major* | 
**add new JobConf constructor that disables loading default configurations**
 
-Changed scripts to pass initialization script for EC2 instances at boot time 
(as EC2 user data) rather than embedding initialization information in the EC2 
image. This change makes it easy to customize the hadoop-site.xml file for your 
cluster before launch, by editing the hadoop-ec2-init-remote.sh script, or by 
setting the environment variable USER\_DATA\_FILE in hadoop-ec2-env.sh to run a 
script of your choice.
+ Added a JobConf constructor that disables loading default configurations so 
as to take all default values from the JobTracker's configuration.
 
 
 ---
 
-* [HADOOP-4116](https://issues.apache.org/jira/browse/HADOOP-4116) | *Blocker* 
| **Balancer should provide better resource management**
+* [HADOOP-3485](https://issues.apache.org/jira/browse/HADOOP-3485) | *Minor* | 
**fix writes**
 
-Changed DataNode protocol version without impact to clients other than to 
compel use of current version of client application.
+Introduced write support for Fuse; requires Linux kernel 2.6.15 or later.
 
 
 ---
 
-* [HADOOP-4106](https://issues.apache.org/jira/browse/HADOOP-4106) | *Major* | 
**add time, permission and user attribute support to fuse-dfs**
+* [HADOOP-3412](https://issues.apache.org/jira/browse/HADOOP-3412) | *Minor* | 
**Refactor the scheduler out of the JobTracker**
 
-Added time, permission and user attribute support to libhdfs.
+Added the ability to choose among multiple schedulers, and to limit the 
number of running tasks per job.
 
 
 ---
 
-* [HADOOP-4086](https://issues.apache.org/jira/browse/HADOOP-4086) | *Major* | 
**Add limit to Hive QL**
+* [HADOOP-1700](https://issues.apache.org/jira/browse/HADOOP-1700) | *Major* | 
**Append to files in HDFS**
 
-Added LIMIT to Hive query language.
+Introduced append operation for HDFS files.
 
 
 ---
 
-* [HADOOP-4084](https://issues.apache.org/jira/browse/HADOOP-4084) | *Major* | 
**Add explain plan capabilities to Hive QL**
+* [HADOOP-3646](https://issues.apache.org/jira/browse/HADOOP-3646) | *Major* | 
**Providing bzip2 as codec**
 
-Introduced "EXPLAIN" plan for Hive.
+Introduced support for bzip2 compressed files.
 
 
 ---
 
-* [HADOOP-4018](https://issues.apache.org/jira/browse/HADOOP-4018) | *Major* | 
**limit memory usage in jobtracker**
+* [HADOOP-3796](https://issues.apache.org/jira/browse/HADOOP-3796) | *Major* | 
**fuse-dfs should take rw,ro,trashon,trashoff,protected=blah mount arguments 
rather than them being compiled in**
 
-Introduced new configuration parameter mapred.max.tasks.per.job to specify 
the maximum number of tasks per job.
+Changed Fuse configuration to use mount options.
 
 
 ---
 
-* [HADOOP-3992](https://issues.apache.org/jira/browse/HADOOP-3992) | *Major* | 
**Synthetic Load Generator for NameNode testing**
+* [HADOOP-3837](https://issues.apache.org/jira/browse/HADOOP-3837) | *Major* | 
**hadop streaming does not use progress reporting to detect hung tasks**
 
-Added a synthetic load generation facility to the test directory.
+Changed streaming tasks to adhere to task timeout value specified in the job 
configuration.
 
 
 ---
 
-* [HADOOP-3981](https://issues.apache.org/jira/browse/HADOOP-3981) | *Major* | 
**Need a distributed file checksum algorithm for HDFS**
-
-Implemented MD5-of-xxxMD5-of-yyyCRC32 which is a distributed file checksum 
algorithm for HDFS, where xxx is the number of CRCs per block and yyy is the 
number of bytes per CRC.
+* [HADOOP-3792](https://issues.apache.org/jira/browse/HADOOP-3792) | *Minor* | 
**exit code from "hadoop dfs -test ..." is wrong for Unix shell**
 
-Changed DistCp to use file checksum for comparing files if both source and 
destination FileSystem(s) support getFileChecksum(...).
+Changed exit code from hadoop.fs.FsShell -test to match the usual Unix 
convention.
 
 
 ---
 
-* [HADOOP-3970](https://issues.apache.org/jira/browse/HADOOP-3970) | *Major* | 
**Counters written to the job history cannot be recovered back**
+* [HADOOP-2302](https://issues.apache.org/jira/browse/HADOOP-2302) | *Major* | 
** Streaming should provide an option for numerical sort of keys**
 
-Added getEscapedCompactString() and fromEscapedCompactString() to 
Counters.java to represent counters as Strings and to reconstruct the counters 
from the Strings.
+Introduced numerical key comparison for streaming.
 
 
 ---
 
-* [HADOOP-3963](https://issues.apache.org/jira/browse/HADOOP-3963) | *Minor* | 
**libhdfs should never exit on its own but rather return errors to the calling 
application**
+* [HADOOP-153](https://issues.apache.org/jira/browse/HADOOP-153) | *Major* | 
**skip records that fail Task**
 
-Modified libhdfs to return NULL or error code when unrecoverable error occurs 
rather than exiting itself.
+Introduced record skipping where tasks fail on certain records. 
(org.apache.hadoop.mapred.SkipBadRecords)
 
 
 ---
 
-* [HADOOP-3941](https://issues.apache.org/jira/browse/HADOOP-3941) | *Major* | 
**Extend FileSystem API to return file-checksums/file-digests**
+* [HADOOP-3719](https://issues.apache.org/jira/browse/HADOOP-3719) | *Major* | 
**Chukwa**
 
-Added new FileSystem APIs: FileChecksum and FileSystem.getFileChecksum(Path).
+Introduced Chukwa data collection and analysis framework.
 
 
 ---
 
-* [HADOOP-3939](https://issues.apache.org/jira/browse/HADOOP-3939) | *Major* | 
**DistCp should support an option for deleting non-existing files.**
+* [HADOOP-3873](https://issues.apache.org/jira/browse/HADOOP-3873) | *Major* | 
**DistCp should have an option for limiting the number of files/bytes being 
copied**
 
-Added a new option -delete to DistCp so that files/directories that exist in 
dst but not in src will be deleted. It uses FsShell to do the deletion, so the 
trash will be used if it is enabled.
+Added two new options -filelimit \<n\> and -sizelimit \<n\> to DistCp for 
limiting the total number of files and the total size in bytes, respectively.
 
 
 ---
 
-* [HADOOP-3938](https://issues.apache.org/jira/browse/HADOOP-3938) | *Major* | 
**Quotas for disk space management**
+* [HADOOP-3889](https://issues.apache.org/jira/browse/HADOOP-3889) | *Minor* | 
**distcp: Better Error Message should be thrown when accessing source 
files/directory with no read permission**
 
-Introduced byte space quotas for directories. The count shell command was 
modified to report both name and byte quotas.
+Changed DistCp error messages when there is a RemoteException.  Changed the 
corresponding return value from -999 to -3.
 
 
 ---
 
-* [HADOOP-3930](https://issues.apache.org/jira/browse/HADOOP-3930) | *Major* | 
**Decide how to integrate scheduler info into CLI and job tracker web page**
+* [HADOOP-3585](https://issues.apache.org/jira/browse/HADOOP-3585) | *Minor* | 
**Hardware Failure Monitoring in large clusters running Hadoop/HDFS**
 
-Changed TaskScheduler to expose API for Web UI and Command Line Tool.
+Added FailMon as a contrib project for hardware failure monitoring and 
analysis, under /src/contrib/failmon. Created User Manual and Quick Start Guide.
 
 
 ---
 
-* [HADOOP-3911](https://issues.apache.org/jira/browse/HADOOP-3911) | *Minor* | 
**' -blocks ' option not being recognized**
+* [HADOOP-3549](https://issues.apache.org/jira/browse/HADOOP-3549) | *Major* | 
**meaningful errno values in libhdfs**
 
-Added a check to fsck options to make sure -files is not the first option so 
as to resolve conflicts with GenericOptionsParser.
+Improved error reporting for libhdfs so permission problems now return EACCES.
 
 
 ---
 
-* [HADOOP-3908](https://issues.apache.org/jira/browse/HADOOP-3908) | *Minor* | 
**Better error message if llibhdfs.so doesn't exist**
+* [HADOOP-3062](https://issues.apache.org/jira/browse/HADOOP-3062) | *Major* | 
**Need to capture the metrics for the network ios generate by dfs reads/writes 
and map/reduce shuffling  and break them down by racks**
 
-Improved Fuse-dfs to give a better error message if libhdfs.so doesn't exist.
+Introduced additional log records for data transfers.
 
 
 ---
 
-* [HADOOP-3889](https://issues.apache.org/jira/browse/HADOOP-3889) | *Minor* | 
**distcp: Better Error Message should be thrown when accessing source 
files/directory with no read permission**
+* [HADOOP-3854](https://issues.apache.org/jira/browse/HADOOP-3854) | *Major* | 
**org.apache.hadoop.http.HttpServer should support user configurable filter**
 
-Changed DistCp error messages when there is a RemoteException.  Changed the 
corresponding return value from -999 to -3.
+Added a configuration property hadoop.http.filter.initializers and a class 
org.apache.hadoop.http.FilterInitializer to support servlet filters. Cluster 
administrators can configure customized filters for their web site.
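
As a sketch of how such a filter initializer might be wired up: the property 
name comes from the entry above, while the class in the value is a 
hypothetical user-supplied example, not something shipped in this release.

```xml
<!-- hadoop-site.xml: register servlet filter initializers for the embedded
     HTTP server. hadoop.http.filter.initializers is the property introduced
     here; com.example.MyAuthFilterInitializer is a hypothetical example of a
     user-written FilterInitializer subclass. -->
<property>
  <name>hadoop.http.filter.initializers</name>
  <value>com.example.MyAuthFilterInitializer</value>
</property>
```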
 
 
 ---
 
-* [HADOOP-3873](https://issues.apache.org/jira/browse/HADOOP-3873) | *Major* | 
**DistCp should have an option for limiting the number of files/bytes being 
copied**
+* [HADOOP-3908](https://issues.apache.org/jira/browse/HADOOP-3908) | *Minor* | 
**Better error message if llibhdfs.so doesn't exist**
 
-Added two new options -filelimit \<n\> and -sizelimit \<n\> to DistCp for 
limiting the total number of files and the total size in bytes, respectively.
+Improved Fuse-dfs to give a better error message if libhdfs.so doesn't exist.
 
 
 ---
 
-* [HADOOP-3854](https://issues.apache.org/jira/browse/HADOOP-3854) | *Major* | 
**org.apache.hadoop.http.HttpServer should support user configurable filter**
+* [HADOOP-3746](https://issues.apache.org/jira/browse/HADOOP-3746) | *Minor* | 
**A fair sharing job scheduler**
 
-Added a configuration property hadoop.http.filter.initializers and a class 
org.apache.hadoop.http.FilterInitializer for supporting servlet filter.  
Cluster administrator could possibly configure customized filters for their web 
site.
+Introduced Fair Scheduler.
 
 
 ---
 
-* [HADOOP-3837](https://issues.apache.org/jira/browse/HADOOP-3837) | *Major* | 
**hadop streaming does not use progress reporting to detect hung tasks**
+* [HADOOP-3828](https://issues.apache.org/jira/browse/HADOOP-3828) | *Major* | 
**Write skipped records' bytes to DFS**
 
-Changed streaming tasks to adhere to task timeout value specified in the job 
configuration.
+Skipped records can optionally be written to HDFS. Refer to 
org.apache.hadoop.mapred.SkipBadRecords.setSkipOutputPath for setting the 
output path.
 
 
 ---
 
-* [HADOOP-3829](https://issues.apache.org/jira/browse/HADOOP-3829) | *Major* | 
**Narrown down skipped records based on user acceptable value**
+* [HADOOP-3939](https://issues.apache.org/jira/browse/HADOOP-3939) | *Major* | 
**DistCp should support an option for deleting non-existing files.**
 
-Introduced a new method, 
org.apache.hadoop.mapred.SkipBadRecords.setMapperMaxSkipRecords, to set the 
range of records to be skipped in the neighborhood of a failed record.
+Added a new option -delete to DistCp so that files/directories that exist in 
dst but not in src will be deleted. It uses FsShell to do the deletion, so the 
trash will be used if it is enabled.
 
 
 ---
 
-* [HADOOP-3828](https://issues.apache.org/jira/browse/HADOOP-3828) | *Major* | 
**Write skipped records' bytes to DFS**
+* [HADOOP-3601](https://issues.apache.org/jira/browse/HADOOP-3601) | *Minor* | 
**Hive as a contrib project**
 
-Skipped records can optionally be written to HDFS. Refer to 
org.apache.hadoop.mapred.SkipBadRecords.setSkipOutputPath for setting the 
output path.
+Introduced Hive, a data warehouse built on top of Hadoop that enables 
structuring Hadoop files as tables and partitions and allows users to query 
this data through a SQL-like language using a command line interface.
 
 
 ---
 
-* [HADOOP-3796](https://issues.apache.org/jira/browse/HADOOP-3796) | *Major* | 
**fuse-dfs should take rw,ro,trashon,trashoff,protected=blah mount arguments 
rather than them being compiled in**
+* [HADOOP-3498](https://issues.apache.org/jira/browse/HADOOP-3498) | *Major* | 
**File globbing alternation should be able to span path components**
 
-Changed Fuse configuration to use mount options.
+Extended file globbing alternation to cross path components. For example, 
{/a/b,/c/d} expands to a path that matches the files /a/b and /c/d.
 
 
 ---
 
-* [HADOOP-3792](https://issues.apache.org/jira/browse/HADOOP-3792) | *Minor* | 
**exit code from "hadoop dfs -test ..." is wrong for Unix shell**
+* [HADOOP-3150](https://issues.apache.org/jira/browse/HADOOP-3150) | *Major* | 
**Move task file promotion into the task**
 
-Changed exit code from hadoop.fs.FsShell -test to match the usual Unix 
convention.
+Moved task file promotion to the Task. When the task has finished, it will do 
a commit and is declared SUCCEEDED. Job cleanup is done by a separate task. 
The job is declared SUCCEEDED/FAILED after the cleanup task has finished. 
Added public classes org.apache.hadoop.mapred.JobContext, TaskAttemptContext, 
OutputCommitter and FileOutputCommitter. Added public APIs: public 
OutputCommitter getOutputCommitter() and
+public void setOutputCommitter(Class\<? extends OutputCommitter\> theClass) in 
org.apache.hadoop.mapred.JobConf
 
 
 ---
 
-* [HADOOP-3746](https://issues.apache.org/jira/browse/HADOOP-3746) | *Minor* | 
**A fair sharing job scheduler**
+* [HADOOP-3941](https://issues.apache.org/jira/browse/HADOOP-3941) | *Major* | 
**Extend FileSystem API to return file-checksums/file-digests**
 
-Introduced Fair Scheduler.
+Added new FileSystem APIs: FileChecksum and FileSystem.getFileChecksum(Path).
 
 
 ---
 
-* [HADOOP-3730](https://issues.apache.org/jira/browse/HADOOP-3730) | *Major* | 
**add new JobConf constructor that disables loading default configurations**
+* [HADOOP-3963](https://issues.apache.org/jira/browse/HADOOP-3963) | *Minor* | 
**libhdfs should never exit on its own but rather return errors to the calling 
application**
 
- Added a JobConf constructor that disables loading  default configurations so 
as to take all default values from the JobTracker's configuration.
+Modified libhdfs to return NULL or error code when unrecoverable error occurs 
rather than exiting itself.
 
 
 ---
 
-* [HADOOP-3722](https://issues.apache.org/jira/browse/HADOOP-3722) | *Minor* | 
**Provide a unified way to pass jobconf options from bin/hadoop**
+* [HADOOP-1869](https://issues.apache.org/jira/browse/HADOOP-1869) | *Major* | 
**access times of HDFS files**
 
-Changed streaming StreamJob and Submitter to implement Tool and Configurable, 
and to use GenericOptionsParser arguments -fs, -jt, -conf, -D, -libjars, 
-files, and -archives. Deprecated -jobconf, -cacheArchive, -dfs, and 
-additionalconfspec from streaming and pipes in favor of the generic options. 
Removed from streaming -config, -mapred.job.tracker, and -cluster.
+Added HDFS file access times. By default, access times will be precise to the 
most recent hour boundary. A configuration parameter dfs.access.time.precision 
(milliseconds) is used to control this precision. Setting a value of 0 will 
disable persisting access times for HDFS files.
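
A configuration sketch for the parameter named above; the one-hour value 
mirrors the stated default precision, and 0 disables the feature entirely.

```xml
<!-- hadoop-site.xml: HDFS access-time precision, in milliseconds.
     3600000 (one hour) matches the default precision described above;
     a value of 0 disables persisting access times for HDFS files. -->
<property>
  <name>dfs.access.time.precision</name>
  <value>3600000</value>
</property>
```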
 
 
 ---
 
-* [HADOOP-3719](https://issues.apache.org/jira/browse/HADOOP-3719) | *Major* | 
**Chukwa**
+* [HADOOP-3581](https://issues.apache.org/jira/browse/HADOOP-3581) | *Major* | 
**Prevent memory intensive user tasks from taking down nodes**
 
-Introduced Chukwa data collection and analysis framework.
+Added the ability to kill process trees transgressing memory limits. 
TaskTracker uses the configuration parameters introduced in HADOOP-3759. In 
addition, mapred.tasktracker.taskmemorymanager.monitoring-interval specifies 
the interval for which TT waits between cycles of monitoring tasks' memory 
usage, and mapred.tasktracker.procfsbasedprocesstree.sleeptime-before-sigkill 
specifies the time TT waits for sending a SIGKILL to a process-tree that has 
overrun memory limits, after it has been sent a SIGTERM.
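
The two TaskTracker parameters described above could be set as follows; the 
millisecond values are illustrative assumptions, not documented defaults.

```xml
<!-- hadoop-site.xml: TaskTracker memory-management knobs. Both values
     below are illustrative examples, not shipped defaults. -->
<property>
  <!-- interval (ms) the TT waits between cycles of monitoring tasks' memory -->
  <name>mapred.tasktracker.taskmemorymanager.monitoring-interval</name>
  <value>5000</value>
</property>
<property>
  <!-- time (ms) the TT waits after SIGTERM before sending SIGKILL to a
       process-tree that has overrun its memory limits -->
  <name>mapred.tasktracker.procfsbasedprocesstree.sleeptime-before-sigkill</name>
  <value>5000</value>
</property>
```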
 
 
 ---
 
-* [HADOOP-3714](https://issues.apache.org/jira/browse/HADOOP-3714) | *Trivial* 
| **Bash tab completion support**
+* [HADOOP-3970](https://issues.apache.org/jira/browse/HADOOP-3970) | *Major* | 
**Counters written to the job history cannot be recovered back**
 
-Adds a new contrib, bash-tab-completion, which enables bash tab completion for 
the bin/hadoop script. See the README file in the contrib directory for the 
installation.
+Added getEscapedCompactString() and fromEscapedCompactString() to 
Counters.java to represent counters as Strings and to reconstruct the counters 
from the Strings.
 
 
 ---
@@ -309,221 +312,218 @@ Introduced ChainMapper and the ChainReducer classes to allow composing chains of
 
 ---
 
-* [HADOOP-3695](https://issues.apache.org/jira/browse/HADOOP-3695) | *Major* | 
**[HOD] Have an ability to run multiple slaves per node**
+* [HADOOP-3445](https://issues.apache.org/jira/browse/HADOOP-3445) | *Major* | 
**Implementing core scheduler functionality in Resource Manager (V1) for 
Hadoop**
 
-Added an ability in HOD to start multiple workers (TaskTrackers and/or 
DataNodes) per node to assist testing and simulation of scale. A configuration 
variable ringmaster.workers\_per\_ring was added to specify the number of 
workers to start.
+Introduced Capacity Task Scheduler.
 
 
 ---
 
-* [HADOOP-3684](https://issues.apache.org/jira/browse/HADOOP-3684) | *Major* | 
**The data\_join should allow the user to implement a customer cloning 
function**
+* [HADOOP-3992](https://issues.apache.org/jira/browse/HADOOP-3992) | *Major* | 
**Synthetic Load Generator for NameNode testing**
 
-Allowed users to override the clone function in a subclass of the 
TaggedMapOutput class.
+Added a synthetic load generation facility to the test directory.
 
 
 ---
 
-* [HADOOP-3667](https://issues.apache.org/jira/browse/HADOOP-3667) | *Major* | 
**Remove deprecated methods in JobConf**
+* [HADOOP-3981](https://issues.apache.org/jira/browse/HADOOP-3981) | *Major* | 
**Need a distributed file checksum algorithm for HDFS**
 
-Removed the following deprecated methods from JobConf:
-      addInputPath(Path)
-      getInputPaths()
-      getMapOutputCompressionType()
-      getOutputPath()
-      getSystemDir()
-      setInputPath(Path)
-      setMapOutputCompressionType(CompressionType style)
-      setOutputPath(Path)
+Implemented MD5-of-xxxMD5-of-yyyCRC32 which is a distributed file checksum 
algorithm for HDFS, where xxx is the number of CRCs per block and yyy is the 
number of bytes per CRC.
+
+Changed DistCp to use file checksum for comparing files if both source and 
destination FileSystem(s) support getFileChecksum(...).
 
 
 ---
 
-* [HADOOP-3652](https://issues.apache.org/jira/browse/HADOOP-3652) | *Major* | 
**Remove deprecated class OutputFormatBase**
+* [HADOOP-3245](https://issues.apache.org/jira/browse/HADOOP-3245) | *Major* | 
**Provide ability to persist running jobs (extend HADOOP-1876)**
 
-Removed deprecated org.apache.hadoop.mapred.OutputFormatBase.
+Introduced recovery of jobs when JobTracker restarts. This facility is off by 
default. Introduced config parameters mapred.jobtracker.restart.recover, 
mapred.jobtracker.job.history.block.size, and 
mapred.jobtracker.job.history.buffer.size.
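
Since the facility is off by default, a minimal sketch of turning it on would 
set only the boolean flag named above; the history block/buffer size 
parameters can be left at their defaults.

```xml
<!-- hadoop-site.xml: enable recovery of running jobs across JobTracker
     restarts. Off by default; this flag alone turns the facility on. -->
<property>
  <name>mapred.jobtracker.restart.recover</name>
  <value>true</value>
</property>
```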
 
 
 ---
 
-* [HADOOP-3646](https://issues.apache.org/jira/browse/HADOOP-3646) | *Major* | 
**Providing bzip2 as codec**
+* [HADOOP-3911](https://issues.apache.org/jira/browse/HADOOP-3911) | *Minor* | 
**' -blocks ' option not being recognized**
 
-Introduced support for bzip2 compressed files.
+Added a check to fsck options to make sure -files is not the first option so 
as to resolve conflicts with GenericOptionsParser.
 
 
 ---
 
-* [HADOOP-3601](https://issues.apache.org/jira/browse/HADOOP-3601) | *Minor* | 
**Hive as a contrib project**
+* [HADOOP-4138](https://issues.apache.org/jira/browse/HADOOP-4138) | *Major* | 
**[Hive] refactor the SerDe library**
 
-Introduced Hive, a data warehouse built on top of Hadoop that enables 
structuring Hadoop files as tables and partitions and allows users to query 
this data through a SQL-like language using a command line interface.
+Introduced new SerDe library for src/contrib/hive.
 
 
 ---
 
-* [HADOOP-3595](https://issues.apache.org/jira/browse/HADOOP-3595) | *Major* | 
**Remove deprecated mapred.combine.once functionality**
+* [HADOOP-3722](https://issues.apache.org/jira/browse/HADOOP-3722) | *Minor* | 
**Provide a unified way to pass jobconf options from bin/hadoop**
 
- Removed deprecated methods for mapred.combine.once functionality.
+Changed streaming StreamJob and Submitter to implement Tool and Configurable, 
and to use GenericOptionsParser arguments -fs, -jt, -conf, -D, -libjars, 
-files, and -archives. Deprecated -jobconf, -cacheArchive, -dfs, and 
-additionalconfspec from streaming and pipes in favor of the generic options. 
Removed from streaming -config, -mapred.job.tracker, and -cluster.
 
 
 ---
 
-* [HADOOP-3585](https://issues.apache.org/jira/browse/HADOOP-3585) | *Minor* | 
**Hardware Failure Monitoring in large clusters running Hadoop/HDFS**
+* [HADOOP-4117](https://issues.apache.org/jira/browse/HADOOP-4117) | *Major* | 
**Improve configurability of Hadoop EC2 instances**
 
-Added FailMon as a contrib project for hardware failure monitoring and 
analysis, under /src/contrib/failmon. Created User Manual and Quick Start Guide.
+Changed scripts to pass initialization script for EC2 instances at boot time 
(as EC2 user data) rather than embedding initialization information in the EC2 
image. This change makes it easy to customize the hadoop-site.xml file for your 
cluster before launch, by editing the hadoop-ec2-init-remote.sh script, or by 
setting the environment variable USER\_DATA\_FILE in hadoop-ec2-env.sh to run a 
script of your choice.
 
 
 ---
 
-* [HADOOP-3581](https://issues.apache.org/jira/browse/HADOOP-3581) | *Major* | 
**Prevent memory intensive user tasks from taking down nodes**
+* [HADOOP-2411](https://issues.apache.org/jira/browse/HADOOP-2411) | *Major* | 
**Add support for larger EC2 instance types**
 
-Added the ability to kill process trees transgressing memory limits. 
TaskTracker uses the configuration parameters introduced in HADOOP-3759. In 
addition, mapred.tasktracker.taskmemorymanager.monitoring-interval specifies 
the interval for which TT waits between cycles of monitoring tasks' memory 
usage, and mapred.tasktracker.procfsbasedprocesstree.sleeptime-before-sigkill 
specifies the time TT waits for sending a SIGKILL to a process-tree that has 
overrun memory limits, after it has been sent a SIGTERM.
+Added support for c1.\* instance types and associated kernels for EC2.
 
 
 ---
 
-* [HADOOP-3549](https://issues.apache.org/jira/browse/HADOOP-3549) | *Major* | 
**meaningful errno values in libhdfs**
+* [HADOOP-3829](https://issues.apache.org/jira/browse/HADOOP-3829) | *Major* | 
**Narrown down skipped records based on user acceptable value**
 
-Improved error reporting for libhdfs so permission problems now return EACCES.
+Introduced a new method, 
org.apache.hadoop.mapred.SkipBadRecords.setMapperMaxSkipRecords, to set the 
range of records to be skipped in the neighborhood of a failed record.
 
 
 ---
 
-* [HADOOP-3498](https://issues.apache.org/jira/browse/HADOOP-3498) | *Major* | 
**File globbing alternation should be able to span path components**
+* [HADOOP-4084](https://issues.apache.org/jira/browse/HADOOP-4084) | *Major* | 
**Add explain plan capabilities to Hive QL**
 
-Extended file globbing alternation to cross path components. For example, 
{/a/b,/c/d} expands to a path that matches the files /a/b and /c/d.
+Introduced "EXPLAIN" plan for Hive.
 
 
 ---
 
-* [HADOOP-3485](https://issues.apache.org/jira/browse/HADOOP-3485) | *Minor* | 
**fix writes**
+* [HADOOP-3930](https://issues.apache.org/jira/browse/HADOOP-3930) | *Major* | 
**Decide how to integrate scheduler info into CLI and job tracker web page**
 
-Introduce write support for Fuse; requires Linux kernel 2.6.15 or better.
+Changed TaskScheduler to expose API for Web UI and Command Line Tool.
 
 
 ---
 
-* [HADOOP-3478](https://issues.apache.org/jira/browse/HADOOP-3478) | *Major* | 
**The algorithm to decide map re-execution on fetch failures can be improved**
+* [HADOOP-4106](https://issues.apache.org/jira/browse/HADOOP-4106) | *Major* | 
**add time, permission and user attribute support to fuse-dfs**
 
-Changed reducers to fetch maps in the same order for a given host to speed up 
identification of the faulty maps; reducers still randomize the host selection 
to distribute load.
+Added time, permission and user attribute support to libhdfs.
 
 
 ---
 
-* [HADOOP-3445](https://issues.apache.org/jira/browse/HADOOP-3445) | *Major* | 
**Implementing core scheduler functionality in Resource Manager (V1) for 
Hadoop**
+* [HADOOP-4176](https://issues.apache.org/jira/browse/HADOOP-4176) | *Major* | 
**Implement getFileChecksum(Path) in HftpFileSystem**
 
-Introduced Capacity Task Scheduler.
+Implemented getFileChecksum(Path) in HftpFileSystem for distcp support.
 
 
 ---
 
-* [HADOOP-3412](https://issues.apache.org/jira/browse/HADOOP-3412) | *Minor* | 
**Refactor the scheduler out of the JobTracker**
+* [HADOOP-249](https://issues.apache.org/jira/browse/HADOOP-249) | *Major* | 
**Improving Map -\> Reduce performance and Task JVM reuse**
 
-Added the ability to choose among multiple schedulers, and to limit the 
number of running tasks per job.
+Enabled task JVMs to be reused via the job config 
mapred.job.reuse.jvm.num.tasks.
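
As an illustrative sketch (the jar name, class, and paths are hypothetical, and this assumes the job parses generic options), JVM reuse could be enabled per job via `-D`:

```shell
# Reuse each task JVM for up to 10 tasks of the same job;
# 1 (the default) disables reuse, and -1 is assumed to mean no limit.
hadoop jar my-job.jar MyJob \
  -D mapred.job.reuse.jvm.num.tasks=10 \
  input output
```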
 
 
 ---
 
-* [HADOOP-3245](https://issues.apache.org/jira/browse/HADOOP-3245) | *Major* | 
**Provide ability to persist running jobs (extend HADOOP-1876)**
+* [HADOOP-2816](https://issues.apache.org/jira/browse/HADOOP-2816) | *Major* | 
**Cluster summary at name node web has confusing report for space utilization**
 
-Introduced recovery of jobs when JobTracker restarts. This facility is off by 
default. Introduced config parameters mapred.jobtracker.restart.recover, 
mapred.jobtracker.job.history.block.size, and 
mapred.jobtracker.job.history.buffer.size.
+Improved space reporting for NameNode Web UI. Applications that parse the Web 
UI output should be reviewed.
 
 
 ---
 
-* [HADOOP-3150](https://issues.apache.org/jira/browse/HADOOP-3150) | *Major* | 
**Move task file promotion into the task**
+* [HADOOP-4227](https://issues.apache.org/jira/browse/HADOOP-4227) | *Minor* | 
**Remove the deprecated, unused class ShellCommand.**
 
-Moved task file promotion to the Task. When the task has finished, it will do 
a commit and is declared SUCCEDED. Job cleanup is done by a separate task. Job 
is declared SUCCEDED/FAILED after the cleanup task has finished. Added public 
classes org.apache.hadoop.mapred.JobContext, TaskAttemptContext, 
OutputCommitter and FileOutputCommiitter. Added public APIs:   public 
OutputCommitter getOutputCommitter() and 
-public void setOutputCommitter(Class\<? extends OutputCommitter\> theClass) in 
org.apache.hadoop.mapred.JobConf
+Removed the deprecated class org.apache.hadoop.fs.ShellCommand.
 
 
 ---
 
-* [HADOOP-3149](https://issues.apache.org/jira/browse/HADOOP-3149) | *Major* | 
**supporting multiple outputs for M/R jobs**
+* [HADOOP-3019](https://issues.apache.org/jira/browse/HADOOP-3019) | *Major* | 
**want input sampler & sorted partitioner**
 
-Introduced MultipleOutputs class so Map/Reduce jobs can write data to 
different output files. Each output can use a different OutputFormat. 
Outpufiles are created within the job output directory. 
FileOutputFormat.getPathForCustomFile() creates a filename under the outputdir 
that is named with the task ID and task type (i.e. myfile-r-00001).
+Added a partitioner that effects a total order of output data, and an input 
sampler for generating the partition keyset for TotalOrderPartitioner for when 
the map's input keytype and distribution approximates its output.
 
 
 ---
 
-* [HADOOP-3062](https://issues.apache.org/jira/browse/HADOOP-3062) | *Major* | 
**Need to capture the metrics for the network ios generate by dfs reads/writes 
and map/reduce shuffling  and break them down by racks**
+* [HADOOP-3938](https://issues.apache.org/jira/browse/HADOOP-3938) | *Major* | 
**Quotas for disk space management**
 
-Introduced additional log records for data transfers.
+Introduced byte space quotas for directories. The count shell command was 
modified to report both name and byte quotas.
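
As an illustration (the path and quota size are hypothetical, and the dfsadmin subcommands are the ones assumed to be introduced by this change), the new space quotas can be managed and inspected like so:

```shell
# Set a 1-terabyte space quota on a directory.
hadoop dfsadmin -setSpaceQuota 1t /user/alice
# With -q, count reports the name quota and byte (space) quota
# alongside the usual directory/file/byte counts.
hadoop fs -count -q /user/alice
# Remove the space quota when done.
hadoop dfsadmin -clrSpaceQuota /user/alice
```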
 
 
 ---
 
-* [HADOOP-3019](https://issues.apache.org/jira/browse/HADOOP-3019) | *Major* | 
**want input sampler & sorted partitioner**
+* [HADOOP-4205](https://issues.apache.org/jira/browse/HADOOP-4205) | *Major* | 
**[Hive] metastore and ql to use the refactored SerDe library**
 
-Added a partitioner that effects a total order of output data, and an input 
sampler for generating the partition keyset for TotalOrderPartitioner for when 
the map's input keytype and distribution approximates its output.
+Improved Hive metastore and ql to use the refactored SerDe library.
 
 
 ---
 
-* [HADOOP-2816](https://issues.apache.org/jira/browse/HADOOP-2816) | *Major* | 
**Cluster summary at name node web has confusing report for space utilization**
+* [HADOOP-4116](https://issues.apache.org/jira/browse/HADOOP-4116) | *Blocker* 
| **Balancer should provide better resource management**
 
-Improved space reporting for NameNode Web UI. Applications that parse the Web 
UI output should be reviewed.
+Changed the DataNode protocol version; there is no impact to clients other 
than requiring use of the current version of the client application.
 
 
 ---
 
-* [HADOOP-2664](https://issues.apache.org/jira/browse/HADOOP-2664) | *Major* | 
**lzop-compatible CompresionCodec**
+* [HADOOP-4190](https://issues.apache.org/jira/browse/HADOOP-4190) | *Blocker* 
| **Changes to JobHistory makes it backward incompatible**
 
-Introduced LZOP codec.
+Changed the job history format to add a dot at the end of each line.
 
 
 ---
 
-* [HADOOP-2411](https://issues.apache.org/jira/browse/HADOOP-2411) | *Major* | 
**Add support for larger EC2 instance types**
+* [HADOOP-4293](https://issues.apache.org/jira/browse/HADOOP-4293) | *Major* | 
**Remove WritableJobConf**
 
-Added support for c1.\* instance types and associated kernels for EC2.
+Made Configuration Writable and renamed the old write method to writeXml.
 
 
 ---
 
-* [HADOOP-2325](https://issues.apache.org/jira/browse/HADOOP-2325) | *Major* | 
**Require Java 6**
+* [HADOOP-4281](https://issues.apache.org/jira/browse/HADOOP-4281) | *Blocker* 
| **Capacity reported in some of the commands is not consistent with the Web UI 
reported data**
 
-Hadoop now requires Java 6.
+Changed command "hadoop dfsadmin -report" to be consistent with Web UI for 
both Namenode and Datanode reports. "Total raw bytes" is changed to "Configured 
Capacity". "Present Capacity" is newly added to indicate the present capacity 
of the DFS. "Remaining raw bytes" is changed to "DFS Remaining". "Used raw 
bytes" is changed to "DFS Used". "% used" is changed to "DFS Used%". 
Applications that parse command output should be reviewed.
 
 
 ---
 
-* [HADOOP-2302](https://issues.apache.org/jira/browse/HADOOP-2302) | *Major* | 
** Streaming should provide an option for numerical sort of keys**
+* [HADOOP-4018](https://issues.apache.org/jira/browse/HADOOP-4018) | *Major* | 
**limit memory usage in jobtracker**
 
-Introduced numerical key comparison for streaming.
+Introduced a new configuration parameter, mapred.max.tasks.per.job, to 
specify the maximum number of tasks per job.
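
As a sketch (the value is illustrative, and -1 is assumed to be the no-limit default), this is a cluster-side setting; the fragment below, printed via a heredoc, belongs inside the \<configuration\> element of the JobTracker's hadoop-site.xml:

```shell
# Print the property stanza; cap any single job at 100000 tasks.
cat <<'EOF'
<property>
  <name>mapred.max.tasks.per.job</name>
  <value>100000</value>
</property>
EOF
```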
 
 
 ---
 
-* [HADOOP-1869](https://issues.apache.org/jira/browse/HADOOP-1869) | *Major* | 
**access times of HDFS files**
+* [HADOOP-4430](https://issues.apache.org/jira/browse/HADOOP-4430) | *Blocker* 
| **Namenode Web UI capacity report is inconsistent with Balancer**
 
-Added HDFS file access times. By default, access times will be precise to the 
most recent hour boundary. A configuration parameter dfs.access.time.precision 
(milliseconds) is used to control this precision. Setting a value of 0 will 
disable persisting access times for HDFS files.
+Changed reporting in the NameNode Web UI to more closely reflect the behavior 
of the re-balancer. Removed no longer used config parameter dfs.datanode.du.pct 
from hadoop-default.xml.
 
 
 ---
 
-* [HADOOP-1823](https://issues.apache.org/jira/browse/HADOOP-1823) | *Major* | 
**want InputFormat for bzip2 files**
+* [HADOOP-4086](https://issues.apache.org/jira/browse/HADOOP-4086) | *Major* | 
**Add limit to Hive QL**
 
-bzip2 provided as codec in 0.19.0 
https://issues.apache.org/jira/browse/HADOOP-3646
+Added LIMIT to Hive query language.
 
 
 ---
 
-* [HADOOP-1700](https://issues.apache.org/jira/browse/HADOOP-1700) | *Major* | 
**Append to files in HDFS**
+* [HADOOP-4466](https://issues.apache.org/jira/browse/HADOOP-4466) | *Blocker* 
| **SequenceFileOutputFormat is coupled to WritableComparable and Writable**
 
-Introduced append operation for HDFS files.
+Ensure that SequenceFileOutputFormat isn't tied to Writables and can be used 
with other Serialization frameworks.
 
 
 ---
 
-* [HADOOP-249](https://issues.apache.org/jira/browse/HADOOP-249) | *Major* | 
**Improving Map -\> Reduce performance and Task JVM reuse**
+* [HADOOP-4433](https://issues.apache.org/jira/browse/HADOOP-4433) | *Major* | 
**Improve data loader for collecting metrics and log files from hadoop and 
system**
 
-Enabled task JVMs to be reused via the job config 
mapred.job.reuse.jvm.num.tasks.
+- Added startup and shutdown script
+- Added torque metrics data loader
+- Improved handling of Exec Plugin
+- Added Test cases for File Tailing Adaptors
+- Added Test cases for Start streaming at specific offset
 
 
 ---
 
-* [HADOOP-153](https://issues.apache.org/jira/browse/HADOOP-153) | *Major* | 
**skip records that fail Task**
+* [HADOOP-1823](https://issues.apache.org/jira/browse/HADOOP-1823) | *Major* | 
**want InputFormat for bzip2 files**
 
-Introduced record skipping where tasks fail on certain records. 
(org.apache.hadoop.mapred.SkipBadRecords)
+bzip2 is provided as a codec in 0.19.0; see 
https://issues.apache.org/jira/browse/HADOOP-3646
 
 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.1/CHANGES.0.19.1.md
----------------------------------------------------------------------
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.1/CHANGES.0.19.1.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.1/CHANGES.0.19.1.md
index fcc53f1..c9751c9 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.1/CHANGES.0.19.1.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.1/CHANGES.0.19.1.md
@@ -24,15 +24,9 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-4061](https://issues.apache.org/jira/browse/HADOOP-4061) | Large 
number of decommission freezes the Namenode |  Major | . | Koji Noguchi | Tsz 
Wo Nicholas Sze |
 | [HADOOP-5225](https://issues.apache.org/jira/browse/HADOOP-5225) | 
workaround for tmp file handling on DataNodes in 0.19.1 (HADOOP-4663) |  
Blocker | . | Nigel Daley | Raghu Angadi |
 | [HADOOP-5224](https://issues.apache.org/jira/browse/HADOOP-5224) | Disable 
append |  Blocker | . | Nigel Daley |  |
-| [HADOOP-4061](https://issues.apache.org/jira/browse/HADOOP-4061) | Large 
number of decommission freezes the Namenode |  Major | . | Koji Noguchi | Tsz 
Wo Nicholas Sze |
-
-
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
 
 
 ### NEW FEATURES:
@@ -46,73 +40,55 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |:---- |:---- | :--- |:---- |:---- |:---- |
-| [HADOOP-5127](https://issues.apache.org/jira/browse/HADOOP-5127) | 
FSDirectory should not have public methods. |  Major | . | Konstantin Shvachko 
| Jakob Homan |
-| [HADOOP-5086](https://issues.apache.org/jira/browse/HADOOP-5086) | Trash URI 
semantics can be relaxed |  Minor | fs | Chris Douglas | Chris Douglas |
 | [HADOOP-4739](https://issues.apache.org/jira/browse/HADOOP-4739) | Minor 
enhancements to some sections of the Map/Reduce tutorial |  Trivial | . | Vivek 
Ratan | Vivek Ratan |
 | [HADOOP-3894](https://issues.apache.org/jira/browse/HADOOP-3894) | DFSClient 
should log errors better, and provide better diagnostics |  Trivial | . | Steve 
Loughran | Steve Loughran |
+| [HADOOP-5086](https://issues.apache.org/jira/browse/HADOOP-5086) | Trash URI 
semantics can be relaxed |  Minor | fs | Chris Douglas | Chris Douglas |
+| [HADOOP-5127](https://issues.apache.org/jira/browse/HADOOP-5127) | 
FSDirectory should not have public methods. |  Major | . | Konstantin Shvachko 
| Jakob Homan |
 
 
 ### BUG FIXES:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |:---- |:---- | :--- |:---- |:---- |:---- |
-| [HADOOP-5665](https://issues.apache.org/jira/browse/HADOOP-5665) | Namenode 
could not be formatted because the "whoami" program could not be run. |  Major 
| . | Evelyn Sylvia |  |
-| [HADOOP-5268](https://issues.apache.org/jira/browse/HADOOP-5268) | Using 
MultipleOutputFormat and setting reducers to 0 causes 
org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException and job to fail |  
Major | . | Thibaut |  |
-| [HADOOP-5193](https://issues.apache.org/jira/browse/HADOOP-5193) | 
SecondaryNameNode does not rollImage because of incorrect calculation of edits 
modification time. |  Major | . | Konstantin Shvachko | Konstantin Shvachko |
-| [HADOOP-5192](https://issues.apache.org/jira/browse/HADOOP-5192) | Block 
reciever should not remove a finalized block when block replication fails |  
Blocker | . | Hairong Kuang | Hairong Kuang |
-| [HADOOP-5166](https://issues.apache.org/jira/browse/HADOOP-5166) | 
JobTracker fails to restart if recovery and ACLs are enabled |  Blocker | . | 
Karam Singh | Amar Kamat |
-| [HADOOP-5161](https://issues.apache.org/jira/browse/HADOOP-5161) | Accepted 
sockets do not get placed in DataXceiverServer#childSockets |  Major | . | 
Hairong Kuang | Hairong Kuang |
-| [HADOOP-5156](https://issues.apache.org/jira/browse/HADOOP-5156) | 
TestHeartbeatHandling uses MiniDFSCluster.getNamesystem() which does not exist 
in branch 0.20 |  Major | test | Konstantin Shvachko | Hairong Kuang |
-| [HADOOP-5134](https://issues.apache.org/jira/browse/HADOOP-5134) | 
FSNamesystem#commitBlockSynchronization adds under-construction block locations 
to blocksMap |  Blocker | . | Hairong Kuang | dhruba borthakur |
-| [HADOOP-5067](https://issues.apache.org/jira/browse/HADOOP-5067) | 
Failed/Killed attempts column in jobdetails.jsp does not show the number of 
failed/killed attempts correctly |  Major | . | Amareshwari Sriramadasu | 
Amareshwari Sriramadasu |
-| [HADOOP-5009](https://issues.apache.org/jira/browse/HADOOP-5009) | 
DataNode#shutdown sometimes leaves data block scanner verification log unclosed 
|  Major | . | Hairong Kuang | Hairong Kuang |
-| [HADOOP-5008](https://issues.apache.org/jira/browse/HADOOP-5008) | 
TestReplication#testPendingReplicationRetry leaves an opened fd unclosed |  
Major | test | Hairong Kuang | Hairong Kuang |
-| [HADOOP-5002](https://issues.apache.org/jira/browse/HADOOP-5002) | 2 core 
tests TestFileOutputFormat and TestHarFileSystem are failing in branch 19 |  
Blocker | . | Ravi Gummadi | Amareshwari Sriramadasu |
+| [HADOOP-4616](https://issues.apache.org/jira/browse/HADOOP-4616) | assertion 
makes fuse-dfs exit when reading incomplete data |  Blocker | . | Marc-Olivier 
Fleury | Pete Wyckoff |
+| [HADOOP-4697](https://issues.apache.org/jira/browse/HADOOP-4697) | 
KFS::getBlockLocations() fails with files having multiple blocks |  Major | fs 
| Lohit Vijayarenu | Sriram Rao |
+| [HADOOP-4720](https://issues.apache.org/jira/browse/HADOOP-4720) | docs/api 
does not contain the hdfs directory after building |  Major | build | Ramya 
Sunil |  |
+| [HADOOP-4635](https://issues.apache.org/jira/browse/HADOOP-4635) | Memory 
leak ? |  Blocker | . | Marc-Olivier Fleury | Pete Wyckoff |
+| [HADOOP-4420](https://issues.apache.org/jira/browse/HADOOP-4420) | 
JobTracker.killJob() doesn't check for the JobID being valid |  Minor | . | 
Steve Loughran | Aaron Kimball |
+| [HADOOP-4632](https://issues.apache.org/jira/browse/HADOOP-4632) | 
TestJobHistoryVersion should not create directory in current dir. |  Major | . 
| Amareshwari Sriramadasu | Amar Kamat |
+| [HADOOP-4508](https://issues.apache.org/jira/browse/HADOOP-4508) | 
FSDataOutputStream.getPos() == 0 when appending to existing file and should be 
file length |  Major | fs | Pete Wyckoff | dhruba borthakur |
+| [HADOOP-4727](https://issues.apache.org/jira/browse/HADOOP-4727) | Groups do 
not work for fuse-dfs out of the box on 0.19.0 |  Blocker | . | Brian Bockelman 
| Brian Bockelman |
+| [HADOOP-4731](https://issues.apache.org/jira/browse/HADOOP-4731) | Job is 
not removed from the waiting jobs queue upon completion. |  Major | . | Hemanth 
Yamijala | Amar Kamat |
+| [HADOOP-4836](https://issues.apache.org/jira/browse/HADOOP-4836) | Minor 
typos in documentation and comments |  Trivial | documentation | Jordà Polo | 
Jordà Polo |
+| [HADOOP-4821](https://issues.apache.org/jira/browse/HADOOP-4821) | Usage 
description in the Quotas guide documentations are incorrect |  Minor | 
documentation | Boris Shkolnik | Boris Shkolnik |
+| [HADOOP-4797](https://issues.apache.org/jira/browse/HADOOP-4797) | RPC 
Server can leave a lot of direct buffers |  Blocker | ipc | Raghu Angadi | 
Raghu Angadi |
+| [HADOOP-4924](https://issues.apache.org/jira/browse/HADOOP-4924) | Race 
condition in re-init of TaskTracker |  Blocker | . | Devaraj Das | Devaraj Das |
+| [HADOOP-4847](https://issues.apache.org/jira/browse/HADOOP-4847) | 
OutputCommitter is loaded in the TaskTracker in localizeConfiguration |  
Blocker | . | Owen O'Malley | Amareshwari Sriramadasu |
+| [HADOOP-4966](https://issues.apache.org/jira/browse/HADOOP-4966) | Setup 
tasks are not removed from JobTracker's taskIdToTIPMap even after the job 
completes |  Major | . | Amar Kamat | Amareshwari Sriramadasu |
 | [HADOOP-4992](https://issues.apache.org/jira/browse/HADOOP-4992) | 
TestCustomOutputCommitter fails on hadoop-0.19 |  Blocker | . | Amar Kamat | 
Amareshwari Sriramadasu |
-| [HADOOP-4983](https://issues.apache.org/jira/browse/HADOOP-4983) | Job 
counters sometimes go down as tasks run without task failures |  Critical | . | 
Owen O'Malley | Amareshwari Sriramadasu |
 | [HADOOP-4982](https://issues.apache.org/jira/browse/HADOOP-4982) | TestFsck 
does not run in Eclipse. |  Major | test | Konstantin Shvachko | Konstantin 
Shvachko |
-| [HADOOP-4967](https://issues.apache.org/jira/browse/HADOOP-4967) | 
Inconsistent state in JVM manager |  Major | . | Amareshwari Sriramadasu | 
Devaraj Das |
-| [HADOOP-4966](https://issues.apache.org/jira/browse/HADOOP-4966) | Setup 
tasks are not removed from JobTracker's taskIdToTIPMap even after the job 
completes |  Major | . | Amar Kamat | Amareshwari Sriramadasu |
-| [HADOOP-4965](https://issues.apache.org/jira/browse/HADOOP-4965) | DFSClient 
should log instead of printing into std err. |  Major | test | Konstantin 
Shvachko | Konstantin Shvachko |
-| [HADOOP-4955](https://issues.apache.org/jira/browse/HADOOP-4955) | Make 
DBOutputFormat us column names from setOutput(...) |  Major | . | Kevin 
Peterson | Kevin Peterson |
+| [HADOOP-5008](https://issues.apache.org/jira/browse/HADOOP-5008) | 
TestReplication#testPendingReplicationRetry leaves an opened fd unclosed |  
Major | test | Hairong Kuang | Hairong Kuang |
 | [HADOOP-4943](https://issues.apache.org/jira/browse/HADOOP-4943) | fair 
share scheduler does not utilize all slots if the task trackers are configured 
heterogeneously |  Major | . | Zheng Shao | Zheng Shao |
-| [HADOOP-4924](https://issues.apache.org/jira/browse/HADOOP-4924) | Race 
condition in re-init of TaskTracker |  Blocker | . | Devaraj Das | Devaraj Das |
-| [HADOOP-4918](https://issues.apache.org/jira/browse/HADOOP-4918) | Fix bzip2 
work with SequenceFile |  Major | io | Zheng Shao | Zheng Shao |
 | [HADOOP-4906](https://issues.apache.org/jira/browse/HADOOP-4906) | 
TaskTracker running out of memory after running several tasks |  Blocker | . | 
Arun C Murthy | Sharad Agarwal |
+| [HADOOP-4918](https://issues.apache.org/jira/browse/HADOOP-4918) | Fix bzip2 
work with SequenceFile |  Major | io | Zheng Shao | Zheng Shao |
+| [HADOOP-4965](https://issues.apache.org/jira/browse/HADOOP-4965) | DFSClient 
should log instead of printing into std err. |  Major | test | Konstantin 
Shvachko | Konstantin Shvachko |
+| [HADOOP-4967](https://issues.apache.org/jira/browse/HADOOP-4967) | 
Inconsistent state in JVM manager |  Major | . | Amareshwari Sriramadasu | 
Devaraj Das |
+| [HADOOP-5002](https://issues.apache.org/jira/browse/HADOOP-5002) | 2 core 
tests TestFileOutputFormat and TestHarFileSystem are failing in branch 19 |  
Blocker | . | Ravi Gummadi | Amareshwari Sriramadasu |
+| [HADOOP-4983](https://issues.apache.org/jira/browse/HADOOP-4983) | Job 
counters sometimes go down as tasks run without task failures |  Critical | . | 
Owen O'Malley | Amareshwari Sriramadasu |
+| [HADOOP-5009](https://issues.apache.org/jira/browse/HADOOP-5009) | 
DataNode#shutdown sometimes leaves data block scanner verification log unclosed 
|  Major | . | Hairong Kuang | Hairong Kuang |
+| [HADOOP-4955](https://issues.apache.org/jira/browse/HADOOP-4955) | Make 
DBOutputFormat use column names from setOutput(...) |  Major | . | Kevin 
Peterson | Kevin Peterson |
 | [HADOOP-4862](https://issues.apache.org/jira/browse/HADOOP-4862) | A 
spurious IOException log on DataNode is not completely removed |  Blocker | . | 
Raghu Angadi | Raghu Angadi |
-| [HADOOP-4847](https://issues.apache.org/jira/browse/HADOOP-4847) | 
OutputCommitter is loaded in the TaskTracker in localizeConfiguration |  
Blocker | . | Owen O'Malley | Amareshwari Sriramadasu |
-| [HADOOP-4836](https://issues.apache.org/jira/browse/HADOOP-4836) | Minor 
typos in documentation and comments |  Trivial | documentation | Jordà Polo | 
Jordà Polo |
-| [HADOOP-4821](https://issues.apache.org/jira/browse/HADOOP-4821) | Usage 
description in the Quotas guide documentations are incorrect |  Minor | 
documentation | Boris Shkolnik | Boris Shkolnik |
-| [HADOOP-4797](https://issues.apache.org/jira/browse/HADOOP-4797) | RPC 
Server can leave a lot of direct buffers |  Blocker | ipc | Raghu Angadi | 
Raghu Angadi |
-| [HADOOP-4760](https://issues.apache.org/jira/browse/HADOOP-4760) | HDFS 
streams should not throw exceptions when closed twice |  Major | fs, fs/s3 | 
Alejandro Abdelnur | Enis Soztutar |
+| [HADOOP-5156](https://issues.apache.org/jira/browse/HADOOP-5156) | 
TestHeartbeatHandling uses MiniDFSCluster.getNamesystem() which does not exist 
in branch 0.20 |  Major | test | Konstantin Shvachko | Hairong Kuang |
 | [HADOOP-4759](https://issues.apache.org/jira/browse/HADOOP-4759) | 
HADOOP-4654 to be fixed for branches \>= 0.19 |  Major | . | Amareshwari 
Sriramadasu | Amareshwari Sriramadasu |
-| [HADOOP-4731](https://issues.apache.org/jira/browse/HADOOP-4731) | Job is 
not removed from the waiting jobs queue upon completion. |  Major | . | Hemanth 
Yamijala | Amar Kamat |
-| [HADOOP-4727](https://issues.apache.org/jira/browse/HADOOP-4727) | Groups do 
not work for fuse-dfs out of the box on 0.19.0 |  Blocker | . | Brian Bockelman 
| Brian Bockelman |
-| [HADOOP-4720](https://issues.apache.org/jira/browse/HADOOP-4720) | docs/api 
does not contain the hdfs directory after building |  Major | build | Ramya 
Sunil |  |
-| [HADOOP-4697](https://issues.apache.org/jira/browse/HADOOP-4697) | 
KFS::getBlockLocations() fails with files having multiple blocks |  Major | fs 
| Lohit Vijayarenu | Sriram Rao |
-| [HADOOP-4635](https://issues.apache.org/jira/browse/HADOOP-4635) | Memory 
leak ? |  Blocker | . | Marc-Olivier Fleury | Pete Wyckoff |
-| [HADOOP-4632](https://issues.apache.org/jira/browse/HADOOP-4632) | 
TestJobHistoryVersion should not create directory in current dir. |  Major | . 
| Amareshwari Sriramadasu | Amar Kamat |
-| [HADOOP-4616](https://issues.apache.org/jira/browse/HADOOP-4616) | assertion 
makes fuse-dfs exit when reading incomplete data |  Blocker | . | Marc-Olivier 
Fleury | Pete Wyckoff |
-| [HADOOP-4508](https://issues.apache.org/jira/browse/HADOOP-4508) | 
FSDataOutputStream.getPos() == 0when appending to existing file and should be 
file length |  Major | fs | Pete Wyckoff | dhruba borthakur |
+| [HADOOP-5161](https://issues.apache.org/jira/browse/HADOOP-5161) | Accepted 
sockets do not get placed in DataXceiverServer#childSockets |  Major | . | 
Hairong Kuang | Hairong Kuang |
+| [HADOOP-5193](https://issues.apache.org/jira/browse/HADOOP-5193) | 
SecondaryNameNode does not rollImage because of incorrect calculation of edits 
modification time. |  Major | . | Konstantin Shvachko | Konstantin Shvachko |
 | [HADOOP-4494](https://issues.apache.org/jira/browse/HADOOP-4494) | libhdfs 
does not call FileSystem.append when O\_APPEND passed to hdfsOpenFile |  Major 
| . | Pete Wyckoff | Pete Wyckoff |
-| [HADOOP-4420](https://issues.apache.org/jira/browse/HADOOP-4420) | 
JobTracker.killJob() doesn't check for the JobID being valid |  Minor | . | 
Steve Loughran | Aaron Kimball |
-
-
-### TESTS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### SUB-TASKS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### OTHER:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-5166](https://issues.apache.org/jira/browse/HADOOP-5166) | 
JobTracker fails to restart if recovery and ACLs are enabled |  Blocker | . | 
Karam Singh | Amar Kamat |
+| [HADOOP-5192](https://issues.apache.org/jira/browse/HADOOP-5192) | Block 
receiver should not remove a finalized block when block replication fails |  
Blocker | . | Hairong Kuang | Hairong Kuang |
+| [HADOOP-5067](https://issues.apache.org/jira/browse/HADOOP-5067) | 
Failed/Killed attempts column in jobdetails.jsp does not show the number of 
failed/killed attempts correctly |  Major | . | Amareshwari Sriramadasu | 
Amareshwari Sriramadasu |
+| [HADOOP-4760](https://issues.apache.org/jira/browse/HADOOP-4760) | HDFS 
streams should not throw exceptions when closed twice |  Major | fs, fs/s3 | 
Alejandro Abdelnur | Enis Soztutar |
+| [HADOOP-5134](https://issues.apache.org/jira/browse/HADOOP-5134) | 
FSNamesystem#commitBlockSynchronization adds under-construction block locations 
to blocksMap |  Blocker | . | Hairong Kuang | dhruba borthakur |
+| [HADOOP-5268](https://issues.apache.org/jira/browse/HADOOP-5268) | Using 
MultipleOutputFormat and setting reducers to 0 causes 
org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException and job to fail |  
Major | . | Thibaut |  |
+| [HADOOP-5665](https://issues.apache.org/jira/browse/HADOOP-5665) | Namenode 
could not be formatted because the "whoami" program could not be run. |  Major 
| . | Evelyn Sylvia |  |
 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.1/RELEASENOTES.0.19.1.md
----------------------------------------------------------------------
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.1/RELEASENOTES.0.19.1.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.1/RELEASENOTES.0.19.1.md
index f236710..a40576c 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.1/RELEASENOTES.0.19.1.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.1/RELEASENOTES.0.19.1.md
@@ -23,44 +23,44 @@ These release notes cover new developer and user-facing 
incompatibilities, impor
 
 ---
 
-* [HADOOP-5225](https://issues.apache.org/jira/browse/HADOOP-5225) | *Blocker* 
| **workaround for tmp file handling on DataNodes in 0.19.1 (HADOOP-4663)**
+* [HADOOP-4061](https://issues.apache.org/jira/browse/HADOOP-4061) | *Major* | 
**Large number of decommission freezes the Namenode**
 
-Work around for tmp file handling. sync() does not work as a result.
+Added a new conf property, dfs.namenode.decommission.nodes.per.interval, so 
that the NameNode checks the decommission status of x nodes every y seconds, 
where x is the value of dfs.namenode.decommission.nodes.per.interval and y is 
the value of dfs.namenode.decommission.interval.
 
 
 ---
 
-* [HADOOP-5224](https://issues.apache.org/jira/browse/HADOOP-5224) | *Blocker* 
| **Disable append**
+* [HADOOP-4635](https://issues.apache.org/jira/browse/HADOOP-4635) | *Blocker* 
| **Memory leak ?**
 
-HDFS append() is disabled. It throws UnsupportedOperationException.
+Fixed a memory leak of user/group information in fuse-dfs.
 
 
 ---
 
-* [HADOOP-5034](https://issues.apache.org/jira/browse/HADOOP-5034) | *Major* | 
**NameNode should send both replication and deletion requests to DataNode in 
one reply to a heartbeat**
+* [HADOOP-4797](https://issues.apache.org/jira/browse/HADOOP-4797) | *Blocker* 
| **RPC Server can leave a lot of direct buffers**
 
-This patch changes the DatanodeProtocoal version number from 18 to 19. The 
patch allows NameNode to send both block replication and deletion request to a 
DataNode in response to a heartbeat.
+Improved how the RPC server reads and writes large buffers; avoids a soft 
leak of direct buffers and excess copies in the NIO layer.
 
 
 ---
 
-* [HADOOP-5002](https://issues.apache.org/jira/browse/HADOOP-5002) | *Blocker* 
| **2 core tests TestFileOutputFormat and TestHarFileSystem are failing in 
branch 19**
+* [HADOOP-4943](https://issues.apache.org/jira/browse/HADOOP-4943) | *Major* | 
**fair share scheduler does not utilize all slots if the task trackers are 
configured heterogeneously**
 
-This patch solves the null pointer exception issue in the 2 core tests 
TestFileOutputFormat and TestHarFileSystem in branch 19.
+Fixed the fair share scheduler to utilize all slots when the task trackers 
are configured heterogeneously.
 
 
 ---
 
-* [HADOOP-4943](https://issues.apache.org/jira/browse/HADOOP-4943) | *Major* | 
**fair share scheduler does not utilize all slots if the task trackers are 
configured heterogeneously**
+* [HADOOP-4906](https://issues.apache.org/jira/browse/HADOOP-4906) | *Blocker* 
| **TaskTracker running out of memory after running several tasks**
 
-HADOOP-4943: Fixed fair share scheduler to utilize all slots when the task 
trackers are configured heterogeneously.
+Fixed the TaskTracker running out of memory by sharing job-level jobconf 
properties across tasks of the same job; earlier, a separate instance was held 
for each task.
 
 
 ---
 
-* [HADOOP-4906](https://issues.apache.org/jira/browse/HADOOP-4906) | *Blocker* 
| **TaskTracker running out of memory after running several tasks**
+* [HADOOP-5002](https://issues.apache.org/jira/browse/HADOOP-5002) | *Blocker* 
| **2 core tests TestFileOutputFormat and TestHarFileSystem are failing in 
branch 19**
 
-Fix the tasktracker for OOM exception by sharing the jobconf properties across 
tasks of the same job. Earlier a new instance was held for each task. With this 
fix, the job level configuration properties are shared across tasks of the same 
job.
+Fixed the null pointer exception in the two core tests TestFileOutputFormat 
and TestHarFileSystem in branch 19.
 
 
 ---
@@ -72,30 +72,30 @@ Minor : HADOOP-3678 did not remove all the cases of 
spurious IOExceptions logged
 
 ---
 
-* [HADOOP-4797](https://issues.apache.org/jira/browse/HADOOP-4797) | *Blocker* 
| **RPC Server can leave a lot of direct buffers**
+* [HADOOP-5034](https://issues.apache.org/jira/browse/HADOOP-5034) | *Major* | 
**NameNode should send both replication and deletion requests to DataNode in 
one reply to a heartbeat**
 
-Improve how RPC server reads and writes large buffers. Avoids soft-leak of 
direct buffers and excess copies in NIO layer.
+This patch changes the DatanodeProtocol version number from 18 to 19. It 
allows the NameNode to send both block replication and deletion requests to a 
DataNode in response to a heartbeat.
 
 
 ---
 
-* [HADOOP-4635](https://issues.apache.org/jira/browse/HADOOP-4635) | *Blocker* 
| **Memory leak ?**
+* [HADOOP-4494](https://issues.apache.org/jira/browse/HADOOP-4494) | *Major* | 
**libhdfs does not call FileSystem.append when O\_APPEND passed to 
hdfsOpenFile**
 
-fix memory leak of user/group information in fuse-dfs
+libhdfs now supports the O\_APPEND flag.
 
 
 ---
 
-* [HADOOP-4494](https://issues.apache.org/jira/browse/HADOOP-4494) | *Major* | 
**libhdfs does not call FileSystem.append when O\_APPEND passed to 
hdfsOpenFile**
+* [HADOOP-5225](https://issues.apache.org/jira/browse/HADOOP-5225) | *Blocker* 
| **workaround for tmp file handling on DataNodes in 0.19.1 (HADOOP-4663)**
 
-libhdfs supports O\_APPEND flag
+Work around for tmp file handling. sync() does not work as a result.
 
 
 ---
 
-* [HADOOP-4061](https://issues.apache.org/jira/browse/HADOOP-4061) | *Major* | 
**Large number of decommission freezes the Namenode**
+* [HADOOP-5224](https://issues.apache.org/jira/browse/HADOOP-5224) | *Blocker* 
| **Disable append**
 
-Added a new conf property dfs.namenode.decommission.nodes.per.interval so that 
NameNode checks decommission status of x nodes for every y seconds, where x is 
the value of dfs.namenode.decommission.nodes.per.interval and y is the value of 
dfs.namenode.decommission.interval.
+HDFS append() is disabled. It throws UnsupportedOperationException.
 
 
 

http://git-wip-us.apache.org/repos/asf/hadoop/blob/19041008/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.2/CHANGES.0.19.2.md
----------------------------------------------------------------------
diff --git 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.2/CHANGES.0.19.2.md
 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.2/CHANGES.0.19.2.md
index 2071806..e83c766 100644
--- 
a/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.2/CHANGES.0.19.2.md
+++ 
b/hadoop-common-project/hadoop-common/src/site/markdown/release/0.19.2/CHANGES.0.19.2.md
@@ -27,18 +27,6 @@
 | [HADOOP-5332](https://issues.apache.org/jira/browse/HADOOP-5332) | Make 
support for file append API configurable |  Blocker | . | Nigel Daley | dhruba 
borthakur |
 
 
-### IMPORTANT ISSUES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### NEW FEATURES:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
 ### IMPROVEMENTS:
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
@@ -50,61 +38,43 @@
 
 | JIRA | Summary | Priority | Component | Reporter | Contributor |
 |:---- |:---- | :--- |:---- |:---- |:---- |
-| [HADOOP-6017](https://issues.apache.org/jira/browse/HADOOP-6017) | NameNode 
and SecondaryNameNode fail to restart because of abnormal filenames. |  Blocker 
| . | Raghu Angadi | Tsz Wo Nicholas Sze |
-| [HADOOP-5951](https://issues.apache.org/jira/browse/HADOOP-5951) | 
StorageInfo needs Apache license header. |  Major | . | Suresh Srinivas | 
Suresh Srinivas |
-| [HADOOP-5816](https://issues.apache.org/jira/browse/HADOOP-5816) | 
ArrayIndexOutOfBoundsException when using KeyFieldBasedComparator |  Minor | . 
| Min Zhou | He Yongqiang |
-| [HADOOP-5728](https://issues.apache.org/jira/browse/HADOOP-5728) | 
FSEditLog.printStatistics may cause IndexOutOfBoundsException |  Major | . | 
Wang Xu | Wang Xu |
-| [HADOOP-5671](https://issues.apache.org/jira/browse/HADOOP-5671) | 
DistCp.sameFile(..) should return true if src fs does not support checksum |  
Major | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
-| [HADOOP-5644](https://issues.apache.org/jira/browse/HADOOP-5644) | Namnode 
is stuck in safe mode |  Major | . | Suresh Srinivas | Suresh Srinivas |
-| [HADOOP-5579](https://issues.apache.org/jira/browse/HADOOP-5579) | libhdfs 
does not set errno correctly |  Major | . | Brian Bockelman | Brian Bockelman |
-| [HADOOP-5557](https://issues.apache.org/jira/browse/HADOOP-5557) | Two minor 
problems in TestOverReplicatedBlocks |  Minor | test | Tsz Wo Nicholas Sze | 
Tsz Wo Nicholas Sze |
-| [HADOOP-5554](https://issues.apache.org/jira/browse/HADOOP-5554) | 
DataNodeCluster should create blocks with the same generation stamp as the 
blocks created in CreateEditsLog |  Major | test | Hairong Kuang | Hairong 
Kuang |
-| [HADOOP-5551](https://issues.apache.org/jira/browse/HADOOP-5551) | Namenode 
permits directory destruction on overwrite |  Critical | . | Brian Bockelman | 
Brian Bockelman |
-| [HADOOP-5549](https://issues.apache.org/jira/browse/HADOOP-5549) | 
ReplicationMonitor should schedule both replication and deletion work in one 
iteration |  Major | . | Hairong Kuang | Hairong Kuang |
-| [HADOOP-5522](https://issues.apache.org/jira/browse/HADOOP-5522) | Document 
job setup/cleaup tasks and task cleanup tasks in mapred tutorial |  Blocker | . 
| Amareshwari Sriramadasu | Amareshwari Sriramadasu |
-| [HADOOP-5479](https://issues.apache.org/jira/browse/HADOOP-5479) | NameNode 
should not send empty block replication request to DataNode |  Critical | . | 
Hairong Kuang | Hairong Kuang |
-| [HADOOP-5465](https://issues.apache.org/jira/browse/HADOOP-5465) | Blocks 
remain under-replicated |  Blocker | . | Hairong Kuang | Hairong Kuang |
-| [HADOOP-5449](https://issues.apache.org/jira/browse/HADOOP-5449) | Verify if 
JobHistory.HistoryCleaner works as expected |  Blocker | . | Amar Kamat | 
Amareshwari Sriramadasu |
-| [HADOOP-5446](https://issues.apache.org/jira/browse/HADOOP-5446) | 
TaskTracker metrics are disabled |  Major | metrics | Chris Douglas | Chris 
Douglas |
-| [HADOOP-5440](https://issues.apache.org/jira/browse/HADOOP-5440) | 
Successful taskid are not removed from TaskMemoryManager |  Blocker | . | 
Amareshwari Sriramadasu | Amareshwari Sriramadasu |
-| [HADOOP-5421](https://issues.apache.org/jira/browse/HADOOP-5421) | 
HADOOP-4638 has broken 0.19 compilation |  Blocker | . | Amar Kamat | Devaraj 
Das |
-| [HADOOP-5392](https://issues.apache.org/jira/browse/HADOOP-5392) | 
JobTracker crashes during recovery if job files are garbled |  Blocker | . | 
Amar Kamat | Amar Kamat |
-| [HADOOP-5384](https://issues.apache.org/jira/browse/HADOOP-5384) | 
DataNodeCluster should not create blocks with generationStamp == 1 |  Blocker | 
test | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
-| [HADOOP-5376](https://issues.apache.org/jira/browse/HADOOP-5376) | 
JobInProgress.obtainTaskCleanupTask() throws an ArrayIndexOutOfBoundsException 
|  Blocker | . | Vinod Kumar Vavilapalli | Amareshwari Sriramadasu |
-| [HADOOP-5374](https://issues.apache.org/jira/browse/HADOOP-5374) | NPE in 
JobTracker.getTasksToSave() method |  Major | . | Vinod Kumar Vavilapalli | 
Amareshwari Sriramadasu |
-| [HADOOP-5333](https://issues.apache.org/jira/browse/HADOOP-5333) | The 
libhdfs append API is not coded correctly |  Major | . | dhruba borthakur | 
dhruba borthakur |
-| [HADOOP-5326](https://issues.apache.org/jira/browse/HADOOP-5326) | bzip2 
codec (CBZip2OutputStream) creates corrupted output file for some inputs |  
Major | io | Rodrigo Schmidt | Rodrigo Schmidt |
-| [HADOOP-5285](https://issues.apache.org/jira/browse/HADOOP-5285) | 
JobTracker hangs for long periods of time |  Blocker | . | Vinod Kumar 
Vavilapalli | Devaraj Das |
-| [HADOOP-5280](https://issues.apache.org/jira/browse/HADOOP-5280) | When 
expiring a lost launched task, JT doesn't remove the attempt from the 
taskidToTIPMap. |  Blocker | . | Vinod Kumar Vavilapalli | Devaraj Das |
 | [HADOOP-5269](https://issues.apache.org/jira/browse/HADOOP-5269) | 
TaskTracker.runningTasks holding FAILED\_UNCLEAN and KILLED\_UNCLEAN 
taskStatuses forever in some cases. |  Blocker | . | Amareshwari Sriramadasu | 
Amareshwari Sriramadasu |
+| [HADOOP-5233](https://issues.apache.org/jira/browse/HADOOP-5233) | Reducer 
not Succeded after 100% |  Blocker | . | Amareshwari Sriramadasu | Amareshwari 
Sriramadasu |
 | [HADOOP-5247](https://issues.apache.org/jira/browse/HADOOP-5247) | NPEs in 
JobTracker and JobClient when mapred.jobtracker.completeuserjobs.maximum is set 
to zero. |  Blocker | . | Vinod Kumar Vavilapalli | Amar Kamat |
+| [HADOOP-5285](https://issues.apache.org/jira/browse/HADOOP-5285) | 
JobTracker hangs for long periods of time |  Blocker | . | Vinod Kumar 
Vavilapalli | Devaraj Das |
 | [HADOOP-5241](https://issues.apache.org/jira/browse/HADOOP-5241) | Reduce 
tasks get stuck because of over-estimated task size (regression from 0.18) |  
Blocker | . | Andy Pavlo | Sharad Agarwal |
-| [HADOOP-5233](https://issues.apache.org/jira/browse/HADOOP-5233) | Reducer 
not Succeded after 100% |  Blocker | . | Amareshwari Sriramadasu | Amareshwari 
Sriramadasu |
-| [HADOOP-5231](https://issues.apache.org/jira/browse/HADOOP-5231) | Negative 
number of maps in cluster summary |  Major | . | Amareshwari Sriramadasu | 
Amareshwari Sriramadasu |
-| [HADOOP-5213](https://issues.apache.org/jira/browse/HADOOP-5213) | 
BZip2CompressionOutputStream NullPointerException |  Blocker | io | Zheng Shao 
| Zheng Shao |
+| [HADOOP-5280](https://issues.apache.org/jira/browse/HADOOP-5280) | When 
expiring a lost launched task, JT doesn't remove the attempt from the 
taskidToTIPMap. |  Blocker | . | Vinod Kumar Vavilapalli | Devaraj Das |
 | [HADOOP-5154](https://issues.apache.org/jira/browse/HADOOP-5154) | 4-way 
deadlock in FairShare scheduler |  Blocker | . | Vinod Kumar Vavilapalli | 
Matei Zaharia |
 | [HADOOP-5146](https://issues.apache.org/jira/browse/HADOOP-5146) | 
LocalDirAllocator misses files on the local filesystem |  Blocker | . | Arun C 
Murthy | Devaraj Das |
-| [HADOOP-4780](https://issues.apache.org/jira/browse/HADOOP-4780) | Task 
Tracker  burns a lot of cpu in calling getLocalCache |  Major | . | Runping Qi 
| He Yongqiang |
-| [HADOOP-4719](https://issues.apache.org/jira/browse/HADOOP-4719) | The ls 
shell command documentation is out-dated |  Major | documentation | Tsz Wo 
Nicholas Sze | Ravi Phulari |
+| [HADOOP-5326](https://issues.apache.org/jira/browse/HADOOP-5326) | bzip2 
codec (CBZip2OutputStream) creates corrupted output file for some inputs |  
Major | io | Rodrigo Schmidt | Rodrigo Schmidt |
 | [HADOOP-4638](https://issues.apache.org/jira/browse/HADOOP-4638) | Exception 
thrown in/from RecoveryManager.recover() should be caught and handled |  
Blocker | . | Amar Kamat | Amar Kamat |
+| [HADOOP-5384](https://issues.apache.org/jira/browse/HADOOP-5384) | 
DataNodeCluster should not create blocks with generationStamp == 1 |  Blocker | 
test | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
+| [HADOOP-5376](https://issues.apache.org/jira/browse/HADOOP-5376) | 
JobInProgress.obtainTaskCleanupTask() throws an ArrayIndexOutOfBoundsException 
|  Blocker | . | Vinod Kumar Vavilapalli | Amareshwari Sriramadasu |
+| [HADOOP-5421](https://issues.apache.org/jira/browse/HADOOP-5421) | 
HADOOP-4638 has broken 0.19 compilation |  Blocker | . | Amar Kamat | Devaraj 
Das |
+| [HADOOP-5392](https://issues.apache.org/jira/browse/HADOOP-5392) | 
JobTracker crashes during recovery if job files are garbled |  Blocker | . | 
Amar Kamat | Amar Kamat |
+| [HADOOP-5333](https://issues.apache.org/jira/browse/HADOOP-5333) | The 
libhdfs append API is not coded correctly |  Major | . | dhruba borthakur | 
dhruba borthakur |
 | [HADOOP-3998](https://issues.apache.org/jira/browse/HADOOP-3998) | Got an 
exception from ClientFinalizer when the JT is terminated |  Blocker | . | Amar 
Kamat | dhruba borthakur |
-
-
-### TESTS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### SUB-TASKS:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
-
-
-### OTHER:
-
-| JIRA | Summary | Priority | Component | Reporter | Contributor |
-|:---- |:---- | :--- |:---- |:---- |:---- |
+| [HADOOP-5440](https://issues.apache.org/jira/browse/HADOOP-5440) | 
Successful taskid are not removed from TaskMemoryManager |  Blocker | . | 
Amareshwari Sriramadasu | Amareshwari Sriramadasu |
+| [HADOOP-5446](https://issues.apache.org/jira/browse/HADOOP-5446) | 
TaskTracker metrics are disabled |  Major | metrics | Chris Douglas | Chris 
Douglas |
+| [HADOOP-5449](https://issues.apache.org/jira/browse/HADOOP-5449) | Verify if 
JobHistory.HistoryCleaner works as expected |  Blocker | . | Amar Kamat | 
Amareshwari Sriramadasu |
+| [HADOOP-5465](https://issues.apache.org/jira/browse/HADOOP-5465) | Blocks 
remain under-replicated |  Blocker | . | Hairong Kuang | Hairong Kuang |
+| [HADOOP-5479](https://issues.apache.org/jira/browse/HADOOP-5479) | NameNode 
should not send empty block replication request to DataNode |  Critical | . | 
Hairong Kuang | Hairong Kuang |
+| [HADOOP-5522](https://issues.apache.org/jira/browse/HADOOP-5522) | Document 
job setup/cleaup tasks and task cleanup tasks in mapred tutorial |  Blocker | . 
| Amareshwari Sriramadasu | Amareshwari Sriramadasu |
+| [HADOOP-5549](https://issues.apache.org/jira/browse/HADOOP-5549) | 
ReplicationMonitor should schedule both replication and deletion work in one 
iteration |  Major | . | Hairong Kuang | Hairong Kuang |
+| [HADOOP-5554](https://issues.apache.org/jira/browse/HADOOP-5554) | 
DataNodeCluster should create blocks with the same generation stamp as the 
blocks created in CreateEditsLog |  Major | test | Hairong Kuang | Hairong 
Kuang |
+| [HADOOP-5557](https://issues.apache.org/jira/browse/HADOOP-5557) | Two minor 
problems in TestOverReplicatedBlocks |  Minor | test | Tsz Wo Nicholas Sze | 
Tsz Wo Nicholas Sze |
+| [HADOOP-5231](https://issues.apache.org/jira/browse/HADOOP-5231) | Negative 
number of maps in cluster summary |  Major | . | Amareshwari Sriramadasu | 
Amareshwari Sriramadasu |
+| [HADOOP-4719](https://issues.apache.org/jira/browse/HADOOP-4719) | The ls 
shell command documentation is out-dated |  Major | documentation | Tsz Wo 
Nicholas Sze | Ravi Phulari |
+| [HADOOP-5374](https://issues.apache.org/jira/browse/HADOOP-5374) | NPE in 
JobTracker.getTasksToSave() method |  Major | . | Vinod Kumar Vavilapalli | 
Amareshwari Sriramadasu |
+| [HADOOP-4780](https://issues.apache.org/jira/browse/HADOOP-4780) | Task 
Tracker  burns a lot of cpu in calling getLocalCache |  Major | . | Runping Qi 
| He Yongqiang |
+| [HADOOP-5551](https://issues.apache.org/jira/browse/HADOOP-5551) | Namenode 
permits directory destruction on overwrite |  Critical | . | Brian Bockelman | 
Brian Bockelman |
+| [HADOOP-5644](https://issues.apache.org/jira/browse/HADOOP-5644) | Namnode 
is stuck in safe mode |  Major | . | Suresh Srinivas | Suresh Srinivas |
+| [HADOOP-5671](https://issues.apache.org/jira/browse/HADOOP-5671) | 
DistCp.sameFile(..) should return true if src fs does not support checksum |  
Major | . | Tsz Wo Nicholas Sze | Tsz Wo Nicholas Sze |
+| [HADOOP-5213](https://issues.apache.org/jira/browse/HADOOP-5213) | 
BZip2CompressionOutputStream NullPointerException |  Blocker | io | Zheng Shao 
| Zheng Shao |
+| [HADOOP-5579](https://issues.apache.org/jira/browse/HADOOP-5579) | libhdfs 
does not set errno correctly |  Major | . | Brian Bockelman | Brian Bockelman |
+| [HADOOP-5728](https://issues.apache.org/jira/browse/HADOOP-5728) | 
FSEditLog.printStatistics may cause IndexOutOfBoundsException |  Major | . | 
Wang Xu | Wang Xu |
+| [HADOOP-5816](https://issues.apache.org/jira/browse/HADOOP-5816) | 
ArrayIndexOutOfBoundsException when using KeyFieldBasedComparator |  Minor | . 
| Min Zhou | He Yongqiang |
+| [HADOOP-5951](https://issues.apache.org/jira/browse/HADOOP-5951) | 
StorageInfo needs Apache license header. |  Major | . | Suresh Srinivas | 
Suresh Srinivas |
+| [HADOOP-6017](https://issues.apache.org/jira/browse/HADOOP-6017) | NameNode 
and SecondaryNameNode fail to restart because of abnormal filenames. |  Blocker 
| . | Raghu Angadi | Tsz Wo Nicholas Sze |
 
 

