Hi all,
I propose to cut a Hadoop 1.0.1 release candidate this Monday, 30 January.
Code freeze to be Sunday the 29th, unless someone has a strong reason to
wait longer.
This is strictly a maintenance release, so it will contain only fixes for
bugs that interfered with field use of 1.0.0. The current
Remove getProtocolVersion and getProtocolSignature from the client side
translator and server side implementation
-
Key: HADOOP-7994
URL: https://issue
Hadoop ignores old-style config options for enabling compressed output
--
Key: HADOOP-7993
URL: https://issues.apache.org/jira/browse/HADOOP-7993
Project: Hadoop Common
Issu
[ https://issues.apache.org/jira/browse/HADOOP-7992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Suresh Srinivas resolved HADOOP-7992.
-
Resolution: Fixed
Hadoop Flags: Reviewed
I committed the patch. Thank you Bikas.
Dear Hadoop developers,
I would like to add the following line to the Hadoop wiki at:
http://wiki.apache.org/hadoop/PoweredBy#U
* [University of Twente, Database Group|http://db.cs.utwente.nl]
** 16 node cluster (dual core Xeon E3110 64 bit processors with 6MB
cache, 8GB main memory, 1TB disk) as
hadoop fs -put operates on a single thread at a time, and writes the data to
HDFS in order. Depending on the connectivity between the filer/NFS server and
the datanodes, it may be difficult to saturate that connection, which is the
only way to really speed things up. If there are multiple file
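The thread's suggestion, running one put per file in parallel, can be sketched as below. The source directory, destination path, worker count, and the `parallel_put` helper name are all illustrative assumptions, not anything from the list:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def parallel_put(src_dir, dest_dir, put_cmd=("hadoop", "fs", "-put"), workers=4):
    """Copy every file in src_dir with one put command per file,
    running up to `workers` puts at once (each put is still a single
    sequential stream, but the streams overlap)."""
    files = sorted(p for p in Path(src_dir).iterdir() if p.is_file())

    def put_one(path):
        # One invocation per file, e.g. `hadoop fs -put <local> <dest>`.
        # put_cmd is parameterized so the sketch can be exercised locally.
        subprocess.run([*put_cmd, str(path), f"{dest_dir}/{path.name}"],
                       check=True)
        return path.name

    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(put_one, files))
```

Whether this helps still depends on the NFS side; if the filer can only serve one fast stream, extra workers buy little.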
http://$jobtrackerhost:50030/metrics
http://$namenode:50070/metrics
should give you enough metrics. You could also use Ganglia to collect them.
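Those /metrics servlets return plain text. A minimal sketch for pulling a page and picking out numeric values follows; the indented `name=value` line format assumed by the parser is a guess at the 1.x servlet output, and `fetch_metrics`/`parse_metrics` are hypothetical helper names:

```python
from urllib.request import urlopen

def fetch_metrics(url, timeout=5):
    """Return the raw text of a /metrics page,
    e.g. http://namenode:50070/metrics (URL shape from the thread)."""
    with urlopen(url, timeout=timeout) as resp:
        return resp.read().decode("utf-8", errors="replace")

def parse_metrics(text):
    """Collect name=value pairs into a dict, converting numbers
    where possible. Assumes one pair per line, possibly indented;
    record/tag header lines ending in ':' are skipped."""
    out = {}
    for line in text.splitlines():
        line = line.strip()
        if "=" in line and not line.endswith(":"):
            name, _, value = line.partition("=")
            try:
                out[name.strip()] = float(value)
            except ValueError:
                out[name.strip()] = value.strip()
    return out
```

For ongoing collection rather than one-off polling, Ganglia (as suggested above) is the more usual route.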
jabir
On Tue, Jan 24, 2012 at 10:17 PM, Arun C Murthy wrote:
> You can currently get CPU & memory stats for each task and aggregated
> stats per job via
You will most likely hit NFS server limits well before you see any
noticeable issues with HDFS.
Writes to a file are sequential. Total throughput for your transfer depends
on the number of files and the rate at which files can be read from
NFS. If the total data set is split across re
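The throughput point above reduces to simple arithmetic: with one sequential stream per file, the aggregate rate is capped by the NFS read side no matter how many parallel puts run. All figures below are made-up illustrations, not measurements from the thread:

```python
def aggregate_mbps(nfs_read_mbps, per_stream_mbps, parallel_streams):
    """Effective transfer rate: parallel per-file streams into HDFS,
    bounded by what the NFS server can serve in total."""
    return min(nfs_read_mbps, per_stream_mbps * parallel_streams)

# Example: a ~110 MB/s NFS link (roughly 1 GbE) with 4 streams at
# 60 MB/s each is still limited to ~110 MB/s by the NFS side,
# while a single stream only reaches 60 MB/s.
```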