+1 for this =)
Been using Github mirror for development anyway =)
We at Apache Gora moved to Git recently, and it has been a pretty smooth
transition.
- Henry
On Fri, Aug 1, 2014 at 4:43 PM, Karthik Kambatla ka...@cloudera.com wrote:
Hi folks,
From what I hear, a lot of devs use the git mirror
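For context, a typical workflow against a read-only GitHub mirror looks like the sketch below. This is illustrative only: the mirror URL is the present-day github.com/apache/hadoop, and the branch name "YARN-1234-fix" is a hypothetical example, not a real issue.

```shell
# Sketch: developing against a read-only mirror while the canonical
# repo lives at Apache. URLs and branch names here are assumptions.

# Clone the mirror (anonymous HTTPS)
git clone https://github.com/apache/hadoop.git
cd hadoop

# Work on a topic branch based on trunk (Hadoop's main branch)
git checkout -b YARN-1234-fix origin/trunk

# Generate a patch against trunk to attach to the JIRA issue
git format-patch origin/trunk --stdout > YARN-1234.patch
```

Since the mirror is read-only, the patch is attached to JIRA (or, after a Git migration, pushed to the canonical Apache repo) rather than pushed to GitHub directly.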
See https://builds.apache.org/job/Hadoop-Common-0.23-Build/1029/
--
[...truncated 8263 lines...]
Running org.apache.hadoop.fs.TestFileSystemTokens
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.524 sec
Running
See https://builds.apache.org/job/Hadoop-Common-trunk/1194/changes
Changes:
[jianhe] YARN-2343. Improve NMToken expire exception message. Contributed by Li Lu
[cmccabe] HDFS-6482. Use block ID-based block layout on datanodes (James Thomas via Colin Patrick McCabe)
[wang] HDFS-6788. Improve
On 1 August 2014 16:25, Jean-Baptiste Note jbn...@gmail.com wrote:
JeanBaptisteNote
done
--
Larry McCay created HADOOP-10929:
Summary: Typo in Configuration.getPasswordFromCredentialProviders
Key: HADOOP-10929
URL: https://issues.apache.org/jira/browse/HADOOP-10929
Project: Hadoop Common
Hi folks,
I am getting the exception below while running a MapReduce job, during the
copy phase of the mapper output.
I googled it and tried all of the suggested solutions, but none of them
worked in my case.
I tried to increase the memory available to the JVM -D
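For what it's worth, failures during the copy (shuffle) phase are often addressed by tuning the reducer's shuffle buffers in addition to the JVM heap. A hedged mapred-site.xml sketch follows: the property names are standard Hadoop 2.x keys, but the values are purely illustrative and would need tuning for the actual cluster and error.

```xml
<!-- mapred-site.xml: illustrative values only, not recommendations -->
<property>
  <!-- Heap for reduce-task JVMs -->
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1638m</value>
</property>
<property>
  <!-- Fraction of reducer heap used to buffer map outputs during shuffle -->
  <name>mapreduce.reduce.shuffle.input.buffer.percent</name>
  <value>0.50</value>
</property>
<property>
  <!-- Cap on a single map output's share of the shuffle buffer -->
  <name>mapreduce.reduce.shuffle.memory.limit.percent</name>
  <value>0.15</value>
</property>
```

Lowering the shuffle buffer percentages trades some in-memory merging for spills to disk, which commonly avoids shuffle-phase OutOfMemoryErrors; posting the exact stack trace would make the diagnosis more definite.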