Jenkins build is back to stable : Hadoop-Hdfs-0.23-Build #671

2013-07-17 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/671/changes



Build failed in Jenkins: Hadoop-Hdfs-trunk #1463

2013-07-17 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1463/changes

Changes:

[vinodkv] YARN-912. Move client facing exceptions to yarn-api module. 
Contributed by Mayank Bansal.

[vinodkv] YARN-62. Modified NodeManagers to avoid AMs from abusing container 
tokens for repetitive container launches. Contributed by Omkar Vinit Joshi.

[kihwal] HADOOP-9738. TestDistCh fails. Contributed by Jing Zhao.

[kihwal] HDFS-4998. TestUnderReplicatedBlocks fails intermittently. Contributed 
by Kihwal Lee.

[bikas] YARN-927. Change ContainerRequest to not have more than 1 container 
count and remove StoreContainerRequest (bikas)

[jlowe] MAPREDUCE-5380. Invalid mapred command should return non-zero exit 
code. Contributed by Stephen Chu

[jlowe] HADOOP-9734. Common protobuf definitions for GetUserMappingsProtocol, 
RefreshAuthorizationPolicyProtocol and RefreshUserMappingsProtocol. Contributed 
by Jason Lowe

[vinodkv] YARN-820. Fixed an invalid state transition in NodeManager caused by 
failing resource localization. Contributed by Mayank Bansal.

[vinodkv] YARN-661. Fixed NM to cleanup users' local directories correctly when 
starting up. Contributed by Omkar Vinit Joshi.

[bikas] MAPREDUCE-5398. MR changes for YARN-513 (Jian He via bikas)

[bikas] YARN-513. Create common proxy client for communicating with RM (Xuan 
Gong & Jian He via bikas)

[cmccabe] HDFS-4657.  Limit the number of blocks logged by the NN after a block 
report to a configurable value.  (Aaron Twining Myers via Colin Patrick 
McCabe)

[daryn] HADOOP-9683. [RPC v9] Wrap IpcConnectionContext in RPC headers (daryn)

[cmccabe] HADOOP-9618.  thread which detects GC pauses (Todd Lipcon via Colin 
Patrick McCabe)

[jlowe] Back out revision 1503499 for MAPREDUCE-5317. Stale files left behind 
for failed jobs

[szetszwo] HDFS-4992. Make balancer's mover thread count and dispatcher thread 
count configurable.  Contributed by Max Lapan

--
[...truncated 15037 lines...]
Running org.apache.hadoop.contrib.bkjournal.TestCurrentInprogress
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.693 sec
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperConfiguration
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.097 sec
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperJournalManager
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.477 sec

Results :

Failed tests:   
testStandbyExceptionThrownDuringCheckpoint(org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints):
 SBN should have still been checkpointing.

Tests run: 32, Failures: 1, Errors: 0, Skipped: 0

[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS-NFS 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-nfs ---
[INFO] Deleting 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-nfs ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/target/test-dir
[mkdir] Created dir: 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/target/test/data
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ 
hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ 
hadoop-hdfs-nfs ---
[INFO] Compiling 12 source files to 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/target/classes
[INFO] 
[INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ 
hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
hadoop-hdfs-nfs ---
[INFO] Compiling 7 source files to 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/target/test-classes
[INFO] 
[INFO] --- maven-surefire-plugin:2.12.3:test (default-test) @ hadoop-hdfs-nfs 
---
[INFO] Surefire report directory: 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/target/surefire-reports

---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.hdfs.nfs.nfs3.TestOffsetRange
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.058 

Hadoop-Hdfs-trunk - Build # 1463 - Still Failing

2013-07-17 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1463/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 15230 lines...]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  SUCCESS 
[1:37:47.688s]
[INFO] Apache Hadoop HttpFS .. SUCCESS [2:22.174s]
[INFO] Apache Hadoop HDFS BookKeeper Journal . FAILURE [55.075s]
[INFO] Apache Hadoop HDFS-NFS  FAILURE [25.800s]
[INFO] Apache Hadoop HDFS Project  SUCCESS [0.033s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:41:31.632s
[INFO] Finished at: Wed Jul 17 13:16:39 UTC 2013
[INFO] Final Memory: 58M/899M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.12.3:test (default-test) on 
project hadoop-hdfs-bkjournal: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-checkstyle-plugin:2.6:checkstyle (default-cli) 
on project hadoop-hdfs-nfs: An error has occurred in Checkstyle report 
generation. Failed during checkstyle execution: Unable to find configuration 
file at location 
file:///home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/dev-support/checkstyle.xml:
 Could not find resource 
'file:///home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/dev-support/checkstyle.xml'.
 -> [Help 2]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] [Help 2] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs-bkjournal
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-4992
Updating HDFS-4998
Updating HADOOP-9683
Updating MAPREDUCE-5317
Updating HDFS-4657
Updating HADOOP-9618
Updating HADOOP-9734
Updating YARN-661
Updating YARN-820
Updating YARN-912
Updating YARN-513
Updating MAPREDUCE-5398
Updating MAPREDUCE-5380
Updating YARN-62
Updating HADOOP-9738
Updating YARN-927
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

[jira] [Created] (HDFS-5004) Add additional JMX bean for NameNode status data

2013-07-17 Thread Trevor Lorimer (JIRA)
Trevor Lorimer created HDFS-5004:


 Summary: Add additional JMX bean for NameNode status data
 Key: HDFS-5004
 URL: https://issues.apache.org/jira/browse/HDFS-5004
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.4-alpha, 3.0.0, 2.1.0-beta
Reporter: Trevor Lorimer
 Fix For: 3.0.0, 2.2.0


Currently the JMX beans return much of the data contained on the HDFS health 
web page (dfsHealth.html). However, there are several other attributes that 
need to be added and that can only be accessed from within the NameNode.

For this reason a new JMX bean is required (NameNodeStatusMXBean) which will 
expose the following attributes in NameNode:
Role
State
HostAndPort

In addition, a list of the corrupted files should be exposed by NameNodeMXBean.
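
For illustration only, a minimal sketch of what such a bean could look like,
using the attribute names listed above (the interface shape is hypothetical
until a patch is posted):

{code}
public interface NameNodeStatusMXBean {
  String getRole();         // e.g. "NameNode"
  String getState();        // HA state such as "active" or "standby"
  String getHostAndPort();  // RPC address of this NameNode instance
}
{code}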

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: mvn eclipse:eclipse failure on windows

2013-07-17 Thread Chris Nauroth
Loading hadoop.dll in tests is supposed to work via a common shared
maven-surefire-plugin configuration that sets the PATH environment variable
to include the build location of the dll:

https://github.com/apache/hadoop-common/blob/trunk/hadoop-project/pom.xml#L894

(On Windows, the shared library path is controlled with PATH instead of
LD_LIBRARY_PATH on Linux.)

This configuration has been working fine in all of the dev environments
I've seen, but I'm wondering if something different is happening in your
environment.  Does your hadoop.dll show up in
hadoop-common-project/hadoop-common/target/bin?  Is there anything else
that looks unique in your environment?

Also, another potential gotcha is the Windows max path length limitation of
260 characters.  Deeply nested project structures like Hadoop can cause
very long paths for the built artifacts, and you might not be able to load
the files if the full path exceeds 260 characters.  The workaround for now
is to keep the codebase in a very short root folder.  (I use C:\hdc .)

Chris Nauroth
Hortonworks
http://hortonworks.com/
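
If it helps to narrow this down, here is a small standalone diagnostic (purely
illustrative; the class is not part of the Hadoop build) that prints the search
paths and attempts the same System.loadLibrary("hadoop") call that Hadoop's
native code loader makes:

public class NativeLibCheck {
  public static void main(String[] args) {
    System.out.println("java.library.path = " + System.getProperty("java.library.path"));
    System.out.println("PATH = " + System.getenv("PATH"));
    try {
      // resolves hadoop.dll on Windows, libhadoop.so on Linux
      System.loadLibrary("hadoop");
      System.out.println("hadoop native library loaded");
    } catch (UnsatisfiedLinkError e) {
      System.out.println("failed to load hadoop native library: " + e.getMessage());
    }
  }
}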



On Mon, Jul 15, 2013 at 1:07 PM, Chuan Liu chuan...@microsoft.com wrote:

 Hi Uma,

 I suggest you do a 'mvn install -DskipTests' before running 'mvn
 eclipse:eclipse'.

 Thanks,
 Chuan

 -Original Message-
 From: Uma Maheswara Rao G [mailto:hadoop@gmail.com]
 Sent: Friday, July 12, 2013 7:42 PM
 To: common-...@hadoop.apache.org
 Cc: hdfs-dev@hadoop.apache.org
 Subject: Re: mvn eclipse:eclipse failure on windows

 HI Chris,
   eclipse:eclipse works, but I am still seeing UnsatisfiedLinkError.
 Explicitly I pointed java.library.path to where hadoop.dll is generated. This
 dll is generated with my clean install command only.   My PC is 64-bit and I
 also set Platform=x64 while building. But it does not help.

 Regards,
 Uma






 On Fri, Jul 12, 2013 at 11:45 PM, Chris Nauroth cnaur...@hortonworks.com
 wrote:

  Hi Uma,
 
  I just tried getting a fresh copy of trunk and running mvn clean
  install -DskipTests followed by mvn eclipse:eclipse -DskipTests.
  Everything worked fine in my environment.  Are you still seeing the
 problem?
 
  The UnsatisfiedLinkError seems to indicate that your build couldn't
  access hadoop.dll for JNI method implementations.  hadoop.dll gets
  built as part of the hadoop-common sub-module.  Is it possible that
  you didn't have a complete package build for that sub-module before
  you started running the HDFS test?
 
  Chris Nauroth
  Hortonworks
  http://hortonworks.com/
 
 
 
  On Sun, Jul 7, 2013 at 9:08 AM, sure bhands sure.bha...@gmail.com
 wrote:
 
   I would try cleaning hadoop-maven-plugin directory from maven
   repository
  to
   rule out the stale version and then mvn install followed by mvn
   eclipse:eclipse before digging in to it further.
  
   Thanks,
   Surendra
  
  
   On Sun, Jul 7, 2013 at 8:28 AM, Uma Maheswara Rao G 
  hadoop@gmail.com
   wrote:
  
Hi,
   
I am seeing this failure on windows while executing mvn
eclipse:eclipse command on trunk.
   
See the following trace:
   
 [INFO] ------------------------------------------------------------------------
 [ERROR] Failed to execute goal
 org.apache.maven.plugins:maven-eclipse-plugin:2.8:eclipse (default-cli) on
 project hadoop-common: Request to merge when 'filtering' is not identical.
 Original=resource src/main/resources: output=target/classes, include=[],
 exclude=[common-version-info.properties|**/*.java], test=false,
 filtering=false, merging with=resource src/main/resources:
 output=target/classes, include=[common-version-info.properties],
 exclude=[**/*.java], test=false, filtering=true -> [Help 1]
 [ERROR]
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR]
 [ERROR] For more information about the errors and possible solutions, please
 read the following articles:
 [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
 [ERROR]
 [ERROR] After correcting the problems, you can resume the build with the command
 [ERROR]   mvn <goals> -rf :hadoop-common
 E:\Hadoop-Trunk

 Any idea how to resolve it?

 With 'org.apache.maven.plugins:maven-eclipse-plugin:2.6:eclipse' there seem to
 be no failures, but I am seeing the following exception while running tests:

 java.lang.UnsatisfiedLinkError:
 org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
   at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
   at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:423)
   at org.apache.hadoop.fs.FileUtil.canWrite(FileUtil.java:952)
   at

[jira] [Resolved] (HDFS-4672) Support tiered storage policies

2013-07-17 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HDFS-4672.
--

Resolution: Later

HDFS-2832 and its new subtasks have picked up some ideas from here; we might 
address the deltas later. Once we have production experience with our respective 
implementations, perhaps we can circle back and compare notes.

 Support tiered storage policies
 ---

 Key: HDFS-4672
 URL: https://issues.apache.org/jira/browse/HDFS-4672
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, hdfs-client, libhdfs, namenode
Reporter: Andrew Purtell

 We would like to be able to create certain files on certain storage device 
 classes (e.g. spinning media, solid state devices, RAM disk, non-volatile 
 memory). HDFS-2832 enables heterogeneous storage at the DataNode, so the 
 NameNode can gain awareness of what different storage options are available 
 in the pool and where they are located, but no API is provided for clients or 
 block placement plugins to perform device aware block placement. We would 
 like to propose a set of extensions that also have broad applicability to use 
 cases where storage device affinity is important:
  
 - Add an enum of generic storage device classes, borrowing from the current 
 taxonomy of the storage industry (an illustrative sketch of such an enum 
 appears after this list)
  
 - Augment DataNode volume metadata in storage reports with this enum
  
 - Extend the namespace so pluggable block policies can be specified on a 
 directory and storage device class can be tracked in the Inode. Perhaps this 
 could be a larger discussion on adding support for extended attributes in the 
 HDFS namespace. The Inode should track both the storage device class hint and 
 the current actual storage device class. FileStatus should expose this 
 information (or xattrs in general) to clients.
  
 - Extend the pluggable block policy framework so policies can also consider, 
 and specify, affinity for a particular storage device class
  
 - Extend the file creation API to accept a storage device class affinity 
 hint. Such a hint can be supplied directly as a parameter, or, if we are 
 considering extended attribute support, then instead as one of a set of 
 xattrs. The hint would be stored in the namespace and also used by the client 
 to indicate to the NameNode/block placement policy/DataNode constraints on 
 block placement. Furthermore, if xattrs or device storage class affinity 
 hints are associated with directories, then the NameNode should provide the 
 storage device affinity hint to the client in the create API response, so the 
 client can provide the appropriate hint to DataNodes when writing new blocks.
  
 - The list of candidate DataNodes for new blocks supplied by the NameNode to 
 clients should be weighted/sorted by availability of the desired storage 
 device class. 
  
 - Block replication should consider storage device affinity hints. If a 
 client move()s a file from a location under a path with affinity hint X to 
 under a path with affinity hint Y, then all blocks currently residing on 
 media X should be eventually replicated onto media Y with the then excess 
 replicas on media X deleted.
  
 - Introduce the concept of degraded path: a path can be degraded if a block 
 placement policy is forced to abandon a constraint in order to persist the 
 block, when there may not be available space on the desired device class, or 
 to maintain the minimum necessary replication factor. This concept is 
 distinct from the corrupt path, where one or more blocks are missing. Paths 
 in degraded state should be periodically reevaluated for re-replication.
  
 - The FSShell should be extended with commands for changing the storage 
 device class hint for a directory or file. 
  
 - Clients like DistCP which compare metadata should be extended to be aware 
 of the storage device class hint. For DistCP specifically, there should be an 
 option to ignore the storage device class hints, enabled by default.
  
 Suggested semantics:
  
 - The default storage device class should be the null class, or simply the 
 “default class”, for all cases where a hint is not available. This should be 
 configurable. hdfs-defaults.xml could provide the default as spinning media.
  
 - A storage device class hint should be provided (and is necessary) only when 
 the default is not sufficient.
  
 - For backwards compatibility, any FSImage or edit log entry lacking a  
 storage device class hint is interpreted as having affinity for the null 
 class.
  
 - All blocks for a given file share the same storage device class. If the 
 replication factor for this file is increased the replicas should all be 
 placed on the same storage device class.
  
 - If one or more blocks for a given file cannot be placed on the required 
 device class, then 
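
 As a purely illustrative aside on the enum bullet referenced above: one
 possible shape for such an enum, with hypothetical names that are not
 necessarily what HDFS would adopt:

 {code}
 public enum StorageDeviceClass {
   DEFAULT,   // the null/default class used when no hint is given
   SPINNING,  // conventional spinning media (HDD)
   SSD,       // solid state devices
   RAM_DISK,  // memory-backed storage
   NVM        // non-volatile memory
 }
 {code}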

[jira] [Resolved] (HDFS-3994) misleading comment in CommonConfigurationKeysPublic

2013-07-17 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HDFS-3994.


Resolution: Won't Fix

I looked at this again, and I think the intention is just that you use 
{{CommonConfigurationKeys}} throughout your code, since that inherits a bunch 
of constants from {{CommonConfigurationKeysPublic}}.
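
For example, a minimal sketch assuming only a Hadoop Configuration on the
classpath: FS_DEFAULT_NAME_KEY is declared in CommonConfigurationKeysPublic but
can be referenced through CommonConfigurationKeys because of the inheritance
described above.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;

public class DefaultFsExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // the constant is inherited from CommonConfigurationKeysPublic
    String fsUri = conf.get(CommonConfigurationKeys.FS_DEFAULT_NAME_KEY, "file:///");
    System.out.println("fs.defaultFS = " + fsUri);
  }
}
{code}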

 misleading comment in CommonConfigurationKeysPublic
 ---

 Key: HDFS-3994
 URL: https://issues.apache.org/jira/browse/HDFS-3994
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.0.3-alpha
Reporter: Colin Patrick McCabe
Priority: Trivial

 {{CommonConfigurationKeysPublic}} contains a potentially misleading comment:
 {code}
 /** 
  * This class contains constants for configuration keys used
  * in the common code.
  *
  * It includes all publicly documented configuration keys. In general
  * this class should not be used directly (use CommonConfigurationKeys
  * instead)
  */
 {code}
 This comment suggests that the user use {{CommonConfigurationKeys}}, despite 
 the fact that that class is {{InterfaceAudience.private}} whereas 
 {{CommonConfigurationKeysPublic}} is {{InterfaceAudience.public}}.  Perhaps 
 this should be rephrased.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: [VOTE] Release Apache Hadoop 2.1.0-beta

2013-07-17 Thread Vinod Kumar Vavilapalli

Looks like this RC has gone stale and lots of bug fixes went into 2.1 and 2.1.0 
branches and there are 4-5 outstanding blockers. And from what I see in 
the CHANGES.txt files there seems to be confusion about which fixes should go 
into which branch.

I'm blowing off the current 2.1.0 release branch so that we can create a fresh 
release branch and call voting on that. I'll fix CHANGES.txt entries as well as 
JIRA fix version for bugs committed recently if there are inconsistencies.

Let me know if something is amiss while I do this.

Thanks,
+Vinod

On Jul 3, 2013, at 11:06 AM, Vinod Kumar Vavilapalli wrote:

 
 We should get these in, looking at them now.
 
 Thanks,
 +Vinod
 
 On Jun 28, 2013, at 12:03 PM, Hitesh Shah wrote:
 
 Hi Arun, 
 
 From a YARN perspective, YARN-791 and YARN-727 are 2 jiras that may 
 potentially change the apis. They can be implemented in a backward compat 
 fashion if committed after 2.1.0. However, this will require adding of 
 differently-named apis ( different urls in case of the webservices ) and 
 make the current version of the api deprecated and/or obsolete. YARN-818 
 which is currently patch available also changes behavior.  
 
 Assuming that as soon as 2.1.0 is released, we are to follow a very strict 
 backward-compat retaining approach to all user-facing layers  ( 
 api/webservices/rpc/... ) in common/hdfs/yarn/mapreduce, does it make sense 
 to try and pull them in and roll out a new RC after they are ready? Perhaps 
 Vinod can chime in if he is aware of any other such jiras under YARN-386 
 which should be considered compat-related blockers for a 2.1.0 RC. 
 
 thanks
 -- Hitesh
 
 On Jun 26, 2013, at 1:17 AM, Arun C Murthy wrote:
 
 Folks,
 
 I've created a release candidate (rc0) for hadoop-2.1.0-beta that I would 
 like to get released.
 
 This release represents a *huge* amount of work done by the community (639 
 fixes) which includes several major advances including:
 # HDFS Snapshots
 # Windows support
 # YARN API stabilization
 # MapReduce Binary Compatibility with hadoop-1.x
 # Substantial amount of integration testing with rest of projects in the 
 ecosystem
 
 The RC is available at: 
 http://people.apache.org/~acmurthy/hadoop-2.1.0-beta-rc0/
 The RC tag in svn is here: 
 http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.1.0-beta-rc0
 
 The maven artifacts are available via repository.apache.org.
 
 Please try the release and vote; the vote will run for the usual 7 days.
 
 thanks,
 Arun
 
 --
 Arun C. Murthy
 Hortonworks Inc.
 http://hortonworks.com/
 
 
 
 



[jira] [Created] (HDFS-5005) Move SnapshotException and SnapshotAccessControlException to o.a.h.hdfs.protocol

2013-07-17 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-5005:
---

 Summary: Move SnapshotException and SnapshotAccessControlException 
to o.a.h.hdfs.protocol
 Key: HDFS-5005
 URL: https://issues.apache.org/jira/browse/HDFS-5005
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Jing Zhao
Assignee: Jing Zhao


We should move the definition of these two exceptions to the protocol package 
since they can be directly passed to clients.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Re: [VOTE] Release Apache Hadoop 2.1.0-beta

2013-07-17 Thread Alejandro Abdelnur
Vinod,

Thanks for reviving this thread.

The current blockers are:

https://issues.apache.org/jira/issues/?jql=project%20in%20(hadoop%2C%20mapreduce%2C%20hdfs%2C%20yarn)%20and%20status%20in%20(open%2C%20'patch%20available')%20and%20priority%20%3D%20blocker%20and%20%22Target%20Version%2Fs%22%20%3D%20%222.1.0-beta%22

By looking at them I don't see that they are necessarily blockers for a beta
release.

* HADOOP-9688 & HADOOP-9698

  They definitely have to be addressed before a GA
  release.

* YARN-701

  It should be addressed before a GA release.

  Still, as it is this breaks unmanaged AMs and to me
  that would be a blocker for the beta.

  YARN-701 and the unmanaged AMs fix should be committed
  in tandem.

* YARN-918

  It is a consequence of YARN-701 and depends on it.

* YARN-926

  It would be nice to have it addressed before GA release.


We could do a beta with what we have at the moment in branch-2 and have a
special release note indicating API changes coming in the next beta/GA
release as part of YARN-918 & YARN-926.

IMO, we should move forward with the beta release with the current state.
Else we'll continue delaying it and adding more things that break/change
things.

Thanks.

Alejandro


On Wed, Jul 17, 2013 at 12:24 PM, Vinod Kumar Vavilapalli 
vino...@hortonworks.com wrote:


 Looks like this RC has gone stale and lots of bug fixes went into 2.1 and
 2.1.0 branches and there are 4-5 outstanding blockers. And from what I see
  in the CHANGES.txt files there seems to be confusion about which fixes should
  go into which branch.

 I'm blowing off the current 2.1.0 release branch so that we can create a
 fresh release branch and call voting on that. I'll fix CHANGES.txt entries
 as well as JIRA fix version for bugs committed recently if there are
 inconsistencies.

 Let me know if something is amiss while I do this.

 Thanks,
 +Vinod

 On Jul 3, 2013, at 11:06 AM, Vinod Kumar Vavilapalli wrote:

 
  We should get these in, looking at them now.
 
  Thanks,
  +Vinod
 
  On Jun 28, 2013, at 12:03 PM, Hitesh Shah wrote:
 
  Hi Arun,
 
  From a YARN perspective, YARN-791 and YARN-727 are 2 jiras that may
 potentially change the apis. They can be implemented in a backward compat
 fashion if committed after 2.1.0. However, this will require adding of
 differently-named apis ( different urls in case of the webservices ) and
 make the current version of the api deprecated and/or obsolete. YARN-818
 which is currently patch available also changes behavior.
 
  Assuming that as soon as 2.1.0 is released, we are to follow a very
 strict backward-compat retaining approach to all user-facing layers  (
 api/webservices/rpc/... ) in common/hdfs/yarn/mapreduce, does it make sense
 to try and pull them in and roll out a new RC after they are ready? Perhaps
 Vinod can chime in if he is aware of any other such jiras under YARN-386
 which should be considered compat-related blockers for a 2.1.0 RC.
 
  thanks
  -- Hitesh
 
  On Jun 26, 2013, at 1:17 AM, Arun C Murthy wrote:
 
  Folks,
 
  I've created a release candidate (rc0) for hadoop-2.1.0-beta that I
 would like to get released.
 
  This release represents a *huge* amount of work done by the community
 (639 fixes) which includes several major advances including:
  # HDFS Snapshots
  # Windows support
  # YARN API stabilization
  # MapReduce Binary Compatibility with hadoop-1.x
  # Substantial amount of integration testing with rest of projects in
 the ecosystem
 
  The RC is available at:
 http://people.apache.org/~acmurthy/hadoop-2.1.0-beta-rc0/
  The RC tag in svn is here:
 http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.1.0-beta-rc0
 
  The maven artifacts are available via repository.apache.org.
 
  Please try the release and vote; the vote will run for the usual 7
 days.
 
  thanks,
  Arun
 
  --
  Arun C. Murthy
  Hortonworks Inc.
  http://hortonworks.com/
 
 
 
 




-- 
Alejandro


I'm interested in working with HDFS-4680. Can somebody be a mentor?

2013-07-17 Thread Sreejith Ramakrishnan
Hey,

I was originally researching options to work on ACCUMULO-1197. Basically,
it was a bid to pass trace functionality through the DFSClient. I discussed
with the guys over there on implementing a Google Dapper-style trace with
HTrace. The guys at HBase are also trying to achieve the same HTrace
integration [HBASE-6449]

But, that meant adding stuff to the RPC in HDFS. For a start, we've to add
a 64-bit span-id to every RPC with tracing enabled. There's some more in
the original Dapper paper and HTrace documentation.
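
To make the span-id idea concrete, here is an illustrative sketch of a
Dapper-style trace context (class and field names are made up; this is not the
HDFS RPC or HTrace API):

import java.util.concurrent.ThreadLocalRandom;

public final class TraceSpan {
  public final long traceId;  // identifies the whole end-to-end trace
  public final long spanId;   // 64-bit id of this RPC span
  public final long parentId; // span id of the caller, 0 for a root span

  public TraceSpan(long traceId, long spanId, long parentId) {
    this.traceId = traceId;
    this.spanId = spanId;
    this.parentId = parentId;
  }

  // a downstream RPC keeps the trace id and records this span as its parent
  public TraceSpan child() {
    return new TraceSpan(traceId, ThreadLocalRandom.current().nextLong(), spanId);
  }
}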

I was told by the Accumulo people to talk with and seek help from the
experts at HDFS. I'm open to suggestions.

Additionally, I'm participating in a Joint Mentoring Programme by Apache
which is quite similar to GSoC. Luciano Resende (Community Development,
Apache) is in charge of the programme. I'll attach a link. The last date is
19th July. So, I'm pretty tensed without any mentors :(

[1] https://issues.apache.org/jira/browse/ACCUMULO-1197
[2] https://issues.apache.org/jira/browse/HDFS-4680
[3] https://github.com/cloudera/htrace
[4] http://community.apache.org/mentoringprogramme-icfoss-pilot.html
[5] https://issues.apache.org/jira/browse/HBASE-6449

Thank you,
Sreejith R


Re: I'm interested in working with HDFS-4680. Can somebody be a mentor?

2013-07-17 Thread Colin McCabe
Andrew Wang has been working on getting this kind of Dapper-style
trace functionality in HDFS.  He is on vacation this week, but next
week he might have some ideas about how you could contribute and/or
integrate with his patch.  Doing this right with security, etc is a
pretty big project and I think he wanted to do it incrementally.

best,
Colin McCabe


On Wed, Jul 17, 2013 at 1:44 PM, Sreejith Ramakrishnan
sreejith.c...@gmail.com wrote:
 Hey,

 I was originally researching options to work on ACCUMULO-1197. Basically,
 it was a bid to pass trace functionality through the DFSClient. I discussed
 with the guys over there on implementing a Google Dapper-style trace with
 HTrace. The guys at HBase are also trying to achieve the same HTrace
 integration [HBASE-6449]

 But, that meant adding stuff to the RPC in HDFS. For a start, we've to add
 a 64-bit span-id to every RPC with tracing enabled. There's some more in
 the original Dapper paper and HTrace documentation.

 I was told by the Accumulo people to talk with and seek help from the
 experts at HDFS. I'm open to suggestions.

 Additionally, I'm participating in a Joint Mentoring Programme by Apache
 which is quite similar to GSoC. Luciano Resende (Community Development,
 Apache) is in charge of the programme. I'll attach a link. The last date is
 19th July. So, I'm pretty tensed without any mentors :(

 [1] https://issues.apache.org/jira/browse/ACCUMULO-1197
 [2] https://issues.apache.org/jira/browse/HDFS-4680
 [3] https://github.com/cloudera/htrace
 [4] http://community.apache.org/mentoringprogramme-icfoss-pilot.html
 [5] https://issues.apache.org/jira/browse/HBASE-6449

 Thank you,
 Sreejith R


Re: I'm interested in working with HDFS-4680. Can somebody be a mentor?

2013-07-17 Thread Stack
Folks over at HBase would be interested in helping out.

What does a mentor have to do?  I poked around the icfoss link but didn't
see a list of duties (I've been known to be certified blind on occasion).

I am not up on the malleability of hdfs RPC; is it just a matter of adding
the trace info to a pb header record or would it require more (Sanjay was
saying something recently off-list that trace id is imminent -- but I've
not done the digging)?

St.Ack


On Wed, Jul 17, 2013 at 1:44 PM, Sreejith Ramakrishnan 
sreejith.c...@gmail.com wrote:

 Hey,

 I was originally researching options to work on ACCUMULO-1197. Basically,
 it was a bid to pass trace functionality through the DFSClient. I discussed
 with the guys over there on implementing a Google Dapper-style trace with
 HTrace. The guys at HBase are also trying to achieve the same HTrace
 integration [HBASE-6449]

 But, that meant adding stuff to the RPC in HDFS. For a start, we've to add
 a 64-bit span-id to every RPC with tracing enabled. There's some more in
 the original Dapper paper and HTrace documentation.

 I was told by the Accumulo people to talk with and seek help from the
 experts at HDFS. I'm open to suggestions.

 Additionally, I'm participating in a Joint Mentoring Programme by Apache
 which is quite similar to GSoC. Luciano Resende (Community Development,
  Apache) is in charge of the programme. I'll attach a link. The last date is
 19th July. So, I'm pretty tensed without any mentors :(

 [1] https://issues.apache.org/jira/browse/ACCUMULO-1197
 [2] https://issues.apache.org/jira/browse/HDFS-4680
 [3] https://github.com/cloudera/htrace
 [4] http://community.apache.org/mentoringprogramme-icfoss-pilot.html
 [5] https://issues.apache.org/jira/browse/HBASE-6449

 Thank you,
 Sreejith R



Re: [VOTE] Release Apache Hadoop 2.1.0-beta

2013-07-17 Thread Vinod Kumar Vavilapalli

On Jul 17, 2013, at 1:04 PM, Alejandro Abdelnur wrote:

 * YARN-701
 
 It should be addressed before a GA release.
 
 Still, as it is this breaks unmanaged AMs and to me
 that would be a blocker for the beta.
 
 YARN-701 and the unmanaged AMs fix should be committed
 in tandem.
 
 * YARN-918
 
 It is a consequence of YARN-701 and depends on it.



YARN-918 is an API change. And YARN-701 is a behaviour change. We need both in 
2.1.0.



 * YARN-926
 
 It would be nice to have it addressed before GA release.


Either way. I'd get it in sooner rather than later, specifically since we are trying to 
replace the old API with the new one.

Thanks,
+Vino



Re: I'm interested in working with HDFS-4680. Can somebody be a mentor?

2013-07-17 Thread Todd Lipcon
I'm happy to help with this as well. I actually have a prototype patch that
I built during a hackathon a few months ago, and was able to get a full
stack trace including Client, NN, and DN. I'm on vacation this week but
will try to post my prototype upstream when I get back. Feel free to ping
me on this if I slack :)

-Todd

On Wed, Jul 17, 2013 at 4:12 PM, Stack st...@duboce.net wrote:

 Folks over at HBase would be interested in helping out.

 What does a mentor have to do?  I poked around the icfoss link but didn't
  see a list of duties (I've been known to be certified blind on occasion).

 I am not up on the malleability of hdfs RPC; is it just a matter of adding
 the trace info to a pb header record or would it require more (Sanjay was
 saying something recently off-list that trace id is imminent -- but I've
 not done the digging)?

 St.Ack


 On Wed, Jul 17, 2013 at 1:44 PM, Sreejith Ramakrishnan 
 sreejith.c...@gmail.com wrote:

  Hey,
 
  I was originally researching options to work on ACCUMULO-1197. Basically,
  it was a bid to pass trace functionality through the DFSClient. I
 discussed
  with the guys over there on implementing a Google Dapper-style trace with
  HTrace. The guys at HBase are also trying to achieve the same HTrace
  integration [HBASE-6449]
 
  But, that meant adding stuff to the RPC in HDFS. For a start, we've to
 add
  a 64-bit span-id to every RPC with tracing enabled. There's some more in
  the original Dapper paper and HTrace documentation.
 
  I was told by the Accumulo people to talk with and seek help from the
  experts at HDFS. I'm open to suggestions.
 
  Additionally, I'm participating in a Joint Mentoring Programme by Apache
  which is quite similar to GSoC. Luciano Resende (Community Development,
   Apache) is in charge of the programme. I'll attach a link. The last date
 is
  19th July. So, I'm pretty tensed without any mentors :(
 
  [1] https://issues.apache.org/jira/browse/ACCUMULO-1197
  [2] https://issues.apache.org/jira/browse/HDFS-4680
  [3] https://github.com/cloudera/htrace
  [4] http://community.apache.org/mentoringprogramme-icfoss-pilot.html
  [5] https://issues.apache.org/jira/browse/HBASE-6449
 
  Thank you,
  Sreejith R
 




-- 
Todd Lipcon
Software Engineer, Cloudera


Re: I'm interested in working with HDFS-4680. Can somebody be a mentor?

2013-07-17 Thread Suresh Srinivas
Please look at some of the work happening in HADOOP-9688, which is adding a
unique UUID (16 bytes) for each RPC request. This is common to all Hadoop
RPC, will be available in HDFS, YARN and MAPREDUCE. Please see the jira for
more details. Reach out to me if you have any questions.
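
As a rough illustration of the idea (not the actual HADOOP-9688 API), a 16-byte
per-client identifier can be derived from a random UUID:

import java.nio.ByteBuffer;
import java.util.UUID;

public final class RpcClientIdExample {
  // returns a 16-byte identifier suitable for stamping RPC requests
  public static byte[] newClientId() {
    UUID uuid = UUID.randomUUID();
    return ByteBuffer.allocate(16)
        .putLong(uuid.getMostSignificantBits())
        .putLong(uuid.getLeastSignificantBits())
        .array();
  }

  public static void main(String[] args) {
    System.out.println("client id length = " + newClientId().length); // prints 16
  }
}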


On Wed, Jul 17, 2013 at 1:44 PM, Sreejith Ramakrishnan 
sreejith.c...@gmail.com wrote:

 Hey,

 I was originally researching options to work on ACCUMULO-1197. Basically,
 it was a bid to pass trace functionality through the DFSClient. I discussed
 with the guys over there on implementing a Google Dapper-style trace with
 HTrace. The guys at HBase are also trying to achieve the same HTrace
 integration [HBASE-6449]

 But, that meant adding stuff to the RPC in HDFS. For a start, we've to add
 a 64-bit span-id to every RPC with tracing enabled. There's some more in
 the original Dapper paper and HTrace documentation.

 I was told by the Accumulo people to talk with and seek help from the
 experts at HDFS. I'm open to suggestions.

 Additionally, I'm participating in a Joint Mentoring Programme by Apache
 which is quite similar to GSoC. Luciano Resende (Community Development,
  Apache) is in charge of the programme. I'll attach a link. The last date is
 19th July. So, I'm pretty tensed without any mentors :(

 [1] https://issues.apache.org/jira/browse/ACCUMULO-1197
 [2] https://issues.apache.org/jira/browse/HDFS-4680
 [3] https://github.com/cloudera/htrace
 [4] http://community.apache.org/mentoringprogramme-icfoss-pilot.html
 [5] https://issues.apache.org/jira/browse/HBASE-6449

 Thank you,
 Sreejith R




-- 
http://hortonworks.com/download/


[jira] [Created] (HDFS-5006) Provide a link to symlink target in the web UI

2013-07-17 Thread Stephen Chu (JIRA)
Stephen Chu created HDFS-5006:
-

 Summary: Provide a link to symlink target in the web UI
 Key: HDFS-5006
 URL: https://issues.apache.org/jira/browse/HDFS-5006
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Stephen Chu
Priority: Minor
 Attachments: screenshot1.png, screenshot2.png

Currently, it's difficult to see what is a symlink from the web UI.

I've attached two screenshots. In screenshot 1, we see the symlink 
_/user/schu/tf2-link_ which has a target _/user/schu/dir1/tf2_.

If we click the tf2-link URL, we arrive at a page that shows the path of the 
target (screenshot 2).

It'd be useful if there was a link to this target web UI page and some way of 
easily discerning if a file is a symlink.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HDFS-5007) the property keys dfs.http.port and dfs.https.port are hard-coded

2013-07-17 Thread Kousuke Saruta (JIRA)
Kousuke Saruta created HDFS-5007:


 Summary: the property keys dfs.http.port and dfs.https.port are 
hard-coded
 Key: HDFS-5007
 URL: https://issues.apache.org/jira/browse/HDFS-5007
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0
Reporter: Kousuke Saruta
 Fix For: 3.0.0


In TestHftpFileSystem.java and NameNodeHttpServer.java, the two property keys 
dfs.http.port and dfs.https.port are hard-coded.
Now, the constant values DFS_NAMENODE_HTTP_PORT_KEY and 
DFS_NAMENODE_HTTPS_KEY in DFSConfigKeys are available, so I think we should 
replace those for maintainability.
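
For illustration, the kind of change being proposed (a sketch that assumes the
constant cited above exists in DFSConfigKeys under that exact name):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.DFSConfigKeys;

public class HttpPortLookup {
  // before: conf.getInt("dfs.http.port", fallback) with the key as a string literal
  // after: reference the shared constant from DFSConfigKeys instead
  public static int httpPort(Configuration conf, int fallback) {
    return conf.getInt(DFSConfigKeys.DFS_NAMENODE_HTTP_PORT_KEY, fallback);
  }
}
{code}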

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


RE: mvn eclipse:eclipse failure on windows

2013-07-17 Thread Uma Maheswara Rao G
Hi Chris,

mvn test works fine for me via the command line,
but I am trying it from Eclipse.

It looks like two things are missing in Eclipse: 1. /hadoop-common/target/bin is 
not on the library path (I just checked that NativeLoader is not finding 
hadoop.dll), and 2. HADOOP_HOME_DIR is not set.
After setting these two things, everything started working for me. I am not sure 
if there is something I missed to make this automatic. Before the Windows merge I 
did not have such dependencies.

Regards,
Uma 

-Original Message-
From: Chris Nauroth [mailto:cnaur...@hortonworks.com] 
Sent: 17 July 2013 05:13
To: hdfs-dev@hadoop.apache.org
Cc: common-...@hadoop.apache.org
Subject: Re: mvn eclipse:eclipse failure on windows

Loading hadoop.dll in tests is supposed to work via a common shared 
maven-surefire-plugin configuration that sets the PATH environment variable to 
include the build location of the dll:

https://github.com/apache/hadoop-common/blob/trunk/hadoop-project/pom.xml#L894

(On Windows, the shared library path is controlled with PATH instead of 
LD_LIBRARY_PATH on Linux.)

This configuration has been working fine in all of the dev environments I've 
seen, but I'm wondering if something different is happening in your 
environment.  Does your hadoop.dll show up in 
hadoop-common-project/hadoop-common/target/bin?  Is there anything else that 
looks unique in your environment?

Also, another potential gotcha is the Windows max path length limitation of
260 characters.  Deeply nested project structures like Hadoop can cause very 
long paths for the built artifacts, and you might not be able to load the files 
if the full path exceeds 260 characters.  The workaround for now is to keep the 
codebase in a very short root folder.  (I use C:\hdc .)

Chris Nauroth
Hortonworks
http://hortonworks.com/



On Mon, Jul 15, 2013 at 1:07 PM, Chuan Liu chuan...@microsoft.com wrote:

 Hi Uma,

 I suggest you do a 'mvn install -DskipTests' before running 'mvn 
 eclipse:eclipse'.

 Thanks,
 Chuan

 -Original Message-
 From: Uma Maheswara Rao G [mailto:hadoop@gmail.com]
 Sent: Friday, July 12, 2013 7:42 PM
 To: common-...@hadoop.apache.org
 Cc: hdfs-dev@hadoop.apache.org
 Subject: Re: mvn eclipse:eclipse failure on windows

 HI Chris,
   eclipse:eclipse works, but I am still seeing UnsatisfiedLinkError.
 Explicitly I pointed java.library.path to where hadoop.dll is generated. This
 dll is generated with my clean install command only.   My PC is 64-bit and I
 also set Platform=x64 while building. But it does not help.

 Regards,
 Uma






 On Fri, Jul 12, 2013 at 11:45 PM, Chris Nauroth 
 cnaur...@hortonworks.com
 wrote:

  Hi Uma,
 
  I just tried getting a fresh copy of trunk and running mvn clean 
  install -DskipTests followed by mvn eclipse:eclipse -DskipTests.
  Everything worked fine in my environment.  Are you still seeing the
 problem?
 
  The UnsatisfiedLinkError seems to indicate that your build couldn't 
  access hadoop.dll for JNI method implementations.  hadoop.dll gets 
  built as part of the hadoop-common sub-module.  Is it possible that 
  you didn't have a complete package build for that sub-module before 
  you started running the HDFS test?
 
  Chris Nauroth
  Hortonworks
  http://hortonworks.com/
 
 
 
  On Sun, Jul 7, 2013 at 9:08 AM, sure bhands sure.bha...@gmail.com
 wrote:
 
   I would try cleaning hadoop-maven-plugin directory from maven 
   repository
  to
   rule out the stale version and then mvn install followed by mvn 
   eclipse:eclipse before digging in to it further.
  
   Thanks,
   Surendra
  
  
   On Sun, Jul 7, 2013 at 8:28 AM, Uma Maheswara Rao G 
  hadoop@gmail.com
   wrote:
  
Hi,
   
I am seeing this failure on windows while executing mvn 
eclipse:eclipse command on trunk.
   
See the following trace:
   
 [INFO] ------------------------------------------------------------------------
 [ERROR] Failed to execute goal
 org.apache.maven.plugins:maven-eclipse-plugin:2.8:eclipse (default-cli) on
 project hadoop-common: Request to merge when 'filtering' is not identical.
 Original=resource src/main/resources: output=target/classes, include=[],
 exclude=[common-version-info.properties|**/*.java], test=false,
 filtering=false, merging with=resource src/main/resources:
 output=target/classes, include=[common-version-info.properties],
 exclude=[**/*.java], test=false, filtering=true -> [Help 1]
 [ERROR]
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR]
 [ERROR] For more information about the errors and possible solutions, please
 read the following articles:
 [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
 [ERROR]
 [ERROR] After correcting the problems, you can resume the build with the command
   

[jira] [Resolved] (HDFS-5001) Branch-1-Win TestAzureBlockPlacementPolicy and TestReplicationPolicyWithNodeGroup failed caused by 1) old APIs and 2) incorrect value of depthOfAllLeaves

2013-07-17 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-5001.
-

Resolution: Fixed

 Branch-1-Win TestAzureBlockPlacementPolicy and 
 TestReplicationPolicyWithNodeGroup failed caused by 1) old APIs and 2) 
 incorrect value of depthOfAllLeaves
 -

 Key: HDFS-5001
 URL: https://issues.apache.org/jira/browse/HDFS-5001
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 1-win
Reporter: Xi Fang
Assignee: Xi Fang
 Fix For: 1-win

 Attachments: HDFS-5001.patch


 After the backport patch of HDFS-4975 was committed, 
 TestAzureBlockPlacementPolicy and TestReplicationPolicyWithNodeGroup failed. 
 The cause for the failure of TestReplicationPolicyWithNodeGroup is that some 
 part of the HDFS-3941 patch is missing. Our patch for HADOOP-495 causes 
 methods in the superclass to be called incorrectly. More specifically, HDFS-4975 
 backported HDFS-4350, HDFS-4351, and HDFS-3912 to enable the method parameter 
 boolean avoidStaleNodes, and updated the APIs in 
 BlockPlacementPolicyDefault. However, the override methods in 
 ReplicationPolicyWithNodeGroup weren't updated.
 The cause for the failure of TestAzureBlockPlacementPolicy is similar.
 In addition, TestAzureBlockPlacementPolicy has an error. Here is the error 
 info.
 Testcase: testPolicyWithDefaultRacks took 0.005 sec
 Caused an ERROR
 Invalid network topology. You cannot have a rack and a non-rack node at the 
 same level of the network topology.
 org.apache.hadoop.net.NetworkTopology$InvalidTopologyException: Invalid 
 network topology. You cannot have a rack and a non-rack node at the same 
 level of the network topology.
 at org.apache.hadoop.net.NetworkTopology.add(NetworkTopology.java:396)
 at 
 org.apache.hadoop.hdfs.server.namenode.TestAzureBlockPlacementPolicy.testPolicyWithDefaultRacks(TestAzureBlockPlacementPolicy.java:779)
 The error is caused by a check in NetworkTopology#add(Node node)
 {code}
 if (depthOfAllLeaves != node.getLevel()) {
   LOG.error("Error: can't add leaf node at depth " +
   node.getLevel() + " to topology:\n" + oldTopoStr);
   throw new InvalidTopologyException("Invalid network topology. " +
   "You cannot have a rack and a non-rack node at the same " +
   "level of the network topology.");
 }
 {code}
 The problem of this check is that when we use NetworkTopology#remove(Node 
 node) to remove a node from the cluster, depthOfAllLeaves won't change. As a 
 result, we can't reset the value of NetworkTopology#depthOfAllLeaves of the 
 old topology of a cluster by just removing all its DataNodes. See 
 TestAzureBlockPlacementPolicy#testPolicyWithDefaultRacks()
 {code}
 // clear the old topology
 for (Node node : dataNodes) {
   cluster.remove(node);
 }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira