[jira] [Created] (HDFS-5046) Hang when add/remove a datanode into/from a 2 datanode cluster

2013-07-31 Thread sam liu (JIRA)
sam liu created HDFS-5046:
-

 Summary: Hang when add/remove a datanode into/from a 2 datanode 
cluster
 Key: HDFS-5046
 URL: https://issues.apache.org/jira/browse/HDFS-5046
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 1.1.1
 Environment: Red Hat Enterprise Linux Server release 5.3, 64 bit
Reporter: sam liu


1. Install a Hadoop 1.1.1 cluster with 2 datanodes, dn1 and dn2, and set 
'dfs.replication' to 2 in hdfs-site.xml
2. Add node dn3 into the cluster as a new datanode, without changing the 
'dfs.replication' value in hdfs-site.xml (keep it at 2)
note: step 2 passed
3. Decommission dn3 from the cluster
Expected result: dn3 can be decommissioned successfully
Actual result:
a). The decommission progress hangs and the status is always 'Waiting DataNode 
status: Decommissioned'. But if I execute 'hadoop dfs -setrep -R 2 /', the 
decommission continues and eventually completes.
b). However, if the initial cluster includes >= 3 datanodes, this issue is not 
encountered when adding/removing another datanode. For example, if I set up a 
cluster with 3 datanodes, I can successfully add a 4th datanode to it, and then 
also successfully remove the 4th datanode from the cluster.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


Build failed in Jenkins: Hadoop-Hdfs-trunk #1477

2013-07-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1477/changes

Changes:

[brandonli] HDFS-5043. For HdfsFileStatus, set default value of childrenNum to 
-1 instead of 0 to avoid confusing applications. Contributed by Brandon Li

[vinodkv] YARN-966. Fixed ContainerLaunch to not fail quietly when there are no 
localized resources due to some other failure. Contributed by Zhijie Shen.

[vinodkv] YARN-948. Changed ResourceManager to validate the release container 
list before actually releasing them. Contributed by Omkar Vinit Joshi.

[vinodkv] MAPREDUCE-5385. Fixed a bug with JobContext getCacheFiles API. 
Contributed by Omkar Vinit Joshi.

[cnauroth] HADOOP-9768. Moving from 2.1.0-beta to 2.1.1-beta in CHANGES.txt, 
because this patch did not make it into the 2.1.0-beta RC.

[acmurthy] Updating releasenotes for hadoop-2.1.0-beta.

[acmurthy] Updating release date for hadoop-2.1.0-beta.

[acmurthy] Moved HADOOP-9509 and HADOOP-9515 to appropriate release of 2.1.0-beta.

--
[...truncated 15221 lines...]
[INFO] Excluding commons-collections:commons-collections:jar:3.2.1 from the 
shaded jar.
[INFO] Excluding commons-digester:commons-digester:jar:1.8 from the shaded jar.
[INFO] Excluding commons-beanutils:commons-beanutils:jar:1.7.0 from the shaded 
jar.
[INFO] Excluding commons-beanutils:commons-beanutils-core:jar:1.8.0 from the 
shaded jar.
[INFO] Excluding org.slf4j:slf4j-api:jar:1.6.1 from the shaded jar.
[INFO] Excluding org.slf4j:slf4j-log4j12:jar:1.6.1 from the shaded jar.
[INFO] Including org.apache.bookkeeper:bookkeeper-server:jar:4.0.0 in the 
shaded jar.
[INFO] Including org.jboss.netty:netty:jar:3.2.4.Final in the shaded jar.
[INFO] Including org.apache.zookeeper:zookeeper:jar:3.4.2 in the shaded jar.
[INFO] Excluding jline:jline:jar:0.9.94 from the shaded jar.
[INFO] Excluding com.google.guava:guava:jar:11.0.2 from the shaded jar.
[INFO] Excluding com.google.code.findbugs:jsr305:jar:1.3.9 from the shaded jar.
[INFO] Replacing original artifact with shaded artifact.
[INFO] Replacing 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/target/hadoop-hdfs-bkjournal-3.0.0-SNAPSHOT.jar
 with 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/target/hadoop-hdfs-bkjournal-3.0.0-SNAPSHOT-shaded.jar
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-bkjournal ---
[INFO] 
[INFO] There are 321 checkstyle errors.
[WARNING] Unable to locate Source XRef to link to - DISABLED
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-bkjournal ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is true
[INFO] ** FindBugsMojo executeFindbugs ***
[INFO] Temp File is 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/target/findbugsTemp.xml
[INFO] Fork Value is true
[INFO] xmlOutput is false
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS-NFS 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-nfs ---
[INFO] Deleting 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-nfs ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/target/test-dir
[mkdir] Created dir: 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/target/test/data
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.2:resources (default-resources) @ 
hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:compile (default-compile) @ 
hadoop-hdfs-nfs ---
[INFO] Compiling 12 source files to 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/target/classes
[INFO] 
[INFO] --- maven-resources-plugin:2.2:testResources (default-testResources) @ 
hadoop-hdfs-nfs ---
[INFO] Using default encoding to copy filtered resources.
[INFO] 
[INFO] --- maven-compiler-plugin:2.5.1:testCompile (default-testCompile) @ 
hadoop-hdfs-nfs ---
[INFO] Compiling 7 source files to 
https://builds.apache.org/job/Hadoop-Hdfs-trunk/ws/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/target/test-classes
[INFO] 
[INFO] --- maven-surefire-plugin:2.12.3:test (default-test) @ hadoop-hdfs-nfs 
---
[INFO] Surefire report directory: 

Hadoop-Hdfs-trunk - Build # 1477 - Still Failing

2013-07-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1477/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 15414 lines...]
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ SUCCESS [1:40:36.365s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [2:29.304s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [1:20.708s]
[INFO] Apache Hadoop HDFS-NFS ............................ FAILURE [25.741s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.031s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:44:53.003s
[INFO] Finished at: Wed Jul 31 13:18:35 UTC 2013
[INFO] Final Memory: 49M/796M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-checkstyle-plugin:2.6:checkstyle (default-cli) 
on project hadoop-hdfs-nfs: An error has occurred in Checkstyle report 
generation. Failed during checkstyle execution: Unable to find configuration 
file at location 
file:///home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/dev-support/checkstyle.xml:
 Could not find resource 
'file:///home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs-nfs/dev-support/checkstyle.xml'.
 - [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn goals -rf :hadoop-hdfs-nfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Updating HDFS-5043
Updating YARN-966
Updating HADOOP-9768
Updating HADOOP-9515
Updating MAPREDUCE-5385
Updating YARN-948
Updating HADOOP-9509
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
No tests ran.

[jira] [Created] (HDFS-5047) Suppress logging of full stack trace of quota and lease exceptions

2013-07-31 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-5047:


 Summary: Suppress logging of full stack trace of quota and lease 
exceptions
 Key: HDFS-5047
 URL: https://issues.apache.org/jira/browse/HDFS-5047
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 0.23.9, 2.0.5-alpha
Reporter: Kihwal Lee


This is a follow-up to HDFS-4714, which added a number of request-level 
exceptions to the terse list of the namenode RPC server. I still see several 
exceptions causing full stack traces to be logged:

NSQuotaExceededException
DSQuotaExceededException
LeaseExpiredException
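For context, the "terse list" idea from HDFS-4714 can be sketched as follows. This is a simplified Python illustration of the concept only; the real mechanism lives in Hadoop's Java RPC server, and all names below are invented for the sketch:

```python
import logging
import traceback

# Simplified sketch of an RPC server's "terse exceptions" list (illustrative
# only, not Hadoop source). Exceptions registered as terse are logged as a
# one-line message; everything else keeps its full stack trace.
TERSE_EXCEPTIONS = set()

def add_terse_exceptions(*exception_classes):
    TERSE_EXCEPTIONS.update(exception_classes)

def log_rpc_exception(log, exc):
    if type(exc) in TERSE_EXCEPTIONS:
        # Request-level error the client caused: one line, no stack trace.
        log.info("%s: %s", type(exc).__name__, exc)
    else:
        # Unexpected server-side error: keep the full trace for debugging.
        log.info("unexpected error:\n%s", "".join(
            traceback.format_exception(type(exc), exc, exc.__traceback__)))

# The JIRA proposes registering the quota/lease exceptions as terse, e.g.:
class DSQuotaExceededException(Exception):
    pass

add_terse_exceptions(DSQuotaExceededException)
```

The point is that a quota or lease violation is a normal, client-caused condition, so its stack trace adds log volume without diagnostic value.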




[jira] [Resolved] (HDFS-5036) Namenode safemode is on and is misleading on webpage

2013-07-31 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5036?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HDFS-5036.


  Resolution: Invalid
Release Note: Marking as invalid and closing! Raghu, if you were able to 
reproduce this on Apache like Suresh asked, please feel free to re-open this 
JIRA.

 Namenode safemode is on and is misleading on webpage
 

 Key: HDFS-5036
 URL: https://issues.apache.org/jira/browse/HDFS-5036
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
 Environment: production, CDH4.3
Reporter: Raghu C Doppalapudi
Priority: Minor

 Even though the namenode is not in safemode, the namenode web UI shows that 
 safemode is on. When the safemode status is queried from the command line, it 
 reports that safe mode is off, but the web UI shows it is on, which is 
 confusing to users looking at the web UI.



RE: [VOTE] Release Apache Hadoop 2.1.0-beta

2013-07-31 Thread Bikas Saha
+1.

Using it locally for Tez.

Bikas

-Original Message-
From: Arun C Murthy [mailto:a...@hortonworks.com]
Sent: Tuesday, July 30, 2013 6:30 AM
To: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org;
yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
Subject: [VOTE] Release Apache Hadoop 2.1.0-beta

Folks,

I've created another release candidate (rc1) for hadoop-2.1.0-beta that I
would like to get released. This RC fixes a number of issues reported on
the previous candidate.

This release represents a *huge* amount of work done by the community
(~650 fixes) which includes several major advances including:
# HDFS Snapshots
# Windows support
# YARN API stabilization
# MapReduce Binary Compatibility with hadoop-1.x
# Substantial amount of integration testing with the rest of the projects in the ecosystem

The RC is available at:
http://people.apache.org/~acmurthy/hadoop-2.1.0-beta-rc1/
The RC tag in svn is here:
http://svn.apache.org/repos/asf/hadoop/common/tags/release-2.1.0-beta-rc1

The maven artifacts are available via repository.apache.org.

Please try the release and vote; the vote will run for the usual 7 days.

thanks,
Arun

--
Arun C. Murthy
Hortonworks Inc.
http://hortonworks.com/


[jira] [Resolved] (HDFS-5046) Hang when add/remove a datanode into/from a 2 datanode cluster

2013-07-31 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HDFS-5046.
---

Resolution: Not A Problem

bq. a). The decommission progress hangs and the status is always 'Waiting DataNode 
status: Decommissioned'. But if I execute 'hadoop dfs -setrep -R 2 /', the 
decommission continues and eventually completes.

Step (a) points to both your problem and its solution. You have files
created with repl=3 on a 2-DN cluster, which will prevent
decommission. This is not a bug.
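Harsh's point can be reduced to a simple invariant. As a rough sketch (a plain Python toy model, not HDFS code; the function name is made up for illustration): a block whose target replication exceeds the number of DataNodes remaining after decommissioning can never be fully re-replicated, so the decommission waits indefinitely.

```python
# Toy model of the invariant behind the hang (illustrative only, not HDFS
# source): a DataNode can finish decommissioning only when every block it
# holds can reach its target replication on the remaining live nodes.

def can_finish_decommission(block_replication, live_datanodes, decommissioning=1):
    """True if blocks with the given target replication can be fully
    re-replicated on the nodes left once decommissioning completes."""
    remaining = live_datanodes - decommissioning
    return remaining >= block_replication

# The reported scenario: some files were written with replication 3 (e.g. by
# a client whose dfs.replication still defaulted to 3), so with dn1, dn2 and
# dn3 up, decommissioning dn3 leaves 2 nodes for 3 required replicas: stuck.
assert not can_finish_decommission(block_replication=3, live_datanodes=3)

# After 'hadoop dfs -setrep -R 2 /' the target drops to 2 and dn3 can drain.
assert can_finish_decommission(block_replication=2, live_datanodes=3)
```

With 4 or more nodes the invariant holds even for repl=3 blocks, which matches the reporter's observation in (b) that clusters of 3+ datanodes add/remove a node without hanging.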

 Hang when add/remove a datanode into/from a 2 datanode cluster
 --

 Key: HDFS-5046
 URL: https://issues.apache.org/jira/browse/HDFS-5046
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 1.1.1
 Environment: Red Hat Enterprise Linux Server release 5.3, 64 bit
Reporter: sam liu

 1. Install a Hadoop 1.1.1 cluster with 2 datanodes, dn1 and dn2, and set 
 'dfs.replication' to 2 in hdfs-site.xml
 2. Add node dn3 into the cluster as a new datanode, without changing the 
 'dfs.replication' value in hdfs-site.xml (keep it at 2)
 note: step 2 passed
 3. Decommission dn3 from the cluster
 Expected result: dn3 can be decommissioned successfully
 Actual result:
 a). The decommission progress hangs and the status is always 'Waiting DataNode 
 status: Decommissioned'. But if I execute 'hadoop dfs -setrep -R 2 /', the 
 decommission continues and eventually completes.
 b). However, if the initial cluster includes >= 3 datanodes, this issue is not 
 encountered when adding/removing another datanode. For example, if I set up a 
 cluster with 3 datanodes, I can successfully add a 4th datanode to it, and 
 then also successfully remove the 4th datanode from the 
 cluster.



[jira] [Created] (HDFS-5048) FileSystem#globStatus and FileContext#globStatus need to work with symlinks

2013-07-31 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5048:
--

 Summary: FileSystem#globStatus and FileContext#globStatus need to 
work with symlinks
 Key: HDFS-5048
 URL: https://issues.apache.org/jira/browse/HDFS-5048
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


FileSystem#globStatus and FileContext#globStatus need to work with symlinks.  
Currently, they resolve all links, so that if you have:
{code}
/alpha/beta
/alphaLink -> alpha
{code}

and you take {{globStatus(/alphaLink/*)}}, you will get {{/alpha/beta}}, rather 
than the expected {{/alphaLink/beta}}.

We even resolve terminal symlinks, which would prevent listing a symlink in 
FSShell, for example.  Instead, we should build up the path incrementally.  
This will allow the shell to behave as expected, and also allow custom globbers 
to see the correct paths for symlinks.
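The difference between "resolve everything up front" and "build the path incrementally" can be shown with a toy glob over a symlink table. This is purely illustrative Python, not the FileSystem/FileContext implementation, and all names are invented for the sketch:

```python
import posixpath

# Toy filesystem: a directory listing plus a symlink table (illustrative only).
CHILDREN = {"/alpha": ["beta"]}
SYMLINKS = {"/alphaLink": "/alpha"}

def resolve(path):
    """Follow a symlink if the path is one (single level, for the sketch)."""
    return SYMLINKS.get(path, path)

def glob_resolving(pattern_dir):
    # Current behavior: resolve the link first, so results land under /alpha.
    real = resolve(pattern_dir)
    return [posixpath.join(real, child) for child in CHILDREN.get(real, [])]

def glob_incremental(pattern_dir):
    # Proposed behavior: list *through* the link but keep the path the user
    # actually typed, building each result component by component.
    real = resolve(pattern_dir)
    return [posixpath.join(pattern_dir, child) for child in CHILDREN.get(real, [])]

assert glob_resolving("/alphaLink") == ["/alpha/beta"]        # surprising
assert glob_incremental("/alphaLink") == ["/alphaLink/beta"]  # expected
```

The incremental variant is what lets a shell (or a custom globber) report paths in terms of the symlink the user named.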



[jira] [Created] (HDFS-5049) Add JNI mlock support

2013-07-31 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5049:
--

 Summary: Add JNI mlock support
 Key: HDFS-5049
 URL: https://issues.apache.org/jira/browse/HDFS-5049
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe


Add support for {{mlock}} and {{munlock}}, for use in caching.



[jira] [Created] (HDFS-5050) Add DataNode support for mlock and munlock

2013-07-31 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5050:
--

 Summary: Add DataNode support for mlock and munlock
 Key: HDFS-5050
 URL: https://issues.apache.org/jira/browse/HDFS-5050
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe


Add DataNode support for mlock and munlock.  The DataNodes should respond to 
RPCs telling them to mlock and munlock blocks.  Blocks should be uncached when 
the NameNode asks for them to be moved or deleted.  For now, we should cache 
only completed blocks.



[jira] [Created] (HDFS-5051) Add cache status information to the DataNode heartbeat

2013-07-31 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5051:
--

 Summary: Add cache status information to the DataNode heartbeat
 Key: HDFS-5051
 URL: https://issues.apache.org/jira/browse/HDFS-5051
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe


Add cache status information to the DataNode heartbeat.  This will inform the 
NameNode of the current cache status for each replica, which it should expose 
in {{getFileBlockLocations}}.



[jira] [Created] (HDFS-5052) Add cacheRequest/uncacheRequest support to DFSAdmin and NameNode

2013-07-31 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5052:
--

 Summary: Add cacheRequest/uncacheRequest support to DFSAdmin and 
NameNode
 Key: HDFS-5052
 URL: https://issues.apache.org/jira/browse/HDFS-5052
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe


Add cacheRequest/uncacheRequest/listCacheRequest support to DFSAdmin and 
NameNode.  Maintain a list of active CachingRequests on the NameNode.



[jira] [Created] (HDFS-5053) NameNode should invoke DataNode mlock APIs to coordinate caching

2013-07-31 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-5053:
--

 Summary: NameNode should invoke DataNode mlock APIs to coordinate 
caching
 Key: HDFS-5053
 URL: https://issues.apache.org/jira/browse/HDFS-5053
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Colin Patrick McCabe


The NameNode should invoke the DataNode mlock APIs to coordinate caching.



[jira] [Created] (HDFS-5054) PortmapInterface should check if the procedure is out-of-range

2013-07-31 Thread Brandon Li (JIRA)
Brandon Li created HDFS-5054:


 Summary: PortmapInterface should check if the procedure is 
out-of-range
 Key: HDFS-5054
 URL: https://issues.apache.org/jira/browse/HDFS-5054
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li






[jira] [Created] (HDFS-5055) nn-2nn ignore dfs.namenode.secondary.http-address

2013-07-31 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-5055:
--

 Summary: nn-2nn ignore dfs.namenode.secondary.http-address
 Key: HDFS-5055
 URL: https://issues.apache.org/jira/browse/HDFS-5055
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.1.0-beta
Reporter: Allen Wittenauer
Priority: Blocker


The primary namenode attempts to connect back to (incoming hostname):port 
regardless of how dfs.namenode.secondary.http-address is configured.
