[jira] [Resolved] (HDFS-2841) HA: HAAdmin does not work if security is enabled

2012-01-29 Thread Aaron T. Myers (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers resolved HDFS-2841.
----------------------------------

   Resolution: Fixed
Fix Version/s: HA branch (HDFS-1623)
 Hadoop Flags: Reviewed

Thanks a lot for the reviews, Uma and Todd. I've just committed this to the 
branch and addressed Uma's nit on commit.

> HA: HAAdmin does not work if security is enabled
> ------------------------------------------------
>
> Key: HDFS-2841
> URL: https://issues.apache.org/jira/browse/HDFS-2841
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: ha
>Affects Versions: HA branch (HDFS-1623)
>Reporter: Aaron T. Myers
>Assignee: Aaron T. Myers
>Priority: Critical
> Fix For: HA branch (HDFS-1623)
>
> Attachments: HDFS-2841-HDFS-1623.patch
>
>
> Though HAServiceProtocol is indeed annotated with the {{@KerberosInfo}} 
> annotation, it specifies the generic Hadoop Common server principal, which is 
> intended to be overridden at run-time with the value of the correct principal 
> (e.g., as is done in MRAdmin and DFSAdmin). We need to do something similar 
> for HAAdmin.
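
A minimal sketch of that override pattern, assuming the same configuration keys 
DFSAdmin uses (CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY and 
DFSConfigKeys.DFS_NAMENODE_USER_NAME_KEY); the helper class name here is 
hypothetical, not the committed code:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeys;
import org.apache.hadoop.hdfs.DFSConfigKeys;

// Hypothetical helper: before creating the RPC proxy, point the generic
// config key consulted via @KerberosInfo at the NameNode's actual
// principal, the same way DFSAdmin does for its refresh commands.
public class HAAdminPrincipalOverride {
  static Configuration withNameNodePrincipal(Configuration conf) {
    conf.set(CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY,
             conf.get(DFSConfigKeys.DFS_NAMENODE_USER_NAME_KEY, ""));
    return conf;
  }
}
{code}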

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HDFS-74) dfs.du.reserved not honored in 0.15/16 (regression from 0.14+patch for 2549)

2012-01-29 Thread Suresh Srinivas (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-74?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HDFS-74.
---------------------------------

Resolution: Won't Fix

There have since been quite a few changes in how this is tracked and reported. 
Please reopen the bug if this issue still needs to be fixed.

> dfs.du.reserved not honored in 0.15/16 (regression from 0.14+patch for 2549)
> -----------------------------------------------------------------------------
>
> Key: HDFS-74
> URL: https://issues.apache.org/jira/browse/HDFS-74
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Joydeep Sen Sarma
>Priority: Critical
>
> The changes for https://issues.apache.org/jira/browse/HADOOP-1463
> have caused a regression. Earlier:
> - we could set dfs.du.reserved to 1G and be *sure* that 1G would not be used.
> Now this is no longer true. I am quoting Pete Wyckoff's example:
> 
> Let's look at an example: a 100 GB disk, with /usr using 45 GB and DFS using 
> 50 GB.
> df -kh shows:
> Capacity = 100 GB
> Available = 1 GB (remember, ~4 GB is chopped out for metadata and such)
> Used = 95 GB
> remaining = 100 GB - 50 GB - 1 GB = 49 GB
> min(remaining, available) = 1 GB
> 98% of which is apparently usable for DFS.
> So we're at the limit, but are still free to use 98% of the remaining 1 GB.
> 
> this is broken. Based on the discussion on 1463, it seems the notion of 
> 'capacity' being the first field of 'df' is problematic. For example, 
> here's what our df output looks like:
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/sda3       130G  123G   49M 100% /
> As you can see, 'Size' is a misnomer: that much space is not available. 
> Rather, the actual usable space is 123G + 49M ~ 123G. (I'm not entirely sure 
> what the discrepancy is due to, but have heard it may be space reserved for 
> file system metadata.) Because of this discrepancy, we end up in a situation 
> where the file system is out of space.
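
For illustration, the arithmetic quoted above works out as follows; this is a 
hypothetical sketch of the min(remaining, available) calculation, not the 
actual FSDataset code:

{code:java}
// Hypothetical illustration of the arithmetic in Pete's example; not
// the actual FSDataset code, just the min(remaining, available)
// calculation that lets DFS eat into the reserve.
public class DuReservedExample {
  public static void main(String[] args) {
    final long GB = 1L << 30;
    long capacity  = 100 * GB; // df "Size"
    long dfsUsed   =  50 * GB; // space DFS is already using
    long reserved  =   1 * GB; // dfs.du.reserved
    long available =   1 * GB; // df "Avail" once non-DFS usage is counted

    long remaining = capacity - dfsUsed - reserved;  // 49 GB
    long usable    = Math.min(remaining, available); // 1 GB, not 0
    // usable is still 1 GB, so DFS keeps writing into the very space
    // the operator tried to reserve -- the regression described above.
    System.out.println("usable bytes for DFS: " + usable);
  }
}
{code}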





[jira] [Created] (HDFS-2856) Fix block protocol so that Datanodes don't require root or jsvc

2012-01-29 Thread Owen O'Malley (Created) (JIRA)
Fix block protocol so that Datanodes don't require root or jsvc
---------------------------------------------------------------

 Key: HDFS-2856
 URL: https://issues.apache.org/jira/browse/HDFS-2856
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: contrib/raid, scripts
Reporter: Owen O'Malley


Since we send the block tokens unencrypted to the datanode, we currently start 
the datanode as root and get a secure (< 1024) port.

If we have the datanode generate a nonce and send it on the connection, and the 
client sends an HMAC of the nonce back instead of the block token, no secrets 
are revealed. Thus, we wouldn't require a secure port, and would not require 
root or jsvc.
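
A rough sketch of the proposed challenge-response, using the JDK's 
javax.crypto HMAC support; the class and method names are made up here, and a 
real design would still need replay protection and key handling worked out:

{code:java}
import java.security.MessageDigest;
import java.security.SecureRandom;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Illustrative sketch of the proposal, not the eventual implementation.
public class BlockTokenHandshake {

  // Datanode side: send a fresh random nonce on each new connection.
  static byte[] newNonce() {
    byte[] nonce = new byte[16];
    new SecureRandom().nextBytes(nonce);
    return nonce;
  }

  // Client side: return HMAC(secret, nonce) instead of the block token
  // itself, so the shared secret never crosses the wire in the clear.
  static byte[] respond(byte[] tokenSecret, byte[] nonce) throws Exception {
    Mac mac = Mac.getInstance("HmacSHA1");
    mac.init(new SecretKeySpec(tokenSecret, "HmacSHA1"));
    return mac.doFinal(nonce);
  }

  // Datanode side: recompute the HMAC and compare to authenticate.
  static boolean verify(byte[] tokenSecret, byte[] nonce, byte[] response)
      throws Exception {
    return MessageDigest.isEqual(respond(tokenSecret, nonce), response);
  }
}
{code}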





Hadoop-Hdfs-0.23-Build - Build # 153 - Still Unstable

2012-01-29 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/153/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 14413 lines...]

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-javadoc-plugin:2.7:jar (module-javadocs) @ hadoop-hdfs-project 
---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-install-plugin:2.3.1:install (default-install) @ 
hadoop-hdfs-project ---
[INFO] Installing 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-0.23-Build/trunk/hadoop-hdfs-project/pom.xml
 to 
/home/jenkins/.m2/repository/org/apache/hadoop/hadoop-hdfs-project/0.23.1-SNAPSHOT/hadoop-hdfs-project-0.23.1-SNAPSHOT.pom
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-javadoc-plugin:2.7:jar (module-javadocs) @ hadoop-hdfs-project 
---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ SUCCESS [5:36.278s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [42.387s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.058s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 6:19.169s
[INFO] Finished at: Sun Jan 29 11:42:09 UTC 2012
[INFO] Final Memory: 77M/755M
[INFO] 
+ /home/jenkins/tools/maven/latest/bin/mvn test 
-Dmaven.test.failure.ignore=true -Pclover 
-DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Publishing Clover coverage report...
Publishing Clover HTML report...
Publishing Clover XML report...
Publishing Clover coverage results...
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Publishing Javadoc
Recording fingerprints
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Unstable
Sending email for trigger: Unstable



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestFSInputChecker.testFSInputChecker

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
at junit.framework.Assert.fail(Assert.java:47)
at junit.framework.Assert.failNotEquals(Assert.java:283)
at junit.framework.Assert.assertEquals(Assert.java:64)
at junit.framework.Assert.assertEquals(Assert.java:130)
at junit.framework.Assert.assertEquals(Assert.java:136)
at 
org.apache.hadoop.hdfs.TestFSInputChecker.checkSkip(TestFSInputChecker.java:193)
at 
org.apache.hadoop.hdfs.TestFSInputChecker.testChecker(TestFSInputChecker.java:223)
at 
org.apache.hadoop.hdfs.TestFSInputChecker.__CLR3_0_27kn567zvz(TestFSInputChecker.java:316)
at 
org.apache.hadoop.hdfs.TestFSInputChecker.testFSInputChecker(TestFSInputChecker.java:293)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at ju

Jenkins build is still unstable: Hadoop-Hdfs-0.23-Build #153

2012-01-29 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/153/




Jenkins build is still unstable: Hadoop-Hdfs-trunk #940

2012-01-29 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/940/




Hadoop-Hdfs-trunk - Build # 940 - Still Unstable

2012-01-29 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/940/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 12519 lines...]
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-bkjournal ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is true
[INFO] ** FindBugsMojo executeFindbugs ***
[INFO] Temp File is 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal/target/findbugsTemp.xml
[INFO] Fork Value is true
[INFO] xmlOutput is false
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS Project 0.24.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.4.1:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-javadoc-plugin:2.7:jar (module-javadocs) @ hadoop-hdfs-project 
---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ SUCCESS [5:10.077s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [31.359s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [11.455s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.038s]
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 5:53.371s
[INFO] Finished at: Sun Jan 29 11:41:27 UTC 2012
[INFO] Final Memory: 87M/732M
[INFO] 
+ /home/jenkins/tools/maven/latest/bin/mvn test 
-Dmaven.test.failure.ignore=true -Pclover 
-DcloverLicenseLocation=/home/jenkins/tools/clover/latest/lib/clover.license
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
Publishing Javadoc
Recording fingerprints
Updating HDFS-2759
Updating HDFS-2801
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Unstable
Sending email for trigger: Unstable



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestFSInputChecker.testFSInputChecker

Error Message:
expected:<10> but was:<0>

Stack Trace:
junit.framework.AssertionFailedError: expected:<10> but was:<0>
at junit.framework.Assert.fail(Assert.java:47)
at junit.framework.Assert.failNotEquals(Assert.java:283)
at junit.framework.Assert.assertEquals(Assert.java:64)
at junit.framework.Assert.assertEquals(Assert.java:130)
at junit.framework.Assert.assertEquals(Assert.java:136)
at 
org.apache.hadoop.hdfs.TestFSInputChecker.checkSkip(TestFSInputChecker.java:194)
at 
org.apache.hadoop.hdfs.TestFSInputChecker.testChecker(TestFSInputChecker.java:224)
at 
org.apache.hadoop.hdfs.TestFSInputChecker.__CLR3_0_27kn56713yy(TestFSInputChecker.java:317)
at 
org.apache.hadoop.hdfs.TestFSInputChecker.testFSInputChecker(TestFSInputChecker.java:294)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.fr