[jira] [Created] (HADOOP-8500) Javadoc jars contain entire target directory

2012-06-08 Thread EJ Ciramella (JIRA)
EJ Ciramella created HADOOP-8500:


 Summary: Javadoc jars contain entire target directory
 Key: HADOOP-8500
 URL: https://issues.apache.org/jira/browse/HADOOP-8500
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
 Environment: N/A
Reporter: EJ Ciramella
Priority: Minor
 Fix For: 2.0.1-alpha


The javadoc jars contain the contents of the target directory - which includes 
classes and all sorts of binary files that they shouldn't.

Sometimes the resulting javadoc jar is 10X bigger than it should be.

The fix is to reconfigure Maven to use "api" as its destDir for javadoc 
generation.

I have a patch/diff incoming.
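
For illustration, a minimal sketch of the kind of configuration change I mean - 
the exact maven-javadoc-plugin stanza in my patch may differ:

{code}
<!-- Sketch: write generated javadoc into a dedicated "api" directory so the
     javadoc jar is built from that directory alone, not the whole target/
     tree with its classes and other binaries. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <destDir>api</destDir>
  </configuration>
</plugin>
{code}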

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8499) fix mvn compile -Pnative on CentOS / RHEL / Fedora / SuSE / etc

2012-06-08 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-8499:


 Summary: fix mvn compile -Pnative on CentOS / RHEL / Fedora / SuSE 
/ etc
 Key: HADOOP-8499
 URL: https://issues.apache.org/jira/browse/HADOOP-8499
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


On Linux platforms where user IDs start at 500 rather than 1000, the build is 
currently broken.  This includes CentOS, RHEL, Fedora, SuSE, and probably 
most other Linux platforms.  It does happen to work on Debian and Ubuntu, which 
explains why Jenkins hasn't caught it yet.

Other users will see something like this:

{code}
[INFO] Requested user cmccabe has id 500, which is below the minimum allowed 
1000
[INFO] FAIL: test-container-executor
[INFO] 
[INFO] 1 of 1 test failed
[INFO] Please report to mapreduce-...@hadoop.apache.org
[INFO] 
[INFO] make[1]: *** [check-TESTS] Error 1
[INFO] make[1]: Leaving directory 
`/home/cmccabe/hadoop4/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn
-server/hadoop-yarn-server-nodemanager/target/native/container-executor'
{code}

And then the build fails.  Since native unit tests are currently unskippable 
(HADOOP-8480), this makes the project unbuildable on these platforms.

The easy solution to this is to relax the constraint for the unit test.  Since 
the unit test already writes its own configuration file, we just need to change 
it there.

In general, I believe that it would make sense to change this to 500 across the 
board.  I'm not aware of any Linuxes that create system users with IDs higher 
than or equal to 500.  System user IDs tend to be below 200.

However, if we do nothing else, we should at least fix the build by relaxing 
the constraint for unit tests.
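
For illustration, here's a sketch of the kind of change I mean, assuming the 
test's generated config exposes the threshold via a min.user.id-style property 
(the property names below are illustrative, not necessarily what the test 
actually writes):

{code}
# Hypothetical sketch: when the unit test writes its own container-executor
# configuration, lower the minimum allowed UID from 1000 to 500 so developers
# on CentOS/RHEL/Fedora/SuSE (where UIDs start at 500) can run the test.
cat > container-executor.cfg <<EOF
yarn.nodemanager.linux-container-executor.group=hadoop
min.user.id=500
EOF
{code}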





Hadoop-Common-22-branch - Build # 107 - Failure

2012-06-08 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-22-branch/107/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 3771 lines...]
[javac] ^
[javac] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/src/java/org/apache/hadoop/security/SecurityUtil.java:134:
 warning: sun.security.jgss.krb5.Krb5Util is Sun proprietary API and may be 
removed in a future release
[javac] .add(Krb5Util.credsToTicket(serviceCred));
[javac]  ^
[javac] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/src/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java:202:
 warning: sun.security.jgss.GSSUtil is Sun proprietary API and may be removed 
in a future release
[javac] 
GSSUtil.NT_GSS_KRB5_PRINCIPAL);
[javac] ^
[javac] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/src/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java:203:
 warning: sun.security.jgss.GSSUtil is Sun proprietary API and may be removed 
in a future release
[javac] gssContext = gssManager.createContext(serviceName, 
GSSUtil.GSS_KRB5_MECH_OID, null,
[javac]^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 18 warnings
 [copy] Copying 1 file to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/build-fi/classes

ivy-resolve-test:

ivy-retrieve-test:

generate-test-records:

generate-avro-records:
Trying to override old definition of task schema

generate-avro-protocols:
Trying to override old definition of task schema

compile-core-test:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/build-fi/test/core/classes
[javac] Compiling 9 source files to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/build-fi/test/core/classes
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] Compiling 206 source files to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/build-fi/test/core/classes
[javac] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/src/test/core/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java:40:
 warning: sun.security.jgss.GSSUtil is Sun proprietary API and may be removed 
in a future release
[javac] import sun.security.jgss.GSSUtil;
[javac] ^
[javac] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/src/test/core/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java:131:
 warning: sun.security.jgss.GSSUtil is Sun proprietary API and may be removed 
in a future release
[javac]   GSSName serviceName = 
gssManager.createName(servicePrincipal, GSSUtil.NT_GSS_KRB5_PRINCIPAL);
[javac] 
^
[javac] 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/src/test/core/org/apache/hadoop/security/authentication/server/TestKerberosAuthenticationHandler.java:132:
 warning: sun.security.jgss.GSSUtil is Sun proprietary API and may be removed 
in a future release
[javac]   gssContext = gssManager.createContext(serviceName, 
GSSUtil.GSS_KRB5_MECH_OID, null,
[javac]  ^
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] 3 warnings
Trying to override old definition of task paranamer
[paranamer] Generating parameter names from 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/src/test/core
 to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Common-22-branch/trunk/build-fi/test/core/classes
Build timed out (after 30 minutes). Marking the build as failed.
[FINDBUGS] Skipping publisher since build result is FAILURE
[WARNINGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Recording fingerprints
Updating HADOOP-6995
Recording test results
Publishing Javadoc
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
All te

[jira] [Created] (HADOOP-8498) Hadoop-1.0.3 didn't publish sources.jar to maven

2012-06-08 Thread ryan rawson (JIRA)
ryan rawson created HADOOP-8498:
---

 Summary: Hadoop-1.0.3 didn't publish sources.jar to maven
 Key: HADOOP-8498
 URL: https://issues.apache.org/jira/browse/HADOOP-8498
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 1.0.3
Reporter: ryan rawson
Priority: Minor


On search.maven.org, only the JAR and POM for hadoop were published.  
sources.jar should also be published: it helps developers who are writing on 
top of hadoop by letting their IDE provide fully seamless and integrated 
source browsing (and javadoc) without taking extra steps.
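
For a Maven-based build, the standard way to attach a sources jar is the 
maven-source-plugin; a minimal sketch (the 1.x build may need to wire up the 
equivalent in its own publish step):

{code}
<!-- Sketch: build a -sources.jar during packaging so it gets deployed
     alongside the main JAR and POM. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-source-plugin</artifactId>
  <executions>
    <execution>
      <id>attach-sources</id>
      <goals>
        <goal>jar-no-fork</goal>
      </goals>
    </execution>
  </executions>
</plugin>
{code}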





[jira] [Created] (HADOOP-8497) Shell needs a way to list amount of physical consumed space in a directory

2012-06-08 Thread Todd Lipcon (JIRA)
Todd Lipcon created HADOOP-8497:
---

 Summary: Shell needs a way to list amount of physical consumed 
space in a directory
 Key: HADOOP-8497
 URL: https://issues.apache.org/jira/browse/HADOOP-8497
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.0.0-alpha, 1.0.3, 3.0.0
Reporter: Todd Lipcon
Assignee: Andy Isaacson


Currently, there is no way to see the physical consumed space for a directory. 
du lists the logical (pre-replication) space, and "fs -count" only displays the 
consumed space when a quota is set. This makes it hard for administrators to 
set a quota on a directory, since they have no way to determine a reasonable 
value.
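
To illustrate the gap (path hypothetical):

{code}
# Shows logical, pre-replication size only:
hadoop fs -du /user/example

# Quota columns are printed, but the consumed-space figures are only
# populated once a quota has already been set on the directory:
hadoop fs -count -q /user/example
{code}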





[jira] [Created] (HADOOP-8496) FsShell is broken with s3 filesystems

2012-06-08 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-8496:
--

 Summary: FsShell is broken with s3 filesystems
 Key: HADOOP-8496
 URL: https://issues.apache.org/jira/browse/HADOOP-8496
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.0.1-alpha
Reporter: Alejandro Abdelnur
Priority: Critical


After setting up an S3 account and configuring the site.xml with the access 
key/password, doing an ls on a non-empty bucket gives me:

{code}
Found 4 items
-ls: -0s
Usage: hadoop fs [generic options] -ls [-d] [-h] [-R] [ ...]
{code}

Note that while it correctly shows the number of items in the root of the 
bucket, it does not show the contents of the root.

I've tried -get and -put and they work fine; accessing a folder in the bucket, 
however, seems to be fully broken.
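
For reference, a sketch of the commands involved (scheme and bucket name are 
hypothetical):

{code}
hadoop fs -ls s3n://mybucket/              # prints "Found N items", then fails
hadoop fs -get s3n://mybucket/file /tmp/   # works
hadoop fs -put /tmp/file s3n://mybucket/   # works
{code}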






[jira] [Created] (HADOOP-8495) Update Netty to avoid leaking file descriptors during shuffle

2012-06-08 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-8495:
--

 Summary: Update Netty to avoid leaking file descriptors during 
shuffle
 Key: HADOOP-8495
 URL: https://issues.apache.org/jira/browse/HADOOP-8495
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 0.23.3
Reporter: Jason Lowe
Assignee: Jason Lowe
Priority: Critical


Netty 3.2.3.Final has a known bug where writes to a closed channel do not have 
their futures invoked.  See 
[Netty-374|https://issues.jboss.org/browse/NETTY-374].  This can lead to file 
descriptor leaks during shuffle as noted in MAPREDUCE-4298.
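
A sketch of the kind of dependency bump implied, assuming the fix first 
shipped in 3.2.4.Final (the exact target version should be confirmed against 
the Netty changelog):

{code}
<dependency>
  <groupId>org.jboss.netty</groupId>
  <artifactId>netty</artifactId>
  <!-- was 3.2.3.Final, where writes to a closed channel never invoke
       their futures, leaking shuffle file descriptors -->
  <version>3.2.4.Final</version>
</dependency>
{code}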





AUTO: Prabhat Pandey is out of the office (returning 06/28/2012)

2012-06-08 Thread Prabhat Pandey


I am out of the office until 06/28/2012.
For any issues please contact Dispatcher: dbqor...@us.ibm.com
Thanks.

Prabhat Pandey


Note: This is an automated response to your message  "[jira] [Created]
(HADOOP-8494) bin/hadoop dfs -help" sent on 06/08/2012 0:44:22.

This is the only notification you will receive while this person is away.

Jenkins build is back to normal : Hadoop-Common-trunk #434

2012-06-08 Thread Apache Jenkins Server
See 



Build failed in Jenkins: Hadoop-Common-0.23-Build #278

2012-06-08 Thread Apache Jenkins Server
See 

--
[...truncated 13274 lines...]
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.112 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsLocalFs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.087 sec
Running org.apache.hadoop.fs.TestGlobPattern
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.158 sec
Running org.apache.hadoop.fs.TestS3_LocalFileContextURI
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.089 sec
Running org.apache.hadoop.fs.TestLocalFSFileContextCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.748 sec
Running org.apache.hadoop.fs.TestHarFileSystem
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.335 sec
Running org.apache.hadoop.fs.TestFileSystemCaching
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.766 sec
Running org.apache.hadoop.fs.TestLocalFsFCStatistics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.536 sec
Running org.apache.hadoop.fs.TestHardLink
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.315 sec
Running org.apache.hadoop.fs.TestCommandFormat
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.166 sec
Running org.apache.hadoop.fs.TestLocal_S3FileContextURI
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.082 sec
Running org.apache.hadoop.fs.TestLocalFileSystem
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.816 sec
Running org.apache.hadoop.fs.TestFcLocalFsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.725 sec
Running org.apache.hadoop.fs.TestListFiles
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.592 sec
Running org.apache.hadoop.fs.TestPath
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.862 sec
Running org.apache.hadoop.fs.kfs.TestKosmosFileSystem
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.577 sec
Running org.apache.hadoop.fs.TestGlobExpander
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.106 sec
Running org.apache.hadoop.fs.TestFilterFileSystem
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.604 sec
Running org.apache.hadoop.fs.TestFcLocalFsUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.602 sec
Running org.apache.hadoop.fs.TestGetFileBlockLocations
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.822 sec
Running org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.858 sec
Running org.apache.hadoop.fs.s3.TestINode
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.12 sec
Running org.apache.hadoop.fs.s3.TestS3Credentials
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.206 sec
Running org.apache.hadoop.fs.s3.TestS3FileSystem
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.228 sec
Running org.apache.hadoop.fs.TestDU
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.266 sec
Running org.apache.hadoop.record.TestBuffer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.123 sec
Running org.apache.hadoop.record.TestRecordVersioning
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.209 sec
Running org.apache.hadoop.record.TestRecordIO
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.233 sec
Running org.apache.hadoop.metrics2.source.TestJvmMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.456 sec
Running org.apache.hadoop.metrics2.util.TestSampleStat
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.145 sec
Running org.apache.hadoop.metrics2.util.TestMetricsCache
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.184 sec
Running org.apache.hadoop.metrics2.lib.TestInterns
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.288 sec
Running org.apache.hadoop.metrics2.lib.TestMetricsAnnotations
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.54 sec
Running org.apache.hadoop.metrics2.lib.TestMutableMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.476 sec
Running org.apache.hadoop.metrics2.lib.TestUniqNames
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.15 sec
Running org.apache.hadoop.metrics2.lib.TestMetricsRegistry
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.456 sec
Running org.apache.hadoop.metrics2.impl.TestMetricsCollectorImpl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.285 sec
Running org.apache.hadoop.metrics2.impl.TestGangliaMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.475 sec
Running org.apache.hadoop.metrics2.impl.TestSinkQueue
Tests run:

Hadoop command not found: hdfs and yarn

2012-06-08 Thread Prajakta Kalmegh
Hi

I am trying to execute the following commands for setting up Hadoop:
# Format the namenode
hdfs namenode -format
# Start the namenode
hdfs namenode
# Start a datanode
hdfs datanode

yarn resourcemanager
yarn nodemanager

It gives me a "Hadoop Command not found." error for all the commands. When 
I try to use "hadoop namenode -format" instead, it gives me a deprecated 
command warning. Can someone please tell me if I am missing any env 
variables? I have included HADOOP_COMMON_HOME, HADOOP_HDFS_HOME, 
HADOOP_MAPRED_HOME, YARN_HOME, HADOOP_CONF_DIR, YARN_CONF_DIR, and 
HADOOP_PREFIX in my environment (apart from java etc).
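
Is something like the following needed in addition (install path hypothetical)?

# Guess: the hdfs/yarn wrapper scripts are resolved via PATH, not via the
# HADOOP_* variables alone, so the distribution's bin and sbin directories
# may need to be added explicitly.
export HADOOP_PREFIX=/opt/hadoop   # hypothetical install location
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin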

I am following the instructions for setting up Hadoop with Eclipse given 
in 
- http://wiki.apache.org/hadoop/HowToSetupYourDevelopmentEnvironment
- 
http://hadoop.apache.org/common/docs/r2.0.0-alpha/hadoop-yarn/hadoop-yarn-site/SingleCluster.html

Regards,
Prajakta