Build failed in Jenkins: Hadoop-Common-0.23-Build #295

2012-06-26 Thread Apache Jenkins Server
See 

Changes:

[tgraves] MAPREDUCE-4361. Fix detailed metrics for protobuf-based RPC on 0.23 
(Jason Lowe via tgraves)

--
[...truncated 87690 lines...]
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.007 sec
Running org.apache.hadoop.fs.viewfs.TestViewFsLocalFs
Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.912 sec
Running org.apache.hadoop.fs.TestGlobPattern
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.147 sec
Running org.apache.hadoop.fs.TestS3_LocalFileContextURI
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.082 sec
Running org.apache.hadoop.fs.TestLocalFSFileContextCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.707 sec
Running org.apache.hadoop.fs.TestHarFileSystem
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.324 sec
Running org.apache.hadoop.fs.TestFileSystemCaching
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.73 sec
Running org.apache.hadoop.fs.TestLocalFsFCStatistics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.539 sec
Running org.apache.hadoop.fs.TestHardLink
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.306 sec
Running org.apache.hadoop.fs.TestCommandFormat
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.168 sec
Running org.apache.hadoop.fs.TestLocal_S3FileContextURI
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.11 sec
Running org.apache.hadoop.fs.TestLocalFileSystem
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.807 sec
Running org.apache.hadoop.fs.TestFcLocalFsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.773 sec
Running org.apache.hadoop.fs.TestListFiles
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.564 sec
Running org.apache.hadoop.fs.TestPath
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.818 sec
Running org.apache.hadoop.fs.kfs.TestKosmosFileSystem
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.583 sec
Running org.apache.hadoop.fs.TestGlobExpander
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.107 sec
Running org.apache.hadoop.fs.TestFilterFileSystem
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.625 sec
Running org.apache.hadoop.fs.TestFcLocalFsUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.575 sec
Running org.apache.hadoop.fs.TestGetFileBlockLocations
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.767 sec
Running org.apache.hadoop.fs.s3.TestInMemoryS3FileSystemContract
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.888 sec
Running org.apache.hadoop.fs.s3.TestINode
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.113 sec
Running org.apache.hadoop.fs.s3.TestS3Credentials
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.195 sec
Running org.apache.hadoop.fs.s3.TestS3FileSystem
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.222 sec
Running org.apache.hadoop.fs.TestDU
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.25 sec
Running org.apache.hadoop.record.TestBuffer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.116 sec
Running org.apache.hadoop.record.TestRecordVersioning
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.186 sec
Running org.apache.hadoop.record.TestRecordIO
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.21 sec
Running org.apache.hadoop.metrics2.source.TestJvmMetrics
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.428 sec
Running org.apache.hadoop.metrics2.util.TestSampleStat
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.135 sec
Running org.apache.hadoop.metrics2.util.TestMetricsCache
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.12 sec
Running org.apache.hadoop.metrics2.lib.TestInterns
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.275 sec
Running org.apache.hadoop.metrics2.lib.TestMetricsAnnotations
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.493 sec
Running org.apache.hadoop.metrics2.lib.TestMutableMetrics
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.488 sec
Running org.apache.hadoop.metrics2.lib.TestUniqNames
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.144 sec
Running org.apache.hadoop.metrics2.lib.TestMetricsRegistry
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.423 sec
Running org.apache.hadoop.metrics2.impl.TestMetricsCollectorImpl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.297 sec
Running org.apache.hadoop.metrics2.impl.TestGangliaMetrics
Tests run: 2, Fail

[jira] [Created] (HADOOP-8529) Error while formatting the namenode in hadoop single node setup in windows

2012-06-26 Thread Narayana Karteek (JIRA)
Narayana Karteek created HADOOP-8529:


 Summary: Error while formatting the namenode in hadoop single node 
setup in windows
 Key: HADOOP-8529
 URL: https://issues.apache.org/jira/browse/HADOOP-8529
 Project: Hadoop Common
  Issue Type: Task
  Components: conf
Affects Versions: 1.0.3
 Environment: Windows XP using Cygwin
Reporter: Narayana Karteek
Priority: Blocker


Hi,
  I tried to configure Hadoop 1.0.3. I added all the libs from the share
folder to the lib directory, but I still get the following error while
formatting the namenode:

$ ./hadoop namenode -format
java.lang.NoClassDefFoundError:
Caused by: java.lang.ClassNotFoundException:
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
.  Program will exit.in class:
Exception in thread "main"


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Resolving find bug issue

2012-06-26 Thread madhu phatak
Hi,
 I have submitted a patch for HADOOP-8521
(https://issues.apache.org/jira/browse/HADOOP-8521) which is giving a
findbugs error. To fix the issue, I have to duplicate the StreamUtil class
into the newly introduced mapreduce package. Is this a good practice, or is
there another way to fix it?


Regards,
Madhukara Phatak

-- 
https://github.com/zinnia-phatak-dev/Nectar


[jira] [Created] (HADOOP-8530) Potential deadlock in IPC

2012-06-26 Thread Tom White (JIRA)
Tom White created HADOOP-8530:
-

 Summary: Potential deadlock in IPC
 Key: HADOOP-8530
 URL: https://issues.apache.org/jira/browse/HADOOP-8530
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.0.0-alpha, 1.0.3
Reporter: Tom White


This cycle (see attached image, and explanation here: 
http://www.jcarder.org/manual.html#analysis) was found with jcarder in branch-1 
(affects trunk too).
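The cycle jcarder reports is the classic two-locks-taken-in-opposite-orders pattern. A minimal illustrative sketch (not the actual IPC code; all names here are hypothetical) of breaking such a cycle by enforcing a single acquisition order:

```java
// Illustrative only: a jcarder-style cycle arises when two threads take
// the same pair of locks in opposite orders. Making every code path
// acquire connectionLock before callLock removes the cycle.
public class LockOrderExample {
    private final Object connectionLock = new Object();
    private final Object callLock = new Object();
    private int pendingCalls = 0;

    // Both paths below follow the same order: connectionLock, then callLock.
    void sendCall() {
        synchronized (connectionLock) {
            synchronized (callLock) { pendingCalls++; }
        }
    }

    void receiveResponse() {
        synchronized (connectionLock) {
            synchronized (callLock) { pendingCalls--; }
        }
    }

    int pendingCalls() {
        synchronized (callLock) { return pendingCalls; }
    }
}
```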







Re: Resolving find bug issue

2012-06-26 Thread Robert Evans
The issue you are running into is because you made the HOST variable public, 
when it was package-private previously.  Findbugs thinks that you want HOST to 
be a constant because it is ALL CAPS, is only set once, and is read all other 
times.  By making it public it is now difficult to ensure that it is never 
written to, hence the suggestion to make it final.  I would prefer to switch 
it over to private and add a new public method that returns the value of HOST.
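That suggestion can be sketched as follows (a minimal hypothetical example, not the actual Hadoop class; `NetUtilsExample` and `getHost` are made-up names):

```java
// Hypothetical sketch of the suggested fix: keep HOST private and expose
// it through an accessor instead of a public mutable field, so findbugs
// no longer flags a public non-final "constant".
public class NetUtilsExample {
    // private, assigned once during initialization
    private static String HOST = "localhost";

    // read-only access; callers can no longer reassign HOST
    public static String getHost() {
        return HOST;
    }
}
```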

--Bobby Evans


On 6/26/12 6:01 AM, "madhu phatak"  wrote:

Hi,
 I have submitted a patch for HADOOP-8521
(https://issues.apache.org/jira/browse/HADOOP-8521) which is giving a
findbugs error. To fix the issue, I have to duplicate the StreamUtil class
into the newly introduced mapreduce package. Is this a good practice, or is
there another way to fix it?


Regards,
Madhukara Phatak

--
https://github.com/zinnia-phatak-dev/Nectar



[jira] [Resolved] (HADOOP-8529) Error while formatting the namenode in hadoop single node setup in windows

2012-06-26 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HADOOP-8529.
-

Resolution: Invalid

Please inquire with the user mailing lists for questions like this. JIRA is 
meant for bug/task tracking.

> Error while formatting the namenode in hadoop single node setup in windows
> --
>
> Key: HADOOP-8529
> URL: https://issues.apache.org/jira/browse/HADOOP-8529
> Project: Hadoop Common
>  Issue Type: Task
>  Components: conf
>Affects Versions: 1.0.3
> Environment: Windows XP using Cygwin
>Reporter: Narayana Karteek
>Priority: Blocker
> Attachments: capture8.bmp
>
>   Original Estimate: 5h
>  Remaining Estimate: 5h
>
> Hi,
>   I tried to configure hadoop 1.0.3 .I added all libs from share folder 
> to lib directory.But still i get the error while formatting the 
>  namenode
> $ ./hadoop namenode -format
> java.lang.NoClassDefFoundError:
> Caused by: java.lang.ClassNotFoundException:
> at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
> at java.security.AccessController.doPrivileged(Native Method)
> at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
> at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
> .  Program will exit.in class:
> Exception in thread "main"





[jira] [Resolved] (HADOOP-8486) Resource leak - Close the open resource handles (File handles) before throwing the exception from the SequenceFile constructor

2012-06-26 Thread Bikas Saha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8486?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bikas Saha resolved HADOOP-8486.


Resolution: Fixed

> Resource leak - Close the open resource handles (File handles) before 
> throwing the exception from the SequenceFile constructor
> --
>
> Key: HADOOP-8486
> URL: https://issues.apache.org/jira/browse/HADOOP-8486
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, io
>Affects Versions: 1.0.2, 1-win
>Reporter: Kanna Karanam
>Assignee: Kanna Karanam
> Fix For: 1-win
>
> Attachments: HADOOP-8486-branch-1-win-(2).patch, 
> HADOOP-8486-branch-1-win-(3).patch, HADOOP-8486-branch-1-win-(4).patch, 
> HADOOP-8486-branch-1-win-(5).patch, HADOOP-8486-branch-1-win.patch
>
>
> I noticed this problem while I was working on porting Hive to Windows. 
> Hive is attempting to create this class object to validate the file format 
> and ends up with a resource leak. Because of this leak, we can't move, 
> rename, or delete the files on Windows while there is an open file handle, 
> whereas on UNIX we can perform all of these operations with no issues even 
> with open file handles.
> Please let me know if you see similar issues in any other places.
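The pattern being fixed — a constructor that opens a file handle and then throws — can be closed out before propagating the exception. A minimal sketch (not the actual SequenceFile code; the class and method names are illustrative):

```java
import java.io.FileInputStream;
import java.io.IOException;

// Sketch: close the handle before rethrowing from the constructor, so a
// failed open does not leave the file locked (which blocks move/rename/
// delete on Windows).
public class SafeOpenExample {
    private final FileInputStream in;

    public SafeOpenExample(String path) throws IOException {
        FileInputStream stream = new FileInputStream(path);
        try {
            validateHeader(stream);   // may throw on a bad file format
            this.in = stream;
        } catch (IOException e) {
            stream.close();           // release the handle before propagating
            throw e;
        }
    }

    private void validateHeader(FileInputStream s) throws IOException {
        // placeholder for format validation
    }

    public void close() throws IOException {
        in.close();
    }
}
```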





[jira] [Created] (HADOOP-8531) SequenceFile Writer can throw out a better error if a serializer isn't available

2012-06-26 Thread Harsh J (JIRA)
Harsh J created HADOOP-8531:
---

 Summary: SequenceFile Writer can throw out a better error if a 
serializer isn't available
 Key: HADOOP-8531
 URL: https://issues.apache.org/jira/browse/HADOOP-8531
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Harsh J
Priority: Trivial


Currently, if the provided Key/Value class lacks a proper serializer in the 
loaded config for the SequenceFile.Writer, we get an NPE as the null return 
goes unchecked.

Hence we get:
{code}
java.lang.NullPointerException
at org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:1163)
at 
org.apache.hadoop.io.SequenceFile$Writer.(SequenceFile.java:1079)
at 
org.apache.hadoop.io.SequenceFile$RecordCompressWriter.(SequenceFile.java:1331)
at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:271)
{code}

We can provide a better message + exception in such cases. This is slightly 
related to MAPREDUCE-2584.
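A sketch of the proposed improvement — check the serializer lookup result and fail with a descriptive message instead of letting the null propagate into an NPE (the class and method names below are illustrative, not the actual SequenceFile API):

```java
// Hypothetical sketch: guard the serializer lookup so a missing
// serialization produces an actionable error rather than an NPE.
public class SerializerCheckExample {
    // stands in for SerializationFactory.getSerializer(keyClass);
    // returns null here to simulate "no serializer registered"
    static Object lookupSerializer(Class<?> c) {
        return null;
    }

    static Object getSerializerOrFail(Class<?> keyClass) {
        Object serializer = lookupSerializer(keyClass);
        if (serializer == null) {
            throw new IllegalArgumentException(
                "Could not find a serializer for key class "
                + keyClass.getName()
                + "; check the io.serializations setting in the configuration");
        }
        return serializer;
    }
}
```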





[jira] [Created] (HADOOP-8532) [Configuration] Increase or make variable substitution depth configurable

2012-06-26 Thread Harsh J (JIRA)
Harsh J created HADOOP-8532:
---

 Summary: [Configuration] Increase or make variable substitution 
depth configurable
 Key: HADOOP-8532
 URL: https://issues.apache.org/jira/browse/HADOOP-8532
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Harsh J


We've had some users recently complain that the default MAX_SUBST hardcode of 
20 isn't sufficient for their substitution needs, and they wished it were 
configurable rather than having to work around it with temporary smaller 
substitutes that are then combined into the full value. We should consider 
raising the default hardcode, or provide a way to make it configurable 
instead.

Related: HIVE-2021 changed something similar for their HiveConf classes.
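A minimal sketch of what a configurable depth could look like (the property name and class below are hypothetical, not an actual Hadoop API):

```java
import java.util.Properties;

// Sketch of the proposal: read the substitution depth limit from the
// configuration itself, falling back to the current hardcoded default.
public class SubstDepthExample {
    static final int DEFAULT_MAX_SUBST = 20;

    static int maxSubstDepth(Properties conf) {
        // "hadoop.conf.max.subst.depth" is a made-up property name
        String v = conf.getProperty("hadoop.conf.max.subst.depth");
        return v == null ? DEFAULT_MAX_SUBST : Integer.parseInt(v);
    }
}
```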





[jira] [Created] (HADOOP-8533) Remove Parallel Call in IPC

2012-06-26 Thread Suresh Srinivas (JIRA)
Suresh Srinivas created HADOOP-8533:
---

 Summary: Remove Parallel Call in IPC
 Key: HADOOP-8533
 URL: https://issues.apache.org/jira/browse/HADOOP-8533
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Fix For: 3.0.0


From what I know, I do not think anyone uses Parallel Call. I also think it 
is not tested very well.





[jira] [Created] (HADOOP-8534) TestQueueManagerForJobKillAndJobPriority and TestQueueManagerForJobKillAndNonDefaultQueue fail on Windows

2012-06-26 Thread Ivan Mitic (JIRA)
Ivan Mitic created HADOOP-8534:
--

 Summary: TestQueueManagerForJobKillAndJobPriority and 
TestQueueManagerForJobKillAndNonDefaultQueue fail on Windows
 Key: HADOOP-8534
 URL: https://issues.apache.org/jira/browse/HADOOP-8534
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 1.0.0
Reporter: Ivan Mitic


The Java XML parser keeps the file locked after a SAXException, causing the 
following tests to fail:
 - TestQueueManagerForJobKillAndJobPriority
 - TestQueueManagerForJobKillAndNonDefaultQueue

{{TestQueueManagerForJobKillAndJobPriority#testQueueAclRefreshWithInvalidConfFile()}}
 is creating a temp config file with incorrect syntax. Later, the test tries to 
delete/cleanup this file and this operation fails on Windows (as the file is 
still open). From this point on, all subsequent tests fail because they try to 
use the incorrect config file.

Forum references on the problem and the fix:
http://www.linuxquestions.org/questions/programming-9/java-xml-parser-keeps-file-locked-after-saxexception-768613/
https://forums.oracle.com/forums/thread.jspa?threadID=2046505&start=0&tstart=0
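The fix described in those threads amounts to parsing from a stream the caller owns and closing it in a finally block, so a SAXException cannot leave the handle open. A minimal illustrative sketch (not the actual test code; the parse step is a placeholder):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: open the stream yourself and close it in finally, so a parse
// failure (e.g. SAXException on malformed XML) does not leave the file
// handle open — an open handle blocks delete/rename on Windows.
public class ParseAndCloseExample {
    static void parseConfig(File f) throws IOException {
        InputStream in = new FileInputStream(f);
        try {
            // parser.parse(in) would go here; it may throw mid-parse
        } finally {
            in.close(); // always release the handle, even on parse errors
        }
    }
}
```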






[jira] [Resolved] (HADOOP-7084) Remove java5 dependencies from site's build

2012-06-26 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik resolved HADOOP-7084.


Resolution: Duplicate

This is a duplicate of HADOOP-7072.

> Remove java5 dependencies from site's build
> ---
>
> Key: HADOOP-7084
> URL: https://issues.apache.org/jira/browse/HADOOP-7084
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
>
> Java5 dependency needs to be removed from 
> http://svn.apache.org/repos/asf/hadoop/site/build.xml
