[jira] [Created] (HADOOP-9051) “ant test” build fails when trying to delete a non-existent file

2012-11-15 Thread meng gong (JIRA)
meng gong created HADOOP-9051:
-

 Summary: “ant test” build fails when trying to delete a 
non-existent file
 Key: HADOOP-9051
 URL: https://issues.apache.org/jira/browse/HADOOP-9051
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 1.0.0
 Environment: OS: Ubuntu 10.04; forrest version 0.8; findbugs version 
2.0.1; ant version 1.8.1
Reporter: meng gong
Priority: Minor
 Fix For: 1.0.0


Run "ant test" on branch-1 of hadoop-common.
When the test process reaches "test-core-excluding-commit-and-smoke", it
invokes the "macro-test-runner" to clear and rebuild the test environment.
The Ant delete task then fails because it tries to delete a non-existent
file.

The following is the test log:
test-core-excluding-commit-and-smoke:
   [delete] Deleting: 
/home/jdu/bdc/hadoop-topology-branch1-new/hadoop-common/build/test/testsfailed
   [delete] Deleting directory 
/home/jdu/bdc/hadoop-topology-branch1-new/hadoop-common/build/test/data
[mkdir] Created dir: 
/home/jdu/bdc/hadoop-topology-branch1-new/hadoop-common/build/test/data
   [delete] Deleting directory 
/home/jdu/bdc/hadoop-topology-branch1-new/hadoop-common/build/test/logs

BUILD FAILED
/home/jdu/bdc/hadoop-topology-branch1-new/hadoop-common/build.xml:1212: The 
following error occurred while executing this line:
/home/jdu/bdc/hadoop-topology-branch1-new/hadoop-common/build.xml:1166: The 
following error occurred while executing this line:
/home/jdu/bdc/hadoop-topology-branch1-new/hadoop-common/build.xml:1057: Unable 
to delete file 
/home/jdu/bdc/hadoop-topology-branch1-new/hadoop-common/build/test/logs/userlogs/job_20121112223129603_0001/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/stdout


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Resolved] (HADOOP-9047) TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022

2012-11-15 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-9047.


Resolution: Not A Problem

> TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022
> 
>
> Key: HADOOP-9047
> URL: https://issues.apache.org/jira/browse/HADOOP-9047
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.2-alpha
>Reporter: Junping Du
>
> In the PreCommit test of HADOOP-9045, this error appears because the test still 
> uses the system's default umask of 022 rather than the 062 specified in the test.



[jira] [Reopened] (HADOOP-9047) TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022

2012-11-15 Thread Tsz Wo (Nicholas), SZE (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE reopened HADOOP-9047:



Let's close this as not-a-problem since HADOOP-9042 was reverted.

> TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022
> 
>
> Key: HADOOP-9047
> URL: https://issues.apache.org/jira/browse/HADOOP-9047
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.2-alpha
>Reporter: Junping Du
>
> In the PreCommit test of HADOOP-9045, this error appears because the test still 
> uses the system's default umask of 022 rather than the 062 specified in the test.



[jira] [Created] (HADOOP-9050) CLONE - Remove java5 dependencies from build

2012-11-15 Thread Raja Aluri (JIRA)
Raja Aluri created HADOOP-9050:
--

 Summary: CLONE - Remove java5 dependencies from build
 Key: HADOOP-9050
 URL: https://issues.apache.org/jira/browse/HADOOP-9050
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Reporter: Raja Aluri
Assignee: Konstantin Boudnik
 Fix For: 0.21.1


As the first short-term step, let's remove the JDK5 dependency from the build(s).



[jira] [Resolved] (HADOOP-8891) Remove DelegationTokenRenewer and its logic from FileSystems using it

2012-11-15 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla resolved HADOOP-8891.
--

Resolution: Invalid

Per the discussion on HDFS-4009, this is no longer valid. The issue at hand is 
being addressed in HADOOP-9049.

> Remove DelegationTokenRenewer and its logic from FileSystems using it
> -
>
> Key: HADOOP-8891
> URL: https://issues.apache.org/jira/browse/HADOOP-8891
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.0.1-alpha
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Attachments: HADOOP-8891.patch, HADOOP-8891.patch, HADOOP-8891.patch
>
>
> Moved the HDFS part of HADOOP-8852 to HDFS-4009 along with other sub-tasks. 
> Created this to track the removal of DelegationTokenRenewer alone.



[jira] [Created] (HADOOP-9049) DelegationTokenRenewer needs to be Singleton and FileSystems should register/deregister to/from.

2012-11-15 Thread Karthik Kambatla (JIRA)
Karthik Kambatla created HADOOP-9049:


 Summary: DelegationTokenRenewer needs to be Singleton and 
FileSystems should register/deregister to/from.
 Key: HADOOP-9049
 URL: https://issues.apache.org/jira/browse/HADOOP-9049
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.2-alpha
Reporter: Karthik Kambatla


Currently, DelegationTokenRenewer is not a singleton.

Each FileSystem using it spawns its own DelegationTokenRenewer. Also, they 
don't stop the Renewer, leading to other problems.

A single DelegationTokenRenewer should be sufficient for all FileSystems.
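The proposal can be sketched as a minimal singleton with register/deregister
hooks. This is an illustrative sketch only, not the Hadoop implementation; the
class and method names (TokenRenewer, register, deregister, size) are
hypothetical stand-ins:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a process-wide singleton renewer that FileSystems
// register with and deregister from, instead of each spawning its own.
final class TokenRenewer {
    private static final TokenRenewer INSTANCE = new TokenRenewer();

    // FileSystems currently registered for renewal (keys are illustrative URIs).
    private final Set<String> registered = ConcurrentHashMap.newKeySet();

    private TokenRenewer() {}               // no public constructor: one shared renewer

    static TokenRenewer getInstance() { return INSTANCE; }

    void register(String fs)   { registered.add(fs); }
    void deregister(String fs) { registered.remove(fs); }
    int  size()                { return registered.size(); }
}
```

Deregistering on FileSystem close would also address the "they don't stop the
Renewer" problem, since an empty registry can idle or shut down its thread.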



[jira] [Resolved] (HADOOP-9047) TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022

2012-11-15 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-9047.
--

Resolution: Fixed

We're going to fix this in HADOOP-9042.

> TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022
> 
>
> Key: HADOOP-9047
> URL: https://issues.apache.org/jira/browse/HADOOP-9047
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.2-alpha
>Reporter: Junping Du
>
> In the PreCommit test of HADOOP-9045, this error appears because the test still 
> uses the system's default umask of 022 rather than the 062 specified in the test.



[jira] [Resolved] (HADOOP-9048) TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022

2012-11-15 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du resolved HADOOP-9048.


Resolution: Duplicate

This is a duplicate of HADOOP-9047; the JIRA was created twice by a double 
click caused by a bad network connection.

> TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022
> 
>
> Key: HADOOP-9048
> URL: https://issues.apache.org/jira/browse/HADOOP-9048
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.0.2-alpha
>Reporter: Junping Du
>
> In the PreCommit test of HADOOP-9045, this error appears because the test still 
> uses the system's default umask of 022 rather than the 062 specified in the test.



[jira] [Created] (HADOOP-9048) TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022

2012-11-15 Thread Junping Du (JIRA)
Junping Du created HADOOP-9048:
--

 Summary: TestHDFSFileSystemContract.testMkdirsWithUmask failed by 
using umask 022
 Key: HADOOP-9048
 URL: https://issues.apache.org/jira/browse/HADOOP-9048
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.2-alpha
Reporter: Junping Du


In the PreCommit test of HADOOP-9045, this error appears because the test still 
uses the system's default umask of 022 rather than the 062 specified in the test.



[jira] [Created] (HADOOP-9047) TestHDFSFileSystemContract.testMkdirsWithUmask failed by using umask 022

2012-11-15 Thread Junping Du (JIRA)
Junping Du created HADOOP-9047:
--

 Summary: TestHDFSFileSystemContract.testMkdirsWithUmask failed by 
using umask 022
 Key: HADOOP-9047
 URL: https://issues.apache.org/jira/browse/HADOOP-9047
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.2-alpha
Reporter: Junping Du


In the PreCommit test of HADOOP-9045, this error appears because the test still 
uses the system's default umask of 022 rather than the 062 specified in the test.
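For context, the permission a directory actually receives is the requested mode
masked by the process umask (mode & ~umask), which is why the test expects
different results under umask 022 versus 062. A tiny illustration; UmaskDemo is
a hypothetical helper, not Hadoop code:

```java
// Hypothetical helper showing how a umask masks a requested mode:
// bits set in the umask are cleared from the requested permissions.
class UmaskDemo {
    static int apply(int requestedMode, int umask) {
        return requestedMode & ~umask;
    }
}
```

Under umask 022 a requested 0777 becomes 0755, while under umask 062 it becomes
0715, so a test asserting the 062 result fails when the system default 022 is
still in effect.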



[jira] [Created] (HADOOP-9045) In nodegroup-aware case, make sure nodes are avoided to place replica if some replica are already under the same nodegroup

2012-11-15 Thread Junping Du (JIRA)
Junping Du created HADOOP-9045:
--

 Summary: In nodegroup-aware case, make sure nodes are avoided to 
place replica if some replica are already under the same nodegroup
 Key: HADOOP-9045
 URL: https://issues.apache.org/jira/browse/HADOOP-9045
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.2-alpha
Reporter: Junping Du


In the previous implementation for HADOOP-8468, the 3rd replica is prevented 
from being placed on the same nodegroup as the 2nd replica, but there is no 
check against the nodegroup of the 1st replica. So if the 2nd replica's rack 
is not a suitable place for a replica, the 3rd replica can end up in the same 
nodegroup as the 1st. We need a change that removes all nodes from the 
candidate set for replica placement once their nodegroup already holds a 
replica.
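The proposed rule can be sketched in isolation: once any nodegroup holds a
replica, every node in that nodegroup is skipped for the remaining replicas.
This is a standalone illustrative sketch, not the actual
BlockPlacementPolicyWithNodeGroup code; all names here are hypothetical:

```java
import java.util.*;

// Hypothetical sketch: pick replica targets from 'candidates' in order,
// skipping any node whose nodegroup already holds a chosen replica.
class NodeGroupExclusion {
    static List<String> chooseTargets(Map<String, String> nodeToGroup,
                                      List<String> candidates, int replicas) {
        List<String> chosen = new ArrayList<>();
        Set<String> usedGroups = new HashSet<>();
        for (String node : candidates) {
            if (chosen.size() == replicas) break;
            // add() returns false if the group was already used, so the node is skipped
            if (usedGroups.add(nodeToGroup.get(node))) {
                chosen.add(node);
            }
        }
        return chosen;
    }
}
```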



[jira] [Created] (HADOOP-9044) add FindClass main class to provide classpath checking of installations

2012-11-15 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-9044:
--

 Summary: add FindClass main class to provide classpath checking of 
installations
 Key: HADOOP-9044
 URL: https://issues.apache.org/jira/browse/HADOOP-9044
 Project: Hadoop Common
  Issue Type: New Feature
  Components: util
Affects Versions: 1.1.0
Reporter: Steve Loughran
Priority: Minor


It's useful in postflight checking of a Hadoop installation to verify that 
classes load, especially code that depends on external JARs and native codecs.

An entry point designed to load a named class and create an instance of that 
class can do this, and it can be invoked from any shell script or tool that 
performs the installation.
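Such an entry point could look roughly like the following. This is a hedged
sketch of the idea, not the FindClass class that was eventually committed; the
class name, probe method, and exit codes are illustrative assumptions:

```java
// Illustrative sketch: exit 0 if the named class loads and instantiates,
// nonzero otherwise, so shell scripts can check the classpath post-install.
class FindClassProbe {
    static int probe(String className) {
        try {
            Class<?> c = Class.forName(className);
            c.getDeclaredConstructor().newInstance();   // also checks instantiability
            return 0;
        } catch (Throwable t) {
            return 1;   // missing class, missing dependency, linkage error, etc.
        }
    }

    public static void main(String[] args) {
        System.exit(probe(args[0]));
    }
}
```

A script can then run it against codec or connector classes and fail fast on a
nonzero exit status instead of discovering a broken classpath at job time.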



Build failed in Jenkins: Hadoop-Common-0.23-Build #433

2012-11-15 Thread Apache Jenkins Server
See 

Changes:

[bobby] MAPREDUCE-4720. Browser thinks History Server main page JS is taking 
too long (Ravi Prakash via bobby)

[bobby] svn merge -c 1409525 FIXES: MAPREDUCE-4797. LocalContainerAllocator can 
loop forever trying to contact the RM (jlowe via bobby)

[bobby] HDFS-4182. SecondaryNameNode leaks NameCache entries (bobby)

[jlowe] YARN-216. Remove jquery theming support. Contributed by Robert Joseph 
Evans.

--
[...truncated 12325 lines...]

[INFO] 
[INFO] --- maven-clover2-plugin:3.0.5:clover (clover) @ hadoop-auth ---
[INFO] Using /default-clover-report descriptor.
[INFO] Using Clover report descriptor: /tmp/mvn6875310013983794079resource
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: 
/home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Clover is enabled with initstring 
'
[WARNING] Clover historical directory 
[
 does not exist, skipping Clover historical report generation 
([
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: 
/home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Loading coverage database from: 
'
[INFO] Writing HTML report to 
'
[INFO] Done. Processed 4 packages in 914ms (228ms per package).
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: 
/home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Clover is enabled with initstring 
'
[WARNING] Clover historical directory 
[
 does not exist, skipping Clover historical report generation 
([
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: 
/home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Loading coverage database from: 
'
[INFO] Writing report to 
'
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop Auth Examples 0.23.5-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-antrun-plugin:1.6:run (create-testdirs) @ hadoop-auth-examples 
---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-dependency-plugin:2.1:build-classpath (build-classpath) @ 
hadoop-auth-examples ---
[INFO] Wrote classpath file 
'
[INFO] 
[INFO] --- maven-clover2-plugin:3.0.5:setup (setup) @ hadoop-auth-examples ---
[INFO] Clover Version 3.0.2, built on April 13 2010 (build-790)
[INFO] Loaded from: 
/home/jenkins/.m2/repository/com/cenqua/clover/clover/3.0.2/clover-3.0.2.jar
[INFO] Clover: Open Source License registered to Apache.
[INFO] Creating new database at 
'
[INFO] Processing files at 1.6 source level.
[INFO] Clover all over. Instrumented 3 files (1 package).
[INFO] Elapsed time = 0.014 secs. (214.286 files/sec, 20,214.285 srclines/sec)
[INFO] No Clover instrumentation done on source files in: 
[
 

Re: which part of Hadoop is responsible of distributing the input file fragments to datanodes?

2012-11-15 Thread Yanbo Liang
I guess you mean setting your own strategy for block distribution.
If so, just follow this code path:
FSNamesystem.getAdditionalBlock() ---> BlockManager.chooseTarget()
 ---> BlockPlacementPolicy.chooseTarget().
You need to implement your own BlockPlacementPolicy.
Then, when the client makes an addBlock RPC call, the NameNode will assign
DataNodes to store the replicas according to your rules.
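The plugin pattern described above can be sketched in miniature: the chooser
delegates target selection to a replaceable policy object. The interface and
names below are simplified hypothetical stand-ins for the real
BlockPlacementPolicy API, which has a much richer signature:

```java
import java.util.List;

// Simplified stand-in for the pluggable placement policy contract.
interface PlacementPolicy {
    List<String> chooseTargets(List<String> liveNodes, int numReplicas);
}

// Trivial custom rule for illustration only: take the first N live nodes.
class FirstNPolicy implements PlacementPolicy {
    public List<String> chooseTargets(List<String> liveNodes, int numReplicas) {
        return liveNodes.subList(0, Math.min(numReplicas, liveNodes.size()));
    }
}
```

In real HDFS the concrete policy class is selected by configuration, so the
NameNode code path Yanbo describes never needs to change when the rule does.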

2012/11/15 salmakhalil 

> What I want to do exactly is redistribute the input file fragments over the
> nodes of the cluster according to some calculations. I need to find the part
> that starts distributing the input file so that I can add my own code in its
> place.
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/which-part-of-Hadoop-is-responsible-of-distributing-the-input-file-fragments-to-datanodes-tp4019530p4020330.html
> Sent from the Hadoop lucene-dev mailing list archive at Nabble.com.
>