[jira] [Created] (HADOOP-8386) hadoop script doesn't work if 'cd' prints to stdout (default behavior in Ubuntu)

2012-05-09 Thread Christopher Berner (JIRA)
Christopher Berner created HADOOP-8386:
--

 Summary: hadoop script doesn't work if 'cd' prints to stdout 
(default behavior in Ubuntu)
 Key: HADOOP-8386
 URL: https://issues.apache.org/jira/browse/HADOOP-8386
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 1.0.2, 0.20.205.0
 Environment: Ubuntu
Reporter: Christopher Berner


If the 'hadoop' script is run as 'bin/hadoop' on a distro where the 'cd' 
command prints to stdout, the script will fail due to this line: 'bin=`cd 
"$bin"; pwd`'

Workaround: execute from the bin/ directory as './hadoop'

Fix: change that line to 'bin=`cd "$bin" > /dev/null; pwd`'
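The behavior is easy to check in any shell by simulating a 'cd' that echoes its target (stock shells only do this when CDPATH kicks in); the wrapper function below is purely illustrative:

```shell
#!/bin/sh
# Simulate a 'cd' that prints the target directory, as a CDPATH match
# makes it do on some distros.
cd() { command cd "$@" && pwd; }

bin=/tmp
broken=$(cd "$bin"; pwd)             # captures the cd echo AND pwd: two lines
fixed=$(cd "$bin" > /dev/null; pwd)  # redirect silences cd; only pwd survives

echo "broken: [$broken]"
echo "fixed:  [$fixed]"
unset -f cd
```

With the redirect in place the substitution captures only the output of pwd, so the script works no matter how chatty 'cd' is.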

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




Re: Test failures in TestFileSystemCanonicalization

2012-05-09 Thread Eli Collins
Do you get the same failure with JDK6?

On Wednesday, May 9, 2012, Trevor Robinson wrote:

> Does anyone know anything about these test failures in 0.23.1
> (+cdh4b2) and trunk?
>
> Failed tests:
>  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
> expected: but was:
>  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
> expected: but was:
>
> $ mvn --version
> Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
> Maven home: /usr/local/apache-maven
> Java version: 1.7.0_04, vendor: Oracle Corporation
> Java home: /usr/lib/jvm/jdk1.7.0_04/jre
> Default locale: en_US, platform encoding: ISO-8859-1
> OS name: "linux", version: "3.2.0-24-generic", arch: "amd64", family:
> "unix"
>
> $ lsb_release -d
> Description:Ubuntu 12.04 LTS
>
> Thanks,
> Trevor
>


[jira] [Created] (HADOOP-8385) TestRecoveryManager fails

2012-05-09 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-8385:


 Summary: TestRecoveryManager fails
 Key: HADOOP-8385
 URL: https://issues.apache.org/jira/browse/HADOOP-8385
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.22.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Fix For: 0.22.1
 Attachments: TestRecoveryManager.patch

Error Message shown from jenkins

Timeout occurred. Please note the time in the report does not reflect the time 
until the timeout.
Stacktrace

junit.framework.AssertionFailedError: Timeout occurred. Please note the time in 
the report does not reflect the time until the timeout.





[jira] [Created] (HADOOP-8384) TestTaskTrackerLocalization fails

2012-05-09 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-8384:


 Summary: TestTaskTrackerLocalization fails 
 Key: HADOOP-8384
 URL: https://issues.apache.org/jira/browse/HADOOP-8384
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.22.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Fix For: 0.22.1
 Attachments: TestTaskTrackerLocalization.patch

There are 11 failures in TestTaskTrackerLocalization:

org.apache.hadoop.mapred.TestTaskTrackerLocalization.testUserLocalization
org.apache.hadoop.mapred.TestTaskTrackerLocalization.testJobLocalization
org.apache.hadoop.mapred.TestTaskTrackerLocalization.testJobLocalizationFailsIfLogDirUnwritable
org.apache.hadoop.mapred.TestTaskTrackerLocalization.testTaskLocalization
org.apache.hadoop.mapred.TestTaskTrackerLocalization.testTaskFilesRemoval
org.apache.hadoop.mapred.TestTaskTrackerLocalization.testFailedTaskFilesRemoval
org.apache.hadoop.mapred.TestTaskTrackerLocalization.testTaskFilesRemovalWithJvmUse
org.apache.hadoop.mapred.TestTaskTrackerLocalization.testJobFilesRemoval
org.apache.hadoop.mapred.TestTaskTrackerLocalization.testTrackerRestart
org.apache.hadoop.mapred.TestTaskTrackerLocalization.testTrackerReinit
org.apache.hadoop.mapred.TestTaskTrackerLocalization.testCleanupTaskLocalization





[jira] [Created] (HADOOP-8383) TestKerberosAuthenticator fails

2012-05-09 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-8383:


 Summary: TestKerberosAuthenticator fails
 Key: HADOOP-8383
 URL: https://issues.apache.org/jira/browse/HADOOP-8383
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.22.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Fix For: 0.22.1
 Attachments: ignore-kerberos-authenticator-test.patch

TestKerberosAuthenticator fails when executed on Jenkins

These failures are for tests that are supposed to be ignored (annotated with 
@Ignore), but Jenkins executes them and reports failures for these tests.





[jira] [Created] (HADOOP-8382) Failure in deleting user directories in Secure hadoop

2012-05-09 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-8382:


 Summary: Failure in deleting user directories in Secure hadoop
 Key: HADOOP-8382
 URL: https://issues.apache.org/jira/browse/HADOOP-8382
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.22.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Fix For: 0.22.1


This happens when security is enabled on 0.22.

When the TaskTracker starts up, it invokes MRAsyncDiskService, which moves the 
contents of scratch/tasktracker to toBeDeleted under a directory created with 
the current timestamp. The owner of this directory is hadoop (the owner of the 
TaskTracker process).

The contents of this directory are per-user directories owned by the 
individual users, but MRAsyncDiskService tries to delete the newly created 
directory as hadoop, and it fails.

2012-05-01 16:04:58,805 DEBUG org.apache.hadoop.mapred.LinuxTaskController: 
deleteAsUser: 
[/apache/hadoop-assemble-argon-0.228/hadoop-0.22-argon-0.105/bin/../bin/task-controller,
 hadoop, 3, /hadoop00/scratch/toBeDeleted/2012-04-17_16-22-03.965_0]
2012-05-01 16:04:58,809 WARN 
org.apache.hadoop.mapreduce.util.MRAsyncDiskService: Failure in deletion of 
toBeDeleted/2012-04-17_16-22-03.965_0 on /hadoop00/scratch with original name 
/hadoop00/scratch/toBeDeleted/2012-04-17_16-22-03.965_0 with exception 
org.apache.hadoop.util.Shell$ExitCodeException:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:256)
at org.apache.hadoop.util.Shell.run(Shell.java:183)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:376)
at 
org.apache.hadoop.mapred.LinuxTaskController.deleteAsUser(LinuxTaskController.java:273)
at 
org.apache.hadoop.mapreduce.util.MRAsyncDiskService$DeleteTask.run(MRAsyncDiskService.java:237)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)





Test failures in TestFileSystemCanonicalization

2012-05-09 Thread Trevor Robinson
Does anyone know anything about these test failures in 0.23.1
(+cdh4b2) and trunk?

Failed tests:
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
expected: but was:
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
expected: but was:

$ mvn --version
Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
Maven home: /usr/local/apache-maven
Java version: 1.7.0_04, vendor: Oracle Corporation
Java home: /usr/lib/jvm/jdk1.7.0_04/jre
Default locale: en_US, platform encoding: ISO-8859-1
OS name: "linux", version: "3.2.0-24-generic", arch: "amd64", family: "unix"

$ lsb_release -d
Description:Ubuntu 12.04 LTS

Thanks,
Trevor


[jira] [Created] (HADOOP-8381) Substitute _HOST with hostname for HTTP principals

2012-05-09 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-8381:


 Summary: Substitute _HOST with hostname  for HTTP principals 
 Key: HADOOP-8381
 URL: https://issues.apache.org/jira/browse/HADOOP-8381
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.22.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Fix For: 0.22.1


SPNEGO-based web authentication uses HTTP/fqdn@REALM as the Kerberos principal 
for each host.
Since it is difficult to modify the config for each host, a substitution 
feature where _HOST gets replaced by the fqdn is implemented.
The task is to provide a similar feature for the Kerberos principals used for 
SPNEGO.
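As a rough illustration of the substitution (the principal and realm below are made-up examples; the real logic is Java code in Hadoop's security utilities, not a shell one-liner):

```shell
#!/bin/sh
# Replace the _HOST token in a Kerberos principal with this machine's fqdn.
# Purely illustrative; the principal and realm are invented examples.
substitute_host() {
  principal=$1
  fqdn=$2
  printf '%s\n' "$principal" | sed "s/_HOST/$fqdn/"
}

substitute_host 'HTTP/_HOST@EXAMPLE.COM' "$(hostname -f 2>/dev/null || hostname)"
```

With this in place, a single HTTP/_HOST@EXAMPLE.COM entry in the config can serve every host in the cluster.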





[jira] [Created] (HADOOP-8380) SecondaryNamenode doesn't start up in secure cluster

2012-05-09 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-8380:


 Summary: SecondaryNamenode doesn't start up in secure cluster
 Key: HADOOP-8380
 URL: https://issues.apache.org/jira/browse/HADOOP-8380
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.22.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Fix For: 0.22.1


The SecondaryNamenode fails to start up due to an access control error. This 
is an authorization issue, not an authentication issue.





[jira] [Created] (HADOOP-8379) Fix an issue to do with setting of correct groups for tasks

2012-05-09 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-8379:


 Summary: Fix an issue to do with setting of correct groups 
for tasks
 Key: HADOOP-8379
 URL: https://issues.apache.org/jira/browse/HADOOP-8379
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.22.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Fix For: 0.22.1
 Attachments: incorrect-groups.patch

There was a recent fix related to supplemental groups.





[jira] [Created] (HADOOP-8378) Revert MAPREDUCE-2767

2012-05-09 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-8378:


 Summary: Revert MAPREDUCE-2767 
 Key: HADOOP-8378
 URL: https://issues.apache.org/jira/browse/HADOOP-8378
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.22.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Fix For: 0.22.1


MAPREDUCE-2767 removed LinuxTaskController. 
This task is to revert that change so that LinuxTaskController is reintroduced.





[jira] [Resolved] (HADOOP-8371) Hadoop 1.0.1 release - DFS rollback issues

2012-05-09 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas resolved HADOOP-8371.
-

Resolution: Not A Problem
  Assignee: Suresh Srinivas

Rollback is not a problem. 

However, I created a related bug HDFS-3393 to track the issue where rollback 
was allowed on the newer release.

> Hadoop 1.0.1 release - DFS rollback issues
> --
>
> Key: HADOOP-8371
> URL: https://issues.apache.org/jira/browse/HADOOP-8371
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 1.0.1
> Environment: All tests were done on a single node cluster, that runs 
> namenode, secondarynamenode, datanode, all on one machine, running Ubuntu 
> 12.04
>Reporter: Giri
>Assignee: Suresh Srinivas
>Priority: Minor
>  Labels: hdfs
>
> See the next comment for details.





[jira] [Created] (HADOOP-8377) Modify mapreduce build to include task-controller

2012-05-09 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-8377:


 Summary: Modify mapreduce build to include task-controller
 Key: HADOOP-8377
 URL: https://issues.apache.org/jira/browse/HADOOP-8377
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 0.22.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Fix For: 0.22.1
 Attachments: task-controller-build.patch

Secure hadoop requires task-controller.
Task-controller has to be built as part of mapreduce build.





[jira] [Created] (HADOOP-8376) Fix hdfs for

2012-05-09 Thread Benoy Antony (JIRA)
Benoy Antony created HADOOP-8376:


 Summary: Fix hdfs for 
 Key: HADOOP-8376
 URL: https://issues.apache.org/jira/browse/HADOOP-8376
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 0.22.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Fix For: 0.22.1


Starting a secure datanode fails with the following error:

09/04/2012 12:09:30 2524 jsvc error: Invalid option -server
09/04/2012 12:09:30 2524 jsvc error: Cannot parse command line arguments





[jira] [Created] (HADOOP-8375) test-patch should stop immediately once it has found compilation errors

2012-05-09 Thread Tsz Wo (Nicholas), SZE (JIRA)
Tsz Wo (Nicholas), SZE created HADOOP-8375:
--

 Summary: test-patch should stop immediately once it has found 
compilation errors
 Key: HADOOP-8375
 URL: https://issues.apache.org/jira/browse/HADOOP-8375
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Tsz Wo (Nicholas), SZE


It does not make sense to run the findbugs or javadoc checks if the program 
does not compile.  That was the behavior previously, if I remember correctly.





[jira] [Created] (HADOOP-8374) Improve support for hard link manipulation on Windows

2012-05-09 Thread Bikas Saha (JIRA)
Bikas Saha created HADOOP-8374:
--

 Summary: Improve support for hard link manipulation on Windows
 Key: HADOOP-8374
 URL: https://issues.apache.org/jira/browse/HADOOP-8374
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Bikas Saha
Assignee: Bikas Saha
 Fix For: 1.0.3


Hard link support for Windows does not work properly, and some refactoring is 
needed in the code. Also, the code currently executes the fsutil command to 
manipulate hard links, and fsutil requires admin privileges on recent versions 
of Windows. The main features needed are the ability to create hard links to a 
file and to count the number of hard links to a file. So we could use mklink 
to create hard links and write a custom executable to count hard links, or use 
a custom executable to do both via the Windows APIs.





Re: Sailfish

2012-05-09 Thread Otis Gospodnetic
Hi Sriram,

>> The I-file concept could possibly be implemented here in a fairly self 
>> contained way. One
>> could even colocate/embed a KFS filesystem with such an alternate
>> shuffle, like how MR task temporary space is usually colocated with
>> HDFS storage.

>  Exactly.

>> Does this seem reasonable in any way?

> Great. Where do we go from here?  How do we get a collaborative effort going? 


Sounds like a JIRA issue should be opened, the approach briefly described, and 
the first implementation attempt made.  Then iterate.

I look forward to seeing this! :)

Otis
--

Performance Monitoring for Solr / ElasticSearch / HBase - 
http://sematext.com/spm 



>
> From: Sriram Rao 
>To: common-dev@hadoop.apache.org 
>Sent: Tuesday, May 8, 2012 6:48 PM
>Subject: Re: Sailfish
> 
>Dear Andy,
>
>> From: Andrew Purtell 
>> ...
>
>> Do you intend this to be a joint project with the Hadoop community or
>> a technology competitor?
>
>As I had said in my email, we are looking for folks to collaborate
>with us to help get us integrated with Hadoop.  So, to be explicitly
>clear, we are intending for this to be a joint project with the
>community.
>
>> Regrettably, KFS is not a "drop in replacement" for HDFS.
>> Hypothetically: I have several petabytes of data in an existing HDFS
>> deployment, which is the norm, and a continuous MapReduce workflow.
>> How do you propose I, practically, migrate to something like Sailfish
>> without a major capital expenditure and/or downtime and/or data loss?
>
>Well, we are not asking for KFS to replace HDFS.  One path you could
>take is to experiment with Sailfish---use KFS just for the
>intermediate data and HDFS for everything else.  There is no major
>capex :).  While you get comfy with pushing intermediate data into a
>DFS, we get the ideas added to HDFS.  This simplifies deployment
>considerations.
>
>> However, can the Sailfish I-files implementation be plugged in as an
>> alternate Shuffle implementation in MRv2 (see MAPREDUCE-3060 and
>> MAPREDUCE-4049),
>
>This'd be great!
>
>> with necessary additional plumbing for dynamic
>> adjustment of reduce task population? And the workbuilder could be
>> part of an alternate MapReduce Application Manager?
>
>It should be part of the AM.  (Currently, with our implementation in
>Hadoop-0.20.2, the workbuilder serves the role of an AM).
>
>> The I-file concept could possibly be implemented here in a fairly self 
>> contained way. One
>> could even colocate/embed a KFS filesystem with such an alternate
>> shuffle, like how MR task temporary space is usually colocated with
>> HDFS storage.
>
>Exactly.
>
>> Does this seem reasonable in any way?
>
>Great. Where do we go from here?  How do we get a collaborative effort going?
>
>Best,
>
>Sriram
>
>>>  From: Sriram Rao 
>>> To: common-dev@hadoop.apache.org
>>> Sent: Tuesday, May 8, 2012 10:32 AM
>>> Subject: Project announcement: Sailfish (also, looking for collaborators)
>>>
>>> Hi,
>>>
>>> I'd like to announce the release of a new open source project, Sailfish.
>>>
>>> http://code.google.com/p/sailfish/
>>>
>>> Sailfish tries to improve Hadoop performance, particularly for large jobs
>>> which process TBs of data and run for hours.  In building Sailfish, we
>>> modify how map output is handled and transported from map->reduce.
>>>
>>> The project pages provide more information about the project.
>>>
>>> We are looking for collaborators who can help get some of the ideas into
>>> Apache Hadoop. A possible step forward could be to make the "shuffle" phase
>>> of Hadoop pluggable.
>>>
>>> If you are interested in working with us, please get in touch with me.
>>>
>>> Sriram
>>
>
>
>
>-- 
>Best regards,
>
>   - Andy
>
>Problems worthy of attack prove their worth by hitting back. - Piet
>Hein (via Tom White)
>
>
>

[jira] [Created] (HADOOP-8373) Port RPC.getServerAddress to 0.23

2012-05-09 Thread Daryn Sharp (JIRA)
Daryn Sharp created HADOOP-8373:
---

 Summary: Port RPC.getServerAddress to 0.23
 Key: HADOOP-8373
 URL: https://issues.apache.org/jira/browse/HADOOP-8373
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 0.23.3
Reporter: Daryn Sharp
Assignee: Daryn Sharp


{{RPC.getServerAddress}} was introduced in trunk/2.0 as part of larger HA 
changes.  0.23 does not have HA, but this non-HA-specific method is needed.





[jira] [Created] (HADOOP-8372) normalizeHostName() in NetUtils is not working properly in resolving a hostname starting with a numeric character

2012-05-09 Thread Junping Du (JIRA)
Junping Du created HADOOP-8372:
--

 Summary: normalizeHostName() in NetUtils is not working properly 
in resolving a hostname starting with a numeric character
 Key: HADOOP-8372
 URL: https://issues.apache.org/jira/browse/HADOOP-8372
 Project: Hadoop Common
  Issue Type: Bug
  Components: io, util
Affects Versions: 0.23.0, 1.0.0
Reporter: Junping Du
Assignee: Junping Du


A valid host name can start with a numeric character (see RFC 952, RFC 1123, 
or http://www.zytrax.com/books/dns/apa/names.html), so it is possible that in 
a production environment users name their Hadoop nodes 1hosta, 2hostb, etc. 
But normalizeHostName() recognises such a hostname as an IP address and 
returns it directly rather than resolving the real IP address. These nodes 
will fail to get the correct network topology if the topology 
script/TableMapping only contains their IPs (without hostnames).
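The misclassification can be sketched like this (the leading-digit test is a stand-in for the real "looks like an IP" check, which is Java code in NetUtils; it is an assumption for illustration, not the actual implementation):

```shell
#!/bin/sh
# Stand-in for the heuristic that mistakes digit-leading hostnames for IPs.
looks_like_ip() {
  case $1 in
    [0-9]*) return 0 ;;  # starts with a digit: (wrongly) treated as an IP
    *)      return 1 ;;
  esac
}

looks_like_ip 1hosta && echo "1hosta returned as-is, never resolved (the bug)"
looks_like_ip hosta  || echo "hosta goes through normal DNS resolution"
```

A node named 1hosta therefore never gets its real IP looked up, so an IP-only topology script/TableMapping cannot place it in the right rack.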





Jenkins build is back to normal : Hadoop-Common-0.23-Build #247

2012-05-09 Thread Apache Jenkins Server
See 



Build failed in Jenkins: Hadoop-Common-trunk #401

2012-05-09 Thread Apache Jenkins Server
See 

Changes:

[szetszwo] Remove the empty file FSInodeInfo.java for HDFS-3363.

[umamahesh] HDFS-3157. Error in deleting block is keep on coming from DN even 
after the block report and directory scanning has happened. Contributed by 
Ashish Singhi.

[szetszwo] MAPREDUCE-4231. Update RAID to use the new BlockCollection interface.

[tgraves] MAPREDUCE-4215. RM app page shows 500 error on appid parse error 
(Jonathon Eagles via tgraves)

[bobby] MAPREDUCE-3850. Avoid redundant calls for tokens in TokenCache (Daryn 
Sharp via bobby)

[bobby] MAPREDUCE-4162. Correctly set token service (Daryn Sharp via bobby)

[bobby] HADOOP-8341. Fix or filter findbugs issues in hadoop-tools (bobby)

--
[...truncated 45264 lines...]
[DEBUG]   (f) reactorProjects = [MavenProject: 
org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT @ 

 MavenProject: org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT @ 

 MavenProject: org.apache.hadoop:hadoop-auth-examples:3.0.0-SNAPSHOT @ 

 MavenProject: org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT @ 

 MavenProject: org.apache.hadoop:hadoop-common-project:3.0.0-SNAPSHOT @ 

[DEBUG]   (f) useDefaultExcludes = true
[DEBUG]   (f) useDefaultManifestFile = false
[DEBUG] -- end configuration --
[INFO] 
[INFO] --- maven-enforcer-plugin:1.0:enforce (dist-enforce) @ 
hadoop-common-project ---
[DEBUG] Configuring mojo 
org.apache.maven.plugins:maven-enforcer-plugin:1.0:enforce from plugin realm 
ClassRealm[plugin>org.apache.maven.plugins:maven-enforcer-plugin:1.0, parent: 
sun.misc.Launcher$AppClassLoader@126b249]
[DEBUG] Configuring mojo 
'org.apache.maven.plugins:maven-enforcer-plugin:1.0:enforce' with basic 
configurator -->
[DEBUG]   (s) fail = true
[DEBUG]   (s) failFast = false
[DEBUG]   (f) ignoreCache = false
[DEBUG]   (s) project = MavenProject: 
org.apache.hadoop:hadoop-common-project:3.0.0-SNAPSHOT @ 

[DEBUG]   (s) version = [3.0.2,)
[DEBUG]   (s) version = 1.6
[DEBUG]   (s) rules = 
[org.apache.maven.plugins.enforcer.RequireMavenVersion@d92cab, 
org.apache.maven.plugins.enforcer.RequireJavaVersion@3bb18]
[DEBUG]   (s) session = org.apache.maven.execution.MavenSession@1a5770
[DEBUG]   (s) skip = false
[DEBUG] -- end configuration --
[DEBUG] Executing rule: org.apache.maven.plugins.enforcer.RequireMavenVersion
[DEBUG] Rule org.apache.maven.plugins.enforcer.RequireMavenVersion is cacheable.
[DEBUG] Key org.apache.maven.plugins.enforcer.RequireMavenVersion -937312197 
was found in the cache
[DEBUG] The cached results are still valid. Skipping the rule: 
org.apache.maven.plugins.enforcer.RequireMavenVersion
[DEBUG] Executing rule: org.apache.maven.plugins.enforcer.RequireJavaVersion
[DEBUG] Rule org.apache.maven.plugins.enforcer.RequireJavaVersion is cacheable.
[DEBUG] Key org.apache.maven.plugins.enforcer.RequireJavaVersion 48569 was 
found in the cache
[DEBUG] The cached results are still valid. Skipping the rule: 
org.apache.maven.plugins.enforcer.RequireJavaVersion
[INFO] 
[INFO] --- maven-site-plugin:3.0:attach-descriptor (attach-descriptor) @ 
hadoop-common-project ---
[DEBUG] Configuring mojo 
org.apache.maven.plugins:maven-site-plugin:3.0:attach-descriptor from plugin 
realm ClassRealm[plugin>org.apache.maven.plugins:maven-site-plugin:3.0, parent: 
sun.misc.Launcher$AppClassLoader@126b249]
[DEBUG] Configuring mojo 
'org.apache.maven.plugins:maven-site-plugin:3.0:attach-descriptor' with basic 
configurator -->
[DEBUG]   (f) basedir = 

[DEBUG]   (f) inputEncoding = UTF-8
[DEBUG]   (f) localRepository =id: local
  url: file:///home/jenkins/.m2/repository/
   layout: none

[DEBUG]   (f) outputEncoding = UTF-8
[DEBUG]   (f) pomPackagingOnly = true
[DEBUG]   (f) project = MavenProject: 
org.apache.hadoop:hadoop-common-project:3.0.0-SNAPSHOT @ 

[DEBUG]   (f) reactorProjects = [MavenProject: 
org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT @ 

 MavenProject: org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT @