[jira] [Created] (HADOOP-8577) The RPC must have failed proxyUser (auth:SIMPLE) via realus...@hadoop.apache.org (auth:SIMPLE)

2012-07-08 Thread chandrashekhar Kotekar (JIRA)
chandrashekhar Kotekar created HADOOP-8577:
--

 Summary: The RPC must have failed proxyUser (auth:SIMPLE) via 
realus...@hadoop.apache.org (auth:SIMPLE)
 Key: HADOOP-8577
 URL: https://issues.apache.org/jira/browse/HADOOP-8577
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Ubuntu 11
JDK 1.7
Maven 3.0.4
Reporter: chandrashekhar Kotekar
Priority: Minor


Hi,

I downloaded the Hadoop source code today and tried to test it with Maven. I 
ran the following steps:
1) mvn clean
2) mvn compile
3) mvn test

After the 3rd step, several tests failed. The list of failed tests is as follows:

Failed tests:
  testRealUserIPNotSpecified(org.apache.hadoop.security.TestDoAsEffectiveUser): The RPC must have failed proxyUser (auth:SIMPLE) via realus...@hadoop.apache.org (auth:SIMPLE)
  testWithDirStringAndConf(org.apache.hadoop.fs.shell.TestPathData): checking exist
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization): expected:<myfs://host.a.b:123> but was:<myfs://host.a:123>
  testFullAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization): expected:<null> but was:<java.lang.IllegalArgumentException: Wrong FS: myfs://host/file, expected: myfs://host.a.b>
  testShortAuthorityWithDefaultPort(org.apache.hadoop.fs.TestFileSystemCanonicalization): expected:<myfs://host.a.b:123> but was:<myfs://host:123>
  testPartialAuthorityWithDefaultPort(org.apache.hadoop.fs.TestFileSystemCanonicalization): expected:<myfs://host.a.b:123> but was:<myfs://host.a:123>
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization): expected:<myfs://host.a.b:123> but was:<myfs://host:123>
  testIpAuthorityWithOtherPort(org.apache.hadoop.fs.TestFileSystemCanonicalization): expected:<myfs://127.0.0.1:456> but was:<myfs://localhost:456>
  testAuthorityFromDefaultFS(org.apache.hadoop.fs.TestFileSystemCanonicalization): expected:<myfs://host.a.b:123> but was:<myfs://host:123>
  testFullAuthorityWithDefaultPort(org.apache.hadoop.fs.TestFileSystemCanonicalization): expected:<null> but was:<java.lang.IllegalArgumentException: Wrong FS: myfs://host/file, expected: myfs://host.a.b:123>
  testShortAuthorityWithOtherPort(org.apache.hadoop.fs.TestFileSystemCanonicalization): expected:<myfs://host.a.b:456> but was:<myfs://host:456>
  testPartialAuthorityWithOtherPort(org.apache.hadoop.fs.TestFileSystemCanonicalization): expected:<myfs://host.a.b:456> but was:<myfs://host.a:456>
  testFullAuthorityWithOtherPort(org.apache.hadoop.fs.TestFileSystemCanonicalization): expected:<null> but was:<java.lang.IllegalArgumentException: Wrong FS: myfs://host:456/file, expected: myfs://host.a.b:456>
  testIpAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization): expected:<myfs://127.0.0.1:123> but was:<myfs://localhost:123>
  testIpAuthorityWithDefaultPort(org.apache.hadoop.fs.TestFileSystemCanonicalization): expected:<myfs://127.0.0.1:123> but was:<myfs://localhost:123>

Tests in error:
  testUnqualifiedUriContents(org.apache.hadoop.fs.shell.TestPathData): `d1': No such file or directory

I am a newbie to the Hadoop source code world. Please help me build the Hadoop 
source code.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-8577) The RPC must have failed proxyUser (auth:SIMPLE) via realus...@hadoop.apache.org (auth:SIMPLE)

2012-07-08 Thread Harsh J (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-8577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Harsh J resolved HADOOP-8577.
-

Resolution: Invalid

JIRA is for tracking issues with the project, not for user/dev help. Please ask 
your question on the common-dev[at]hadoop.apache.org mailing list instead, and 
refrain from posting general questions on JIRA. Thanks! :)

P.S. The issue is your OS. Fix your /etc/hosts to use the format "IP FQDN 
ALIAS" instead of "IP ALIAS FQDN". In any case, please mail the right 
user/dev group. See http://hadoop.apache.org/mailing_lists.html
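For illustration, an entry in the recommended order looks like the line below 
(the IP address and hostnames are placeholders, not values from this thread):

```
# /etc/hosts: IP address, then fully qualified domain name, then alias
192.168.1.10   host.a.b   host
```

With the FQDN listed first, hostname lookups resolve to the fully qualified 
name, which appears to be what the failing canonicalization tests expect.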

 The RPC must have failed proxyUser (auth:SIMPLE) via 
 realus...@hadoop.apache.org (auth:SIMPLE)
 --

 Key: HADOOP-8577
 URL: https://issues.apache.org/jira/browse/HADOOP-8577
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Ubuntu 11
 JDK 1.7
 Maven 3.0.4
Reporter: chandrashekhar Kotekar
Priority: Minor
   Original Estimate: 12h
  Remaining Estimate: 12h






Re: fs.trash.interval

2012-07-08 Thread Harsh J
Hi,

I'm not sure why you're asking how to stop. Can you not ^C (Ctrl-C)
the running 'hadoop fs -rm' command and start over?

^C
hadoop fs -rm -r -skipTrash /path
hadoop fs -rm -r -skipTrash .Trash

Also, please send user queries to the common-user@ group, not the
common-dev@ group, which is for project development.

On Mon, Jul 9, 2012 at 2:58 AM, abhiTowson cal
abhishek.dod...@gmail.com wrote:
 Hi,
 We have a very large sample dataset to delete from HDFS, but we don't
 need this data to end up in the trash (the trash interval is enabled).
 Unfortunately we started deleting the data without the skip-trash
 option, and it is taking a very long time to move the data into the
 trash. Can you please help us stop this delete and restart it with
 skip trash?



-- 
Harsh J


[jira] [Created] (HADOOP-8578) Provide a mechanism for cleaning config items from LocalDirAllocator which will not be used anymore

2012-07-08 Thread Devaraj K (JIRA)
Devaraj K created HADOOP-8578:
-

 Summary: Provide a mechanism for cleaning config items from 
LocalDirAllocator which will not be used anymore
 Key: HADOOP-8578
 URL: https://issues.apache.org/jira/browse/HADOOP-8578
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Devaraj K


If we use DefaultContainerExecutor, one config item is added to 
LocalDirAllocator.contexts for every application and is never deleted, so the 
NM eventually throws an OOM error. This has been fixed by MAPREDUCE-4379, which 
adds a removeContext() API to LocalDirAllocator and explicitly deletes the 
context after the application completes.


It would be good if we could clean config items out of the LocalDirAllocator 
cache once they are no longer in use.
https://issues.apache.org/jira/browse/MAPREDUCE-4379?focusedCommentId=13407237&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13407237
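As a rough sketch of the idea (class and method names here are illustrative, 
not the actual Hadoop code): contexts are cached in a map keyed by the config 
item, and the fix is to drop the entry once it is no longer needed:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch only -- not the real LocalDirAllocator code.
// Each config item gets a cached context; without removal the map
// grows by one entry per application and is never drained.
public class AllocatorContextCache {
    private final Map<String, Object> contexts = new ConcurrentHashMap<>();

    // Called on every allocation; creates the context on first use.
    public Object getContext(String contextCfgItem) {
        return contexts.computeIfAbsent(contextCfgItem, k -> new Object());
    }

    // Analogue of the removeContext() API added by MAPREDUCE-4379:
    // drop the cached context once the application has completed.
    public void removeContext(String contextCfgItem) {
        contexts.remove(contextCfgItem);
    }

    public int size() {
        return contexts.size();
    }
}
```

Without the removeContext() call, size() grows by one per distinct config item 
for the lifetime of the process, which matches the OOM symptom described above.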





[jira] [Resolved] (HADOOP-8166) Remove JDK 1.5 dependency from building forrest docs

2012-07-08 Thread Matt Foley (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Matt Foley resolved HADOOP-8166.


Resolution: Duplicate

 Remove JDK 1.5 dependency from building forrest docs
 

 Key: HADOOP-8166
 URL: https://issues.apache.org/jira/browse/HADOOP-8166
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 0.20.203.0, 0.20.204.0, 0.20.205.0, 1.0.0, 1.0.1
Reporter: Mark Butler
 Attachments: forrest.patch, hadoop-8166.txt


 Currently Hadoop requires both JDK 1.6 and JDK 1.5. JDK 1.5 is a requirement 
 of Forrest. It is easy to remove the latter requirement by turning off 
 forrest.validate.sitemap and forrest.validate.skins.stylesheets.
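If I read the Forrest setup correctly, the two switches can be disabled in the 
project's forrest.properties (property names taken from the issue text above; 
treat the file location as an assumption):

```
# forrest.properties: skip the validation steps that pull in the JDK 1.5 toolchain
forrest.validate.sitemap=false
forrest.validate.skins.stylesheets=false
```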
