[jira] [Resolved] (HADOOP-7211) Security uses proprietary Sun APIs

2012-04-05 Thread Luke Lu (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Lu resolved HADOOP-7211.
-

  Resolution: Duplicate
Target Version/s: 1.0.3, 2.0.0

This jira is incorporated by the patches in HADOOP-6941 and HADOOP-8251

> Security uses proprietary Sun APIs
> --
>
> Key: HADOOP-7211
> URL: https://issues.apache.org/jira/browse/HADOOP-7211
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Eli Collins
>Assignee: Luke Lu
>
> The security code uses the KrbException, Credentials, and PrincipalName 
> classes from sun.security.krb5, and Krb5Util from sun.security.jgss.krb5. 
> These may disappear in future Java releases. Hadoop also does not compile 
> with JDKs that do not provide them, for example the following IBM JDK.
> {noformat}
> $ /home/eli/toolchain/java-x86_64-60/bin/java -version
> java version "1.6.0"
> Java(TM) SE Runtime Environment (build pxa6460sr9fp1-20110208_03(SR9 FP1))
> IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Linux amd64-64 
> jvmxa6460sr9-20110203_74623 (JIT enabled, AOT enabled)
> J9VM - 20110203_074623
> JIT  - r9_20101028_17488ifx3
> GC   - 20101027_AA)
> JCL  - 20110203_01
> {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Resolved] (HADOOP-8077) HA: fencing method should be able to be configured on a per-NN or per-NS basis

2012-04-05 Thread Todd Lipcon (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon resolved HADOOP-8077.
-

   Resolution: Fixed
Fix Version/s: 2.0.0
 Hadoop Flags: Reviewed

re-ran all the fencing/failover tests and they passed. Committed to branch-2 
and trunk. Thanks for the review.

> HA: fencing method should be able to be configured on a per-NN or per-NS basis
> --
>
> Key: HADOOP-8077
> URL: https://issues.apache.org/jira/browse/HADOOP-8077
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ha
>Affects Versions: 0.24.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Fix For: 2.0.0
>
> Attachments: hadoop-8077.txt, hadoop-8077.txt
>
>
> Currently, the fencing method configuration is global. Given that different 
> nameservices may use different underlying storage mechanisms or different 
> types of PDUs, it would be preferable to allow the fencing method 
> configuration to be scoped by namenode or nameservice.
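As a rough illustration of the idea (the scoped key names below are hypothetical, not necessarily the committed syntax), a global default could be overridden for one namenode of one nameservice:

```xml
<!-- Global default fencing method -->
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>

<!-- Hypothetical scoped override for namenode "nn1" in nameservice "ns1";
     the PDU script path is an illustrative placeholder. -->
<property>
  <name>dfs.ha.fencing.methods.ns1.nn1</name>
  <value>shell(/usr/local/bin/pdu-fence.sh)</value>
</property>
```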





[jira] [Created] (HADOOP-8251) SecurityUtil.fetchServiceTicket broken after HADOOP-6941

2012-04-05 Thread Todd Lipcon (Created) (JIRA)
SecurityUtil.fetchServiceTicket broken after HADOOP-6941


 Key: HADOOP-8251
 URL: https://issues.apache.org/jira/browse/HADOOP-8251
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.1.0, 2.0.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Blocker
 Attachments: hadoop-8251.txt

HADOOP-6941 replaced direct references to some classes with reflective access 
so as to support other JDKs. Unfortunately there was a mistake in the name of 
the Krb5Util class, which broke fetchServiceTicket. This manifests itself as 
the inability to run checkpoints or other krb5-SSL HTTP-based transfers:

java.lang.ClassNotFoundException: sun.security.jgss.krb5
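The pattern behind HADOOP-6941's change — probing for a class by name at runtime instead of importing it — can be sketched generically (an illustration, not the actual SecurityUtil code). Note how passing a package name instead of a fully qualified class name reproduces exactly this ClassNotFoundException on any JDK:

```java
public class ReflectiveLoad {
    // Probe for a class at runtime instead of referencing it at compile
    // time, so the code still builds on JDKs that lack proprietary Sun
    // classes.
    static boolean isAvailable(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The HADOOP-8251 bug: "sun.security.jgss.krb5" is a package, not
        // a class, so Class.forName can never resolve it on any JDK.
        System.out.println(isAvailable("sun.security.jgss.krb5"));
        // A fully qualified class name that exists everywhere:
        System.out.println(isAvailable("java.util.ArrayList"));
    }
}
```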





Re: hadoop build problems

2012-04-05 Thread Hennig, Ryan
You'll probably get better help if you give details on what OS you're
using, which JDK, version of hadoop, an example error, etc.

- Ryan





hadoop build problems

2012-04-05 Thread Ranjan Banerjee
Hello,
  We were trying to build the hadoop source code but are getting a
lot of errors (100 to be precise). We used ant to build
hadoop, and tried it from both the HADOOP_HOME folder and the src folder,
but the errors persist. Can someone please help us out?

Thanking you

Yours faithfully
Ranjan Banerjee
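When reporting a failed build, it can help to capture the output and summarize it first. A sketch of one way to do that (build.log below is a canned sample, not real output):

```shell
# Capture build output, then count and classify the errors before
# mailing the list. Normally you would produce build.log with something
# like: ant clean jar 2>&1 | tee build.log
cat > build.log <<'EOF'
compile:
  [javac] Foo.java:10: cannot find symbol
  [javac] Bar.java:22: package sun.security.krb5 does not exist
  [javac] 2 errors
BUILD FAILED
EOF
grep -c '\[javac\].*:' build.log    # how many javac diagnostics
grep -m1 'BUILD FAILED' build.log   # confirm the overall status
```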


Jenkins build for branch-2?

2012-04-05 Thread Jason Lowe
Are there plans to set up the regular Jenkins builds for branch-2?  I 
noticed branch-2 had a build failure recently, but there wasn't a 
corresponding failure message sent to the dev list.


Jason



[jira] [Resolved] (HADOOP-6963) Fix FileUtil.getDU. It should not include the size of the directory or follow symbolic links

2012-04-05 Thread Tsz Wo (Nicholas), SZE (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE resolved HADOOP-6963.


  Resolution: Fixed
   Fix Version/s: (was: 3.0.0)
  (was: 2.0.0)
  1.0.3
Target Version/s: 1.0.3, 0.23.3  (was: 0.23.3, 1.0.3)

I also committed the patch to branch-1 and branch-1.0.

> Fix FileUtil.getDU. It should not include the size of the directory or follow 
> symbolic links
> 
>
> Key: HADOOP-6963
> URL: https://issues.apache.org/jira/browse/HADOOP-6963
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.205.0, 0.23.1
>Reporter: Owen O'Malley
>Assignee: Ravi Prakash
>Priority: Critical
> Fix For: 1.0.3, 0.23.3
>
> Attachments: HADOOP-6963.branch-1.0.2.patch, 
> HADOOP-6963.branch-1.0.2.patch, HADOOP-6963.branch-1.patch, 
> HADOOP-6963.branch-23.patch, HADOOP-6963.branch-23.patch, 
> HADOOP-6963.branch-23.patch
>
>
> The getDU method should not include the size of the directory. The Java 
> interface says that the value is undefined, and on Linux/Sun it returns 4096 
> for the inode. Clearly this isn't useful.
> It also recursively calls itself. If the directory has a symbolic link 
> forming a cycle, getDU keeps spinning in the cycle. In our case, we saw this 
> in the org.apache.hadoop.mapred.JobLocalizer.downloadPrivateCacheObjects 
> call. This prevented other tasks on the same node from committing, causing 
> the TT to become effectively useless (because the JT thinks it already has 
> enough tasks running).
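A minimal sketch of the corrected behavior described above (illustrative only, not the actual Hadoop patch; it uses NIO's symlink check, and the class and helper names are mine):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DiskUsage {
    // Sum of file lengths under dir. Unlike the buggy getDU, this neither
    // adds the directory's own length() (the ~4096-byte inode entry) nor
    // recurses into symlinks, so a link cycle cannot make it spin forever.
    static long getDU(File dir) {
        long size = 0;
        File[] children = dir.listFiles();
        if (children == null) return 0;
        for (File f : children) {
            if (Files.isSymbolicLink(f.toPath())) continue; // skip links
            size += f.isDirectory() ? getDU(f) : f.length();
        }
        return size;
    }

    public static void main(String[] args) throws IOException {
        // Build a small tree: a 10-byte file plus a subdir with a 5-byte file.
        Path root = Files.createTempDirectory("du");
        Files.write(root.resolve("a.bin"), new byte[10]);
        Path sub = Files.createDirectory(root.resolve("sub"));
        Files.write(sub.resolve("b.bin"), new byte[5]);
        System.out.println(getDU(root.toFile())); // directories add nothing
    }
}
```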





[jira] [Reopened] (HADOOP-6963) Fix FileUtil.getDU. It should not include the size of the directory or follow symbolic links

2012-04-05 Thread Ravi Prakash (Reopened) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash reopened HADOOP-6963:
--


Reopening for committing to branch-1.

> Fix FileUtil.getDU. It should not include the size of the directory or follow 
> symbolic links
> 
>
> Key: HADOOP-6963
> URL: https://issues.apache.org/jira/browse/HADOOP-6963
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.205.0, 0.23.1
>Reporter: Owen O'Malley
>Assignee: Ravi Prakash
>Priority: Critical
> Fix For: 0.23.3, 2.0.0, 3.0.0
>
> Attachments: HADOOP-6963.branch-1.0.2.patch, 
> HADOOP-6963.branch-1.0.2.patch, HADOOP-6963.branch-1.patch, 
> HADOOP-6963.branch-23.patch, HADOOP-6963.branch-23.patch, 
> HADOOP-6963.branch-23.patch
>
>
> The getDU method should not include the size of the directory. The Java 
> interface says that the value is undefined, and on Linux/Sun it returns 4096 
> for the inode. Clearly this isn't useful.
> It also recursively calls itself. If the directory has a symbolic link 
> forming a cycle, getDU keeps spinning in the cycle. In our case, we saw this 
> in the org.apache.hadoop.mapred.JobLocalizer.downloadPrivateCacheObjects 
> call. This prevented other tasks on the same node from committing, causing 
> the TT to become effectively useless (because the JT thinks it already has 
> enough tasks running).





About contributing and patching

2012-04-05 Thread Sefa Irken
Hello everyone,

I am new to Hadoop JIRA, and this is also my first Apache project. I want
to contribute and submit patches, and I have some very basic questions.


   - Can you give a short overview of the hadoop branches? I see
   similarities between trunk and the 0.2x branch, but what about the 1.x
   branch?
   - I have prepared patches for a few jira issues and uploaded them as
   files to those issues. Should I wait for someone or something before
   submitting them as patches?

I have already read http://wiki.apache.org/hadoop/HowToContribute and
http://wiki.apache.org/hadoop/GitAndHadoop.

Thanks.
-- 
*Sefa İrken*
*Software Developer.*


Re: Problem with Hadoop simple program

2012-04-05 Thread shashwat shriparv
You don't need to format it every day; just format once, and whenever you
start your PC, start hadoop with bin/start-all.sh and check with jps
whether the namenode, datanode and task tracker have started.
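For example, a quick way to check which daemons are up (the jps listing below is canned sample output; on a real machine you would pipe jps itself):

```shell
# Filter a jps listing for the daemons a 1.x pseudo-distributed setup
# should be running. Replace the canned sample with: sample_jps_output=$(jps)
sample_jps_output='12001 NameNode
12102 DataNode
12203 SecondaryNameNode
12304 JobTracker
12405 TaskTracker
12506 Jps'
for daemon in NameNode DataNode JobTracker TaskTracker; do
  echo "$sample_jps_output" | grep -q "$daemon" && echo "$daemon is running"
done
```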




-- 


∞
Shashwat Shriparv


[jira] [Created] (HADOOP-8250) Investigate compatibility of symlink usage

2012-04-05 Thread Bikas Saha (Created) (JIRA)
Investigate compatibility of symlink usage
--

 Key: HADOOP-8250
 URL: https://issues.apache.org/jira/browse/HADOOP-8250
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 1.1.0
Reporter: Bikas Saha
Assignee: Bikas Saha
 Fix For: 1.1.0


The current Windows patch replaces symlink with copy. This jira tracks 
understanding the implications of this on expected functionality.
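The semantic difference being investigated can be sketched in a few lines: a symlink tracks later changes to its target, while a copy is a frozen snapshot. This is an illustration, not the Windows patch itself; the fallback-to-copy shape assumes link creation may fail on some platforms.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class LinkOrCopy {
    // Try to create a real symlink; fall back to a plain copy where links
    // are unavailable (the situation the Windows patch handles with copies).
    static boolean linkOrCopy(Path target, Path link) throws IOException {
        try {
            Files.createSymbolicLink(link, target);
            return true;           // a link: follows later changes to target
        } catch (UnsupportedOperationException | IOException e) {
            Files.copy(target, link);
            return false;          // a copy: frozen at creation time
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("lnk");
        Path target = Files.write(dir.resolve("t.txt"), "v1".getBytes());
        Path link = dir.resolve("l.txt");
        boolean linked = linkOrCopy(target, link);
        Files.write(target, "v2".getBytes()); // mutate the target afterwards
        // a real symlink now reads "v2"; a copy would still read "v1"
        System.out.println(new String(Files.readAllBytes(link)));
    }
}
```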





Re: Problem with Hadoop simple program

2012-04-05 Thread prem vishnoi
Hi Team,
I am trying to run hadoop on a Mac system.
On the first day hadoop was working, but now I am unable to format the
NAMENODE: I get a WARNING that the directory has been locked.
Can you help me unlock the directory?
Also, my hadoop is not stopping when I try from the unix command line:
 bin/start-all.sh

Thanks,
Prema Vishnoi



Re: Adding documentation patches to hadoop

2012-04-05 Thread Harsh J
Amir,

Perhaps you'd prefer maintaining a Wiki page at
http://wiki.apache.org/hadoop/ for this?

Or if not, please read up on
http://wiki.apache.org/hadoop/HowToContribute. Do open up a JIRA to
discuss the specifics.

>



-- 
Harsh J


Re: Problem with Hadoop simple program

2012-04-05 Thread shashwat shriparv
Check your hosts file settings.




-- 


∞
Shashwat Shriparv


Re: Problem with Hadoop simple program

2012-04-05 Thread madhu phatak
Did you format the namenode before running the code?


Problem with Hadoop simple program

2012-04-05 Thread VaniJay

Hi,

I am trying to learn hadoop and I am stuck on a simple program
execution. I am using windows and eclipse to run it.

I have started the daemons with bin/start-all.sh
I am able to successfully bring up the job tracker at
http://localhost:50030/jobtracker.jsp

But I am not able to view the name node. When I visit
http://localhost:50070/ I get "page cannot be displayed". To me it appears
that the namenode is not properly set up.

When I run the code from eclipse, it keeps printing:
"Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0
time(s)."
"Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1
time(s)."


I have attached my configuration files:

Any help would greatly help me to proceed.

Thanks!
http://old.nabble.com/file/p33574567/core-site.xml core-site.xml 
http://old.nabble.com/file/p33574567/hdfs-site.xml hdfs-site.xml 
http://old.nabble.com/file/p33574567/mapred-site.xml mapred-site.xml 
-- 
View this message in context: 
http://old.nabble.com/Problem-with-Hadoop-simple-program-tp33574567p33574567.html
Sent from the Hadoop core-dev mailing list archive at Nabble.com.
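The endless "Retrying connect to server: localhost/127.0.0.1:9000" loop usually means no NameNode is listening at the address configured as fs.default.name; if http://localhost:50070/ is also down, the NameNode process most likely failed to start (check its log under logs/). A typical 1.x-era pseudo-distributed core-site.xml looks roughly like this (values illustrative; the port must match the one the client is retrying):

```xml
<configuration>
  <property>
    <!-- must point at a NameNode that is actually running -->
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```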



Adding documentation patches to hadoop

2012-04-05 Thread Amir Sanjar
We would like to add a documentation patch (i.e. a readme file)
to explain the build process and the requirements for hadoop on POWER.
What is the process?

Best Regards
Amir Sanjar

Linux System Management Architect and Lead
IBM Senior Software Engineer
Phone# 512-286-8393
Fax#  512-838-8858




Fw: Requirements for patch review

2012-04-05 Thread Tsz Wo (Nicholas), Sze
Resent.



- Forwarded Message -
From: Tsz Wo Sze 
To: "common-dev@hadoop.apache.org" 
Cc: 
Sent: Wednesday, April 4, 2012 8:24 PM
Subject: Re: Requirements for patch review

>> The wording here is ambiguous, though, whether the committer who
>> provides the minimum one +1 may also be the author of the code change.
>> If so, that would seem to imply that committers may always make code
>> changes by merely +1ing their own patches, which seems counter to the
>> whole point of "review-then-commit". So, I'm pretty sure that's not
>> what it means.
>>
>> The question that came up, however, was whether a non-committer
>> contributor may provide a binding +1 for a patch written by a
>> committer. So, if I write a patch as a committer, and then a community
>> member reviews it, am I free to commit it without another committer
>> looking at it? My understanding has always been that this is not the
>> case, but we should clarify the by-laws if there is some ambiguity.
>>
>> I would propose the following amendments:
>> A committer may not provide a binding +1 for his or her own patch.
>> However, in the case of trivial patches only, a committer may use a +1
>> from the problem reporter or other contributor in lieu of another
>> committer's +1. The definition of a trivial patch is subject to the
>> committer's best judgment, but in general should consist of things
>> such as: documentation fixes, spelling mistakes, log message changes,
>> or additional test cases.

I agree that the bylaws are not clear about this.  For reviewing patches, my 
understanding is that any contributor, committer or not, can review patches 
and the +1 counts.  I have worked on Hadoop for almost five years, and this is 
what we have been doing for a long time (if not from the beginning of the 
Hadoop project).  Could other people confirm this?

The HowToContribute wiki does advise committers to find another committer to 
review difficult patches: "Committers: for non-trivial changes, it is best to 
get another committer to review your patches before commit. ..."  This seems 
to say that it is okay for non-committers to review simple and medium patches. 
Todd's amendments use different wording that seems to imply a different 
requirement: +1's from non-committers would count only for simple patches, 
not medium and difficult ones.

I think we should keep allowing everyone to review patches.  It slows down 
development and is discouraging if a non-committer's +1 does not count.  I 
trust the judgement of the committer who commits a patch not to commit bad 
code.  We have svn, and we can revert patches if necessary.  Lastly, if a 
committer keeps committing bad code, we can exercise "Committer Removal".

BTW, does anyone know what other Apache projects do?

PS: since this is a bylaws change discussion, should we discuss it in general@?

Regards,
Tsz-Wo