[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2016-04-22 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15255129#comment-15255129
 ] 

Roman Shaposhnik commented on HADOOP-9613:
--

Now that I'm building the Hadoop ecosystem for ARM and OpenJDK 7 is totally busted 
on that platform, I'd like to add one more "me too" to this JIRA. Is 
there any way I can help get this done (along with the other three)?


> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.008.incompatible.patch, HADOOP-9613.009.incompatible.patch, 
> HADOOP-9613.010.incompatible.patch, HADOOP-9613.011.incompatible.patch, 
> HADOOP-9613.012.incompatible.patch, HADOOP-9613.013.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8, which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11983) HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do

2015-05-17 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547371#comment-14547371
 ] 

Roman Shaposhnik commented on HADOOP-11983:
---

This looks good to me. [~aw] WDYT?

> HADOOP_USER_CLASSPATH_FIRST works the opposite of what it is supposed to do
> ---
>
> Key: HADOOP-11983
> URL: https://issues.apache.org/jira/browse/HADOOP-11983
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-11983.001.patch
>
>
> The behavior of HADOOP_USER_CLASSPATH_FIRST works the opposite of what it 
> should do. If it is not set, HADOOP_CLASSPATH is prepended. If set, it is 
> appended.
> You can easily try out by doing something like
> {noformat}
> HADOOP_CLASSPATH=/Users/alice/tmp hadoop classpath
> {noformat}
> (HADOOP_CLASSPATH should point to an existing directory)
> I think the if clause in hadoop_add_to_classpath_userpath is reversed.
> This issue seems specific to the trunk.
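The inversion described above can be sketched as a small shell experiment. The function body and variable handling here are assumptions based on the description, not the actual Hadoop code:

```shell
# Hypothetical sketch of the intended semantics of
# hadoop_add_to_classpath_userpath (assumed, not the real implementation):
hadoop_add_to_classpath_userpath() {
  if [[ -z "${HADOOP_USER_CLASSPATH_FIRST}" ]]; then
    CLASSPATH="${CLASSPATH}:${HADOOP_CLASSPATH}"   # unset: user path appended
  else
    CLASSPATH="${HADOOP_CLASSPATH}:${CLASSPATH}"   # set: user path prepended
  fi
}

CLASSPATH="share/hadoop/common/*"
HADOOP_CLASSPATH="/Users/alice/tmp"
HADOOP_USER_CLASSPATH_FIRST="yes"
hadoop_add_to_classpath_userpath
echo "${CLASSPATH}"   # /Users/alice/tmp:share/hadoop/common/*
```

The reported bug is exactly the two branches being swapped, so flipping the condition (or the two assignments) restores the documented behavior.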





[jira] [Updated] (HADOOP-9650) Update jetty dependencies

2015-05-08 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-9650:
-
Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-11941

> Update jetty dependencies 
> --
>
> Key: HADOOP-9650
> URL: https://issues.apache.org/jira/browse/HADOOP-9650
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.6.0
>Reporter: Timothy St. Clair
>Assignee: Timothy St. Clair
>  Labels: BB2015-05-TBR, build, maven
> Attachments: HADOOP-9650.patch, HADOOP-trunk-9650.patch
>
>
> Update deprecated jetty 6 dependencies, moving forward to jetty 8.  This 
> enables mvn-rpmbuild on Fedora 18 and later platforms. 





[jira] [Updated] (HADOOP-9613) Updated jersey pom dependencies

2015-05-08 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-9613:
-
Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-11941

> Updated jersey pom dependencies
> ---
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.0.0, 2.4.0
>Reporter: Timothy St. Clair
>Assignee: Timothy St. Clair
>  Labels: BB2015-05-TBR, maven
> Attachments: HADOOP-2.2.0-9613.patch, HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8, which is quite old. 





[jira] [Created] (HADOOP-11941) Umbrella JIRA for tracking maven related patches

2015-05-08 Thread Roman Shaposhnik (JIRA)
Roman Shaposhnik created HADOOP-11941:
-

 Summary: Umbrella JIRA for tracking maven related patches
 Key: HADOOP-11941
 URL: https://issues.apache.org/jira/browse/HADOOP-11941
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 3.0.0
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
Priority: Minor


This is just to aid in tracking patches that change the build in some way. I've 
identified quite a few of those during the bug bash, and this is the easiest way 
to track them all at once.





[jira] [Commented] (HADOOP-11590) Update sbin commands and documentation to use new --slaves option

2015-05-08 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14535278#comment-14535278
 ] 

Roman Shaposhnik commented on HADOOP-11590:
---

A few questions/nits:

* why this change in start-dfs.sh:
{noformat}
 #Add other possible options
-nameStartOpt="$nameStartOpt $@"
+nameStartOpt="$nameStartOpt $*"
{noformat}
* shouldn't we put a deprecation message in hadoop-daemons.sh and 
yarn-daemons.sh?
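On the first question, the practical difference between the two expansions can be shown in a couple of lines. Inside a double-quoted assignment both collapse to the same string (with the default IFS), which is presumably why either works at that spot in start-dfs.sh:

```shell
# "$@" keeps each positional parameter as its own word; "$*" joins them
# into one word.
set -- "foo bar" baz
printf '[%s]' "$@"; echo   # [foo bar][baz]  -- two words preserved
printf '[%s]' "$*"; echo   # [foo bar baz]   -- one joined word

# In a quoted string assignment the two behave the same, so the
# start-dfs.sh change is about style/shellcheck rather than behavior:
nameStartOpt="-opt $*"
echo "${nameStartOpt}"     # -opt foo bar baz
```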

Other than that -- looks good to me

> Update sbin commands and documentation to use new --slaves option
> -
>
> Key: HADOOP-11590
> URL: https://issues.apache.org/jira/browse/HADOOP-11590
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11590-00.patch, HADOOP-11590-01.patch, 
> HADOOP-11590-02.patch, HADOOP-11590-03.patch
>
>
> With HADOOP-11565 now committed, we need to remove usages of yarn-daemons.sh 
> and hadoop-daemons.sh from the start and stop scripts, converting them to use 
> the new --slaves option.  Additionally, the documentation should be updated 
> to reflect these new command options.





[jira] [Updated] (HADOOP-8307) The task-controller is not packaged in the tarball

2015-05-08 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8307:
-
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

At this point it really is a dup of HADOOP-8364, especially given that branch-1 
is not the most actively developed branch in Hadoop.

> The task-controller is not packaged in the tarball
> --
>
> Key: HADOOP-8307
> URL: https://issues.apache.org/jira/browse/HADOOP-8307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.3
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
>  Labels: BB2015-05-TBR
> Attachments: hadoop-8307.patch
>
>
> Ant, in some situations, puts artifacts such as task-controller into the 
> build/hadoop-*/ directory before the "package" target deletes it to start 
> over.





[jira] [Commented] (HADOOP-11393) Revert HADOOP_PREFIX, go back to HADOOP_HOME

2015-05-08 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14535161#comment-14535161
 ] 

Roman Shaposhnik commented on HADOOP-11393:
---

The patch is pretty straightforward and in general looks good to me. A couple 
of comments still:
   * the parts that patch the documentation need to be rebased on the current 
state of trunk
   * the hadoop-functions.sh part needs to be rebased as well
   * httpfs-env.sh, httpfs-config.sh, httpfs.sh, mapred-config.sh, and 
rumen2sls.sh still use HADOOP_PREFIX even after applying the patch

The biggest question I have is about overriding HADOOP_HOME with HADOOP_PREFIX 
unconditionally. Shouldn't we at least start issuing a deprecation warning for 
use of HADOOP_PREFIX?

[~aw] do you want to take care of the above?

> Revert HADOOP_PREFIX, go back to HADOOP_HOME
> 
>
> Key: HADOOP-11393
> URL: https://issues.apache.org/jira/browse/HADOOP-11393
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11393-00.patch
>
>
> Today, Windows and parts of the Hadoop source code still use HADOOP_HOME.  
> The switch to HADOOP_PREFIX back in 0.21 or so didn't really accomplish what 
> it was intended to do and only helped confuse the situation.
> _HOME is a much more standard suffix and is, in fact, used for everything in 
> Hadoop except for the top level project home.  I think it would be beneficial 
> to use HADOOP_HOME in the shell code as the Official(tm) variable, still 
> honoring HADOOP_PREFIX if it is set.





[jira] [Commented] (HADOOP-11393) Revert HADOOP_PREFIX, go back to HADOOP_HOME

2015-05-08 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14534939#comment-14534939
 ] 

Roman Shaposhnik commented on HADOOP-11393:
---

[~aw] first of all, I'm very much +1 on the idea of going back to HADOOP_HOME. 
I'm reviewing the patch right now. The question I have, though, is this: how do 
we advertise that the winter is coming? There's a suspicious lack of activity 
on this JIRA. Should we send a blast to a mailing list that this may be going in?

> Revert HADOOP_PREFIX, go back to HADOOP_HOME
> 
>
> Key: HADOOP-11393
> URL: https://issues.apache.org/jira/browse/HADOOP-11393
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11393-00.patch
>
>
> Today, Windows and parts of the Hadoop source code still use HADOOP_HOME.  
> The switch to HADOOP_PREFIX back in 0.21 or so didn't really accomplish what 
> it was intended to do and only helped confuse the situation.
> _HOME is a much more standard suffix and is, in fact, used for everything in 
> Hadoop except for the top level project home.  I think it would be beneficial 
> to use HADOOP_HOME in the shell code as the Official(tm) variable, still 
> honoring HADOOP_PREFIX if it is set.





[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-13 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14096400#comment-14096400
 ] 

Roman Shaposhnik commented on HADOOP-9902:
--

Just to close the loop on this from my side: at this point I'm confident in this 
change going into trunk. Sure, we may discover small hiccups, but the bulk of it 
is extremely solid.

+1

> Shell script rewrite
> 
>
> Key: HADOOP-9902
> URL: https://issues.apache.org/jira/browse/HADOOP-9902
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: releasenotes
> Fix For: 3.0.0
>
> Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
> HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
> HADOOP-9902-14.patch, HADOOP-9902-2.patch, HADOOP-9902-3.patch, 
> HADOOP-9902-4.patch, HADOOP-9902-5.patch, HADOOP-9902-6.patch, 
> HADOOP-9902-7.patch, HADOOP-9902-8.patch, HADOOP-9902-9.patch, 
> HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, more-info.txt
>
>
> Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-08 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14090936#comment-14090936
 ] 

Roman Shaposhnik commented on HADOOP-9902:
--

[~aw] hm, this is weird. I'll definitely re-try myself today. One small thing 
I've noticed: you're trying it on trunk while I was doing it with branch-2. Not 
sure if that makes any difference.

> Shell script rewrite
> 
>
> Key: HADOOP-9902
> URL: https://issues.apache.org/jira/browse/HADOOP-9902
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: releasenotes
> Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
> HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
> HADOOP-9902-14.patch, HADOOP-9902-2.patch, HADOOP-9902-3.patch, 
> HADOOP-9902-4.patch, HADOOP-9902-5.patch, HADOOP-9902-6.patch, 
> HADOOP-9902-7.patch, HADOOP-9902-8.patch, HADOOP-9902-9.patch, 
> HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, more-info.txt
>
>
> Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.





[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-07 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14090401#comment-14090401
 ] 

Roman Shaposhnik commented on HADOOP-9902:
--

[~aw] here's how to repro: you just need to rebuild the source distribution 
tarball
{noformat}
$ mvn package -Psrc -DskipTests
$ (cd /tmp/ ; tar xzvf - ) < 
./hadoop-dist/target/hadoop-2.6.0-SNAPSHOT-src.tar.gz 
$ cd /tmp/hadoop-2.6.0*
$ mvn -Dsnappy.prefix=x -Dbundle.snappy=true -Dsnappy.lib=/usr/lib64 -Pdist 
-Pnative -Psrc -Dtar -DskipTests -DskipTest -DskipITs install
{noformat}

> Shell script rewrite
> 
>
> Key: HADOOP-9902
> URL: https://issues.apache.org/jira/browse/HADOOP-9902
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: releasenotes
> Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
> HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
> HADOOP-9902-14.patch, HADOOP-9902-2.patch, HADOOP-9902-3.patch, 
> HADOOP-9902-4.patch, HADOOP-9902-5.patch, HADOOP-9902-6.patch, 
> HADOOP-9902-7.patch, HADOOP-9902-8.patch, HADOOP-9902-9.patch, 
> HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, more-info.txt
>
>
> Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.





[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-07 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14090227#comment-14090227
 ] 

Roman Shaposhnik commented on HADOOP-9902:
--

Here are my comments. The only two dealbreakers are:

# It seems that hadoop-functions.sh ends up in sbin/hadoop-functions.sh in the 
final binary assembly, but hadoop-config.sh looks for it in 
HADOOP_LIBEXEC_DIR. Related to this -- I think we need to bail in 
hadoop-config.sh if hadoop-functions.sh can't be found.
# In hadoop-common-project/hadoop-common/src/main/bin/hadoop the following 
doesn't work:
{noformat}
exec "${JAVA}" "${HADOOP_OPTS}" "${CLASS}" "$@"
{noformat}
it needs to be changed so that HADOOP_OPTS is not quoted. Otherwise the JDK 
gets confused.
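The quoting problem can be demonstrated without invoking java at all (the option values below are made up for illustration):

```shell
# With "${HADOOP_OPTS}" the whole option string reaches the command as ONE
# argument, which the JVM would reject as a single bogus option; unquoted,
# the shell word-splits it into separate options.
HADOOP_OPTS="-Xmx32m -Dhadoop.log.dir=/tmp"
count_args() { echo $#; }
count_args "${HADOOP_OPTS}"   # 1 -- a single combined argument
count_args ${HADOOP_OPTS}     # 2 -- two real options
```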

The rest of my notes are here for tracking purposes. I'd appreciate it if 
[~aw] could comment.

# HADOOPOSTYPE needs to be documented
# Are we planning to use *-env.sh for documenting all the variables that one 
may set?
# Any reason the populate_slaves_file function is in hadoop-config.sh and not 
in hadoop-functions.sh?
# The following appears to be a no-op when the value is not set (not sure why 
the extra if [[ -z is needed):
{noformat}
  # default policy file for service-level authorization 
  if [[ -z "${HADOOP_POLICYFILE}" ]]; then
HADOOP_POLICYFILE=${HADOOP_POLICYFILE:-"hadoop-policy.xml"}
  fi
{noformat}
# Any reason not to try harder and see what type -p java returns?
{noformat}
# The java implementation to use.
export JAVA_HOME=${JAVA_HOME:-"hadoop-env.sh is not configured"}
{noformat}
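To illustrate the last two notes: the `${var:-default}` expansion already yields the default when the variable is unset or empty, so the `[[ -z ]]` guard around it is redundant; and `type -p` gives one way to "try harder" for JAVA_HOME. This is only a sketch of what such a fallback could look like, not the proposed patch:

```shell
# The expansion alone covers the unset/empty case:
unset HADOOP_POLICYFILE
HADOOP_POLICYFILE=${HADOOP_POLICYFILE:-"hadoop-policy.xml"}
echo "${HADOOP_POLICYFILE}"   # hadoop-policy.xml

# Sketch: derive JAVA_HOME from the java found on PATH when it isn't set
# (approximate -- symlinks such as /usr/bin/java are not resolved here):
JAVA_BIN=$(type -p java || true)
JAVA_HOME=${JAVA_HOME:-${JAVA_BIN%/bin/java}}
```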

> Shell script rewrite
> 
>
> Key: HADOOP-9902
> URL: https://issues.apache.org/jira/browse/HADOOP-9902
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: releasenotes
> Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
> HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
> HADOOP-9902-2.patch, HADOOP-9902-3.patch, HADOOP-9902-4.patch, 
> HADOOP-9902-5.patch, HADOOP-9902-6.patch, HADOOP-9902-7.patch, 
> HADOOP-9902-8.patch, HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, 
> hadoop-9902-1.patch, more-info.txt
>
>
> Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.





[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-07 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14090008#comment-14090008
 ] 

Roman Shaposhnik commented on HADOOP-9902:
--

[~aw] that's my plan. I'm almost done with the manual review. The next step is 
Bigtop-based testing on a fully distributed cluster.

> Shell script rewrite
> 
>
> Key: HADOOP-9902
> URL: https://issues.apache.org/jira/browse/HADOOP-9902
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: releasenotes
> Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
> HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
> HADOOP-9902-2.patch, HADOOP-9902-3.patch, HADOOP-9902-4.patch, 
> HADOOP-9902-5.patch, HADOOP-9902-6.patch, HADOOP-9902-7.patch, 
> HADOOP-9902-8.patch, HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, 
> hadoop-9902-1.patch, more-info.txt
>
>
> Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.





[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-05 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14086865#comment-14086865
 ] 

Roman Shaposhnik commented on HADOOP-9902:
--

The patch is huge. It is also a needed one. I don't think it is realistic to 
expect anybody to be able to just look at the patch and predict all the potential 
changes to the current behavior, and I don't think that should be the goal of the 
review. The way I'm approaching it is this: if this were a from-scratch 
implementation, would it be reasonable?

IOW, reviewing the diff is, in my opinion, futile. Reviewing the final state of 
the code is fruitful. I'll post my comments from that point of view later on, 
but I wanted to let everybody know the scope of my review first.

> Shell script rewrite
> 
>
> Key: HADOOP-9902
> URL: https://issues.apache.org/jira/browse/HADOOP-9902
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: releasenotes
> Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
> HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
> HADOOP-9902-2.patch, HADOOP-9902-3.patch, HADOOP-9902-4.patch, 
> HADOOP-9902-5.patch, HADOOP-9902-6.patch, HADOOP-9902-7.patch, 
> HADOOP-9902-8.patch, HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, 
> hadoop-9902-1.patch, more-info.txt
>
>
> Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.





[jira] [Commented] (HADOOP-10772) Generating RPMs for common, hdfs, httpfs, mapreduce , yarn and tools

2014-07-06 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14053275#comment-14053275
 ] 

Roman Shaposhnik commented on HADOOP-10772:
---

First of all, Bigtop is definitely *not* just for x86_64. If you see bugs -- 
we'd appreciate bug reports.

Second of all, RPMs (or any other packaging) are not just about wrapping bits 
into an archive. If that's all you want, you know where to get fpm and can run 
it on the result of a Hadoop build (a single command and you're done!). Proper 
packaging requires hooks into the underlying OS such as init.d/systemd scripts, 
etc. I'm not sure Hadoop as a project would be perfectly suited for maintaining 
those.

> Generating RPMs for common, hdfs, httpfs, mapreduce , yarn and tools 
> -
>
> Key: HADOOP-10772
> URL: https://issues.apache.org/jira/browse/HADOOP-10772
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-10772.patch
>
>
> Generating RPMs for hadoop-common, hadoop-hdfs, hadoop-hdfs-httpfs, 
> hadoop-mapreduce , hadoop-yarn-project and hadoop-tools-dist with dist build 
> profile.





[jira] [Commented] (HADOOP-10772) Generating RPMs for common, hdfs, httpfs, mapreduce , yarn and tools

2014-07-01 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14049567#comment-14049567
 ] 

Roman Shaposhnik commented on HADOOP-10772:
---

How is this going to be compatible with existing packaging efforts (that every 
commercial vendor is using) coming from Apache Bigtop?

> Generating RPMs for common, hdfs, httpfs, mapreduce , yarn and tools 
> -
>
> Key: HADOOP-10772
> URL: https://issues.apache.org/jira/browse/HADOOP-10772
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Jinghui Wang
>Assignee: Jinghui Wang
> Attachments: HADOOP-10772.patch
>
>
> Generating RPMs for hadoop-common, hadoop-hdfs, hadoop-hdfs-httpfs, 
> hadoop-mapreduce , hadoop-yarn-project and hadoop-tools-dist with dist build 
> profile.





[jira] [Commented] (HADOOP-10397) No 64-bit native lib in Hadoop releases

2014-03-10 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13926183#comment-13926183
 ] 

Roman Shaposhnik commented on HADOOP-10397:
---

I would be *very* cautious about setting expectations for having native 
libraries released as part of the tarball. In fact, I'd rather have them 
dropped completely. Otherwise you have to guarantee that you build on the 
lowest common denominator of the Linux OSes people are using, and you 
essentially pin down your build environment -- something I'm not sure all 
future Hadoop RMs are willing to do.

> No 64-bit native lib in Hadoop releases
> ---
>
> Key: HADOOP-10397
> URL: https://issues.apache.org/jira/browse/HADOOP-10397
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Zhijie Shen
>
> Recently, I had a chance to talk to a Hadoop user, who complained there's no 
> 64-bit native lib in Hadoop releases, and that it was user-unfriendly to make 
> them download all the dependencies to build 64-bit themselves.
> Hence I checked the recent two releases, 2.2 and 2.3, whose native lib are 
> both ELF 32-bit LSB shared objects. I'm not aware of the reason why we don't 
> release 64-bit, but I'd like to open the ticket to tackle this issue given we 
> didn't before.





[jira] [Updated] (HADOOP-9962) in order to avoid dependency divergence within Hadoop itself lets enable DependencyConvergence

2013-09-14 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-9962:
-

Attachment: HADOOP-9962.patch.txt

Without HADOOP-9961, the build is going to fail with this enabled. Please 
review, though.

> in order to avoid dependency divergence within Hadoop itself lets enable 
> DependencyConvergence
> --
>
> Key: HADOOP-9962
> URL: https://issues.apache.org/jira/browse/HADOOP-9962
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0, 2.1.1-beta
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Attachments: HADOOP-9962.patch.txt
>
>
> In order to avoid the likes of HADOOP-9961 it may be useful for us to enable 
> DependencyConvergence check in  maven-enforcer-plugin.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-9961) versions of a few transitive dependencies diverged between hadoop subprojects

2013-09-14 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-9961:
-

Attachment: HADOOP-9961.patch.txt

Here's a patch that harmonizes everything the enforcer plugin complains about 
(see HADOOP-9962). It produces the dist tarball with dependency versions as 
close to the current ones as possible.

> versions of a few transitive dependencies diverged between hadoop subprojects
> -
>
> Key: HADOOP-9961
> URL: https://issues.apache.org/jira/browse/HADOOP-9961
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
>Priority: Minor
> Attachments: HADOOP-9961.patch.txt
>
>
> I've noticed a few divergences between secondary dependencies of the various 
> hadoop subprojects. For example:
> {noformat}
> [ERROR]
> Dependency convergence error for org.apache.commons:commons-compress:1.4.1 
> paths to dependency are:
> +-org.apache.hadoop:hadoop-client:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-20130913.204420-3360
> +-org.apache.avro:avro:1.7.4
>   +-org.apache.commons:commons-compress:1.4.1
> and
> +-org.apache.hadoop:hadoop-client:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-20130913.204420-3360
> +-org.apache.commons:commons-compress:1.4
> {noformat}



[jira] [Updated] (HADOOP-9961) versions of a few transitive dependencies diverged between hadoop subprojects

2013-09-14 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-9961:
-

Status: Patch Available  (was: Open)

> versions of a few transitive dependencies diverged between hadoop subprojects
> -
>
> Key: HADOOP-9961
> URL: https://issues.apache.org/jira/browse/HADOOP-9961
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
>Priority: Minor
> Attachments: HADOOP-9961.patch.txt
>
>
> I've noticed a few divergences between secondary dependencies of the various 
> hadoop subprojects. For example:
> {noformat}
> [ERROR]
> Dependency convergence error for org.apache.commons:commons-compress:1.4.1 
> paths to dependency are:
> +-org.apache.hadoop:hadoop-client:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-20130913.204420-3360
> +-org.apache.avro:avro:1.7.4
>   +-org.apache.commons:commons-compress:1.4.1
> and
> +-org.apache.hadoop:hadoop-client:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-20130913.204420-3360
> +-org.apache.commons:commons-compress:1.4
> {noformat}



[jira] [Commented] (HADOOP-9961) versions of a few transitive dependencies diverged between hadoop subprojects

2013-09-13 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13767132#comment-13767132
 ] 

Roman Shaposhnik commented on HADOOP-9961:
--

Here's a full list of divergent dependencies:

{noformat}
Dependency convergence error for org.apache.commons:commons-compress:1.4.1 
paths to dependency are:
+-org.apache.hadoop:hadoop-client:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-20130913.204420-3360
+-org.apache.avro:avro:1.7.4
  +-org.apache.commons:commons-compress:1.4.1
and
+-org.apache.hadoop:hadoop-client:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-20130913.204420-3360
+-org.apache.commons:commons-compress:1.4

Dependency convergence error for org.jboss.netty:netty:3.2.4.Final paths to 
dependency are:
+-org.apache.hadoop.contrib:hadoop-hdfs-bkjournal:3.0.0-SNAPSHOT
  +-org.apache.bookkeeper:bookkeeper-server:4.0.0
+-org.jboss.netty:netty:3.2.4.Final
and
+-org.apache.hadoop.contrib:hadoop-hdfs-bkjournal:3.0.0-SNAPSHOT
  +-org.apache.zookeeper:zookeeper:3.4.2
+-org.jboss.netty:netty:3.2.2.Final


Dependency convergence error for io.netty:netty:3.5.11.Final paths to 
dependency are:
+-org.apache.hadoop:hadoop-hdfs-nfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-nfs:3.0.0-SNAPSHOT
+-io.netty:netty:3.5.11.Final
and
+-org.apache.hadoop:hadoop-hdfs-nfs:3.0.0-SNAPSHOT
  +-io.netty:netty:3.6.2.Final


Dependency convergence error for org.glassfish:javax.servlet:3.0 paths to 
dependency are:
+-org.apache.hadoop:hadoop-mapreduce-examples:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-yarn-server-tests:3.0.0-SNAPSHOT
+-com.sun.jersey.jersey-test-framework:jersey-test-framework-grizzly2:1.8
  +-com.sun.jersey.jersey-test-framework:jersey-test-framework-core:1.8
+-org.glassfish:javax.servlet:3.0
and
+-org.apache.hadoop:hadoop-mapreduce-examples:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-yarn-server-tests:3.0.0-SNAPSHOT
+-com.sun.jersey.jersey-test-framework:jersey-test-framework-grizzly2:1.8
  +-com.sun.jersey:jersey-grizzly2:1.8
+-org.glassfish:javax.servlet:3.1

Dependency convergence error for org.codehaus.plexus:plexus-utils:2.0.4 paths 
to dependency are:
+-org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT
  +-org.apache.maven:maven-plugin-api:3.0
+-org.apache.maven:maven-model:3.0
  +-org.codehaus.plexus:plexus-utils:2.0.4
and
+-org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT
  +-org.apache.maven:maven-plugin-api:3.0
+-org.apache.maven:maven-artifact:3.0
  +-org.codehaus.plexus:plexus-utils:2.0.4
and
+-org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT
  +-org.apache.maven:maven-plugin-api:3.0
+-org.sonatype.sisu:sisu-inject-plexus:1.4.2
  +-org.codehaus.plexus:plexus-utils:2.0.5
and
+-org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT
  +-org.apache.maven:maven-core:3.0
+-org.apache.maven:maven-settings:3.0
  +-org.codehaus.plexus:plexus-utils:2.0.4
and
+-org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT
  +-org.apache.maven:maven-core:3.0
+-org.apache.maven:maven-settings-builder:3.0
  +-org.codehaus.plexus:plexus-utils:2.0.4
and
+-org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT
  +-org.apache.maven:maven-core:3.0
+-org.apache.maven:maven-repository-metadata:3.0
  +-org.codehaus.plexus:plexus-utils:2.0.4
and
+-org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT
  +-org.apache.maven:maven-core:3.0
+-org.apache.maven:maven-model-builder:3.0
  +-org.codehaus.plexus:plexus-utils:2.0.4
and
+-org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT
  +-org.apache.maven:maven-core:3.0
+-org.codehaus.plexus:plexus-utils:2.0.4
and
+-org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT
  +-org.apache.maven:maven-core:3.0
+-org.sonatype.plexus:plexus-sec-dispatcher:1.3
  +-org.codehaus.plexus:plexus-utils:1.5.5

Dependency convergence error for 
org.codehaus.plexus:plexus-component-annotations:1.5.4 paths to dependency are:
+-org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT
  +-org.apache.maven:maven-plugin-api:3.0
+-org.sonatype.sisu:sisu-inject-plexus:1.4.2
  +-org.codehaus.plexus:plexus-component-annotations:1.5.4
and
+-org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT
  +-org.apache.maven:maven-core:3.0
+-org.apache.maven:maven-settings-builder:3.0
  +-org.codehaus.plexus:plexus-component-annotations:1.5.5
and
+-org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT
  +-org.apache.maven:maven-core:3.0
+-org.apache.maven:maven-model-builder:3.0
  +-org.codehaus.plexus:plexus-component-annotations:1.5.5
and
+-org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT
  +-org.apache.maven:maven-core:3.0
+-org.apache.maven:maven-aether-provider:3.0
  +-org.codehaus.plexus:plexus-component-annotations:1.5.5
and
+-org.apache.hadoop:hadoop-maven-plugins:3.0.0-SNAPSHOT
  +-org.apache.maven:maven-core

[jira] [Created] (HADOOP-9962) in order to avoid dependency divergence within Hadoop itself lets enable DependencyConvergence

2013-09-13 Thread Roman Shaposhnik (JIRA)
Roman Shaposhnik created HADOOP-9962:


 Summary: in order to avoid dependency divergence within Hadoop 
itself lets enable DependencyConvergence
 Key: HADOOP-9962
 URL: https://issues.apache.org/jira/browse/HADOOP-9962
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik


In order to avoid the likes of HADOOP-9961, it may be useful for us to enable 
the DependencyConvergence check in maven-enforcer-plugin.
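For reference, a minimal sketch of how such a check could be wired into the root 
pom.xml. The plugin version and execution id below are illustrative, not taken 
from an actual Hadoop patch:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>1.3.1</version>
  <executions>
    <execution>
      <id>enforce-dependency-convergence</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <!-- Fail the build when two modules pull in different
               versions of the same transitive dependency. -->
          <dependencyConvergence/>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With this in place, a divergence like the commons-compress one above fails the 
build with exactly the kind of "Dependency convergence error" report quoted in 
HADOOP-9961.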

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-9961) versions of a few transitive dependencies diverged between hadoop subprojects

2013-09-13 Thread Roman Shaposhnik (JIRA)
Roman Shaposhnik created HADOOP-9961:


 Summary: versions of a few transitive dependencies diverged 
between hadoop subprojects
 Key: HADOOP-9961
 URL: https://issues.apache.org/jira/browse/HADOOP-9961
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
Priority: Minor


I've noticed a few divergences between secondary dependencies of the various 
hadoop subprojects. For example:
{noformat}
[ERROR]
Dependency convergence error for org.apache.commons:commons-compress:1.4.1 
paths to dependency are:
+-org.apache.hadoop:hadoop-client:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-20130913.204420-3360
+-org.apache.avro:avro:1.7.4
  +-org.apache.commons:commons-compress:1.4.1
and
+-org.apache.hadoop:hadoop-client:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-20130913.204420-3360
+-org.apache.commons:commons-compress:1.4
{noformat}



[jira] [Commented] (HADOOP-9923) yarn staging area on hdfs has wrong permission and is created by the wrong user

2013-08-30 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754969#comment-13754969
 ] 

Roman Shaposhnik commented on HADOOP-9923:
--

I've always thought that it is better to initialize HDFS before the first use, 
just like we've come to expect to initialize a freshly formatted root Linux 
filesystem with something like the 'base-files' package in Debian. In fact, Bigtop 
provides a special initialization script just for that purpose, so that users 
of Bigtop-derived distributions of Hadoop can simply run init-hdfs and not 
worry about the minute details of exact perms/ownerships.

You can take a look at our implementation over here: 
https://git-wip-us.apache.org/repos/asf?p=bigtop.git;a=blob;f=bigtop-packages/src/common/hadoop/init-hdfs.sh;h=bc96761cef604a6bb42fc09e7d439b8250993973;hb=HEAD

We also plan to improve it for Bigtop 0.7.0 release.
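To illustrate the kind of work init-hdfs.sh does, here is a minimal dry-run 
sketch. The directory names, owners, and modes are representative examples, not 
the exact Bigtop list, and the `run` helper only echoes each command so the 
sketch is safe to execute anywhere; drop the echo to run it for real as the 
hdfs superuser:

```shell
# Pre-create HDFS directories with the right ownership and
# permissions before any daemon (e.g. the job history server)
# gets a chance to create them with the wrong ones.
run() { echo "$@"; }

# World-writable tmp with the sticky bit:
run hdfs dfs -mkdir -p /tmp
run hdfs dfs -chmod 1777 /tmp

# Example staging/history area (path and owner are illustrative):
run hdfs dfs -mkdir -p /user/history
run hdfs dfs -chown mapred:hadoop /user/history
run hdfs dfs -chmod 1777 /user/history
```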
   

> yarn staging area on hdfs has wrong permission and is created by the wrong 
> user
> ---
>
> Key: HADOOP-9923
> URL: https://issues.apache.org/jira/browse/HADOOP-9923
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: André Kelpe
>
> I am setting up a cluster with hadoop 2.1-beta that consists of the following 
> components:
> master: runs the namenode, the resourcemanager and the job history server.
> hadoop1, hadoop2, hadoop3: run datanodes and node managers
> I created 3 system users for the different components, like explained in the 
> docs:
> hdfs: runs all things hdfs
> yarn: runs all things yarn
> mapred: runs the job history server
> If I now boot up the cluster, I cannot submit jobs since the yarn staging 
> area permissions do not allow it.
> What I found out is that the job-history-server creates the staging 
> directory while starting. This first of all causes it to be owned by the 
> wrong user (mapred) and to have the wrong permission (770). The docs are not 
> really clear if I am supposed to start hdfs first, then create the staging 
> area by hand and then start the job-history-server or if this is supposed to 
> happen automatically by itself.
> In any case, either the docs should be updated or the job-history-server 
> should not create the directory.



[jira] [Commented] (HADOOP-9917) cryptic warning when killing a job running with yarn

2013-08-29 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13754369#comment-13754369
 ] 

Roman Shaposhnik commented on HADOOP-9917:
--

The CLI change simply reflects the fact that the 3 parts of Hadoop 2.x (HDFS, YARN, 
MR) are actually independent of each other, to the degree that one can, let's say, 
use YARN but not MR, or even use YARN/MR over an alternative filesystem 
implementation. Hence the split of functionality into separate scripts.

> cryptic warning when killing a job running with yarn
> 
>
> Key: HADOOP-9917
> URL: https://issues.apache.org/jira/browse/HADOOP-9917
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.1.0-beta
>Reporter: André Kelpe
>Assignee: Roman Shaposhnik
>Priority: Minor
> Attachments: HADOOP-9917.patch.txt
>
>
> When I am killing a job like this
> hadoop job -kill 
> I am getting a cryptic warning, which I don't really understand:
> DEPRECATED: Use of this script to execute mapred command is deprecated.
> Instead use the mapred command for it.
> I fail parsing this and I believe many others will do too. Please make this 
> warning clearer.



[jira] [Updated] (HADOOP-9917) cryptic warning when killing a job running with yarn

2013-08-29 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-9917:
-

Status: Patch Available  (was: Open)

> cryptic warning when killing a job running with yarn
> 
>
> Key: HADOOP-9917
> URL: https://issues.apache.org/jira/browse/HADOOP-9917
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.1.0-beta
>Reporter: André Kelpe
>Priority: Minor
> Attachments: HADOOP-9917.patch.txt
>
>
> When I am killing a job like this
> hadoop job -kill 
> I am getting a cryptic warning, which I don't really understand:
> DEPRECATED: Use of this script to execute mapred command is deprecated.
> Instead use the mapred command for it.
> I fail parsing this and I believe many others will do too. Please make this 
> warning clearer.



[jira] [Updated] (HADOOP-9917) cryptic warning when killing a job running with yarn

2013-08-29 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-9917:
-

Attachment: HADOOP-9917.patch.txt

> cryptic warning when killing a job running with yarn
> 
>
> Key: HADOOP-9917
> URL: https://issues.apache.org/jira/browse/HADOOP-9917
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.1.0-beta
>Reporter: André Kelpe
>Priority: Minor
> Attachments: HADOOP-9917.patch.txt
>
>
> When I am killing a job like this
> hadoop job -kill 
> I am getting a cryptic warning, which I don't really understand:
> DEPRECATED: Use of this script to execute mapred command is deprecated.
> Instead use the mapred command for it.
> I fail parsing this and I believe many others will do too. Please make this 
> warning clearer.



[jira] [Assigned] (HADOOP-9917) cryptic warning when killing a job running with yarn

2013-08-29 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik reassigned HADOOP-9917:


Assignee: Roman Shaposhnik

> cryptic warning when killing a job running with yarn
> 
>
> Key: HADOOP-9917
> URL: https://issues.apache.org/jira/browse/HADOOP-9917
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.1.0-beta
>Reporter: André Kelpe
>Assignee: Roman Shaposhnik
>Priority: Minor
> Attachments: HADOOP-9917.patch.txt
>
>
> When I am killing a job like this
> hadoop job -kill 
> I am getting a cryptic warning, which I don't really understand:
> DEPRECATED: Use of this script to execute mapred command is deprecated.
> Instead use the mapred command for it.
> I fail parsing this and I believe many others will do too. Please make this 
> warning clearer.



[jira] [Commented] (HADOOP-9917) cryptic warning when killing a job running with yarn

2013-08-29 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753759#comment-13753759
 ] 

Roman Shaposhnik commented on HADOOP-9917:
--

It simply means that users are encouraged to start using the 'mapred' command for 
manipulating their MR jobs instead of the top-level 'hadoop' command. Perhaps 
just adding quotes (like I did in the preceding sentence) around 'mapred' to make 
it stand out as the name of a command to use would do? Or mapred(1)?
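As a concrete illustration, the two invocations side by side. The job ID is a 
made-up placeholder, and the actual cluster commands are left commented out 
since they need a running cluster:

```shell
# Hypothetical job ID, used purely for illustration.
JOB_ID="job_201308291234_0001"

# Deprecated invocation -- emits the warning quoted above
# before delegating to mapred:
#   hadoop job -kill "$JOB_ID"

# Preferred invocation, via the dedicated MR script:
#   mapred job -kill "$JOB_ID"
echo "preferred: mapred job -kill $JOB_ID"
```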

> cryptic warning when killing a job running with yarn
> 
>
> Key: HADOOP-9917
> URL: https://issues.apache.org/jira/browse/HADOOP-9917
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.1.0-beta
>Reporter: André Kelpe
>Priority: Minor
>
> When I am killing a job like this
> hadoop job -kill 
> I am getting a cryptic warning, which I don't really understand:
> DEPRECATED: Use of this script to execute mapred command is deprecated.
> Instead use the mapred command for it.
> I fail parsing this and I believe many others will do too. Please make this 
> warning clearer.



[jira] [Commented] (HADOOP-9917) cryptic warning when killing a job running with yarn

2013-08-29 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13753747#comment-13753747
 ] 

Roman Shaposhnik commented on HADOOP-9917:
--

Would you be able to suggest a better wording?

> cryptic warning when killing a job running with yarn
> 
>
> Key: HADOOP-9917
> URL: https://issues.apache.org/jira/browse/HADOOP-9917
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.1.0-beta
>Reporter: André Kelpe
>Priority: Minor
>
> When I am killing a job like this
> hadoop job -kill 
> I am getting a cryptic warning, which I don't really understand:
> DEPRECATED: Use of this script to execute mapred command is deprecated.
> Instead use the mapred command for it.
> I fail parsing this and I believe many others will do too. Please make this 
> warning clearer.



[jira] [Commented] (HADOOP-9911) hadoop 2.1.0-beta tarball only contains 32bit native libraries

2013-08-28 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752700#comment-13752700
 ] 

Roman Shaposhnik commented on HADOOP-9911:
--

There was a change in 2.x in the layout of where the native binaries are coming 
from -- they are no longer under an arch-specific folder. 

> hadoop 2.1.0-beta tarball only contains 32bit native libraries
> --
>
> Key: HADOOP-9911
> URL: https://issues.apache.org/jira/browse/HADOOP-9911
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: André Kelpe
>
> I am setting up a cluster on 64bit linux and I noticed, that the tarball only 
> ships with 32 bit libraries:
> $ pwd
> /opt/hadoop-2.1.0-beta/lib/native
> $ ls -al
> total 2376
> drwxr-xr-x 2 67974 users   4096 Aug 15 20:59 .
> drwxr-xr-x 3 67974 users   4096 Aug 15 20:59 ..
> -rw-r--r-- 1 67974 users 598578 Aug 15 20:59 libhadoop.a
> -rw-r--r-- 1 67974 users 764772 Aug 15 20:59 libhadooppipes.a
> lrwxrwxrwx 1 67974 users 18 Aug 15 20:59 libhadoop.so -> 
> libhadoop.so.1.0.0
> -rwxr-xr-x 1 67974 users 407568 Aug 15 20:59 libhadoop.so.1.0.0
> -rw-r--r-- 1 67974 users 304632 Aug 15 20:59 libhadooputils.a
> -rw-r--r-- 1 67974 users 184414 Aug 15 20:59 libhdfs.a
> lrwxrwxrwx 1 67974 users 16 Aug 15 20:59 libhdfs.so -> libhdfs.so.0.0.0
> -rwxr-xr-x 1 67974 users 149556 Aug 15 20:59 libhdfs.so.0.0.0
> $ file *
> libhadoop.a:current ar archive
> libhadooppipes.a:   current ar archive
> libhadoop.so:   symbolic link to `libhadoop.so.1.0.0'
> libhadoop.so.1.0.0: ELF 32-bit LSB shared object, Intel 80386, version 1 
> (SYSV), dynamically linked, 
> BuildID[sha1]=0x527e88ec3e92a95389839bd3fc9d7dbdebf654d6, not stripped
> libhadooputils.a:   current ar archive
> libhdfs.a:  current ar archive
> libhdfs.so: symbolic link to `libhdfs.so.0.0.0'
> libhdfs.so.0.0.0:   ELF 32-bit LSB shared object, Intel 80386, version 1 
> (SYSV), dynamically linked, 
> BuildID[sha1]=0xddb2abae9272f584edbe22c76b43fcda9436f685, not stripped
> $ hadoop checknative
> 13/08/28 18:11:17 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Native library checking:
> hadoop: false 
> zlib:   false 
> snappy: false 
> lz4:false 
> bzip2:  false 
> 13/08/28 18:11:17 INFO util.ExitUtil: Exiting with status 1



[jira] [Commented] (HADOOP-9911) hadoop 2.1.0-beta tarball only contains 32bit native libraries

2013-08-28 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13752663#comment-13752663
 ] 

Roman Shaposhnik commented on HADOOP-9911:
--

I guess a bigger question is whether it actually even makes sense to ship 
native bits at all. I'm afraid there's no way that Hadoop binary convenience 
artifacts can satisfy all the possible Linux combinations out there. 32bit vs. 
64bit is just one aspect of it.
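One way for a user to spot the mismatch up front is to compare the host 
architecture against the ELF class of the bundled libhadoop, as the reporter 
did with file(1). A small sketch, with the install prefix taken from the 
report above as an assumption:

```shell
# Report the host architecture and the ELF class of the bundled
# native library, so a 32-bit-on-64-bit mismatch is obvious.
NATIVE_LIB="${HADOOP_HOME:-/opt/hadoop-2.1.0-beta}/lib/native/libhadoop.so.1.0.0"

echo "host arch: $(uname -m)"
if [ -e "$NATIVE_LIB" ]; then
  # 'file -L' follows symlinks and prints e.g.
  # 'ELF 32-bit LSB shared object, Intel 80386, ...'
  file -L "$NATIVE_LIB"
else
  echo "library not found: $NATIVE_LIB"
fi
```

On an x86_64 host, a "32-bit" line here explains the `hadoop checknative` 
failures quoted below.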

> hadoop 2.1.0-beta tarball only contains 32bit native libraries
> --
>
> Key: HADOOP-9911
> URL: https://issues.apache.org/jira/browse/HADOOP-9911
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta
>Reporter: André Kelpe
>
> I am setting up a cluster on 64bit linux and I noticed, that the tarball only 
> ships with 32 bit libraries:
> $ pwd
> /opt/hadoop-2.1.0-beta/lib/native
> $ ls -al
> total 2376
> drwxr-xr-x 2 67974 users   4096 Aug 15 20:59 .
> drwxr-xr-x 3 67974 users   4096 Aug 15 20:59 ..
> -rw-r--r-- 1 67974 users 598578 Aug 15 20:59 libhadoop.a
> -rw-r--r-- 1 67974 users 764772 Aug 15 20:59 libhadooppipes.a
> lrwxrwxrwx 1 67974 users 18 Aug 15 20:59 libhadoop.so -> 
> libhadoop.so.1.0.0
> -rwxr-xr-x 1 67974 users 407568 Aug 15 20:59 libhadoop.so.1.0.0
> -rw-r--r-- 1 67974 users 304632 Aug 15 20:59 libhadooputils.a
> -rw-r--r-- 1 67974 users 184414 Aug 15 20:59 libhdfs.a
> lrwxrwxrwx 1 67974 users 16 Aug 15 20:59 libhdfs.so -> libhdfs.so.0.0.0
> -rwxr-xr-x 1 67974 users 149556 Aug 15 20:59 libhdfs.so.0.0.0
> $ file *
> libhadoop.a:current ar archive
> libhadooppipes.a:   current ar archive
> libhadoop.so:   symbolic link to `libhadoop.so.1.0.0'
> libhadoop.so.1.0.0: ELF 32-bit LSB shared object, Intel 80386, version 1 
> (SYSV), dynamically linked, 
> BuildID[sha1]=0x527e88ec3e92a95389839bd3fc9d7dbdebf654d6, not stripped
> libhadooputils.a:   current ar archive
> libhdfs.a:  current ar archive
> libhdfs.so: symbolic link to `libhdfs.so.0.0.0'
> libhdfs.so.0.0.0:   ELF 32-bit LSB shared object, Intel 80386, version 1 
> (SYSV), dynamically linked, 
> BuildID[sha1]=0xddb2abae9272f584edbe22c76b43fcda9436f685, not stripped
> $ hadoop checknative
> 13/08/28 18:11:17 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Native library checking:
> hadoop: false 
> zlib:   false 
> snappy: false 
> lz4:false 
> bzip2:  false 
> 13/08/28 18:11:17 INFO util.ExitUtil: Exiting with status 1



[jira] [Commented] (HADOOP-9676) make maximum RPC buffer size configurable

2013-07-01 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13697368#comment-13697368
 ] 

Roman Shaposhnik commented on HADOOP-9676:
--

Tested this patch on top of branch-2.1 with Bigtop -- the biggest issue (NN 
OOMing) is now gone, but a few subtests from TestCLI still fail. A big +1 to 
have this patch as part of 2.1

> make maximum RPC buffer size configurable
> -
>
> Key: HADOOP-9676
> URL: https://issues.apache.org/jira/browse/HADOOP-9676
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.1.0-beta
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HADOOP-9676.001.patch, HADOOP-9676.003.patch
>
>
> Currently the RPC server just allocates however much memory the client asks 
> for, without validating.  It would be nice to make the maximum RPC buffer 
> size configurable.  This would prevent a rogue client from bringing down the 
> NameNode (or other Hadoop daemon) with a few requests for 2 GB buffers.  It 
> would also make it easier to debug issues with super-large RPCs or malformed 
> headers, since OOMs can be difficult for developers to reproduce.
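A sketch of what such a cap could look like in core-site.xml. The property name 
`ipc.maximum.data.length` is assumed from this issue's eventual resolution, and 
the value shown is illustrative, not a recommendation:

```xml
<!-- core-site.xml: cap the size of a single incoming RPC request
     so a rogue client cannot make the server allocate huge buffers.
     Property name assumed; value (128 MB) is illustrative only. -->
<property>
  <name>ipc.maximum.data.length</name>
  <value>134217728</value>
</property>
```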



[jira] [Commented] (HADOOP-9654) IPC timeout doesn't seem to be kicking in

2013-06-20 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13689946#comment-13689946
 ] 

Roman Shaposhnik commented on HADOOP-9654:
--

[~jagane] as a matter of fact I didn't know that -- thanks a million for 
bringing this up! I can definitely give your suggestion a try (the NN keeps 
OOMing -- which gives me a perfect testbed for this).

I do have a question for the rest of the folks here though -- a client that 
never times out doesn't strike me as a great default. Am I missing something? 
Should we change the default for the client to actually timeout?

> IPC timeout doesn't seem to be kicking in
> -
>
> Key: HADOOP-9654
> URL: https://issues.apache.org/jira/browse/HADOOP-9654
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.1.0-beta
>Reporter: Roman Shaposhnik
>
> During my Bigtop testing I made the NN OOM. This, in turn, made all of the 
> clients stuck in the IPC call (even the new clients that I run *after* the NN 
> went OOM). Here's an example of a jstack output on the client that was 
> running:
> {noformat}
> $ hadoop fs -lsr /
> {noformat}
> Stacktrace:
> {noformat}
> /usr/java/jdk1.6.0_21/bin/jstack 19078
> 2013-06-19 23:14:00
> Full thread dump Java HotSpot(TM) 64-Bit Server VM (17.0-b16 mixed mode):
> "Attach Listener" daemon prio=10 tid=0x7fcd8c8c1800 nid=0x5105 waiting on 
> condition [0x]
>java.lang.Thread.State: RUNNABLE
> "IPC Client (1223039541) connection to 
> ip-10-144-82-213.ec2.internal/10.144.82.213:17020 from root" daemon prio=10 
> tid=0x7fcd8c7ea000 nid=0x4aa0 runnable [0x7fcd443e2000]
>java.lang.Thread.State: RUNNABLE
>   at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
>   at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
>   at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
>   at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
>   - locked <0x7fcd7529de18> (a sun.nio.ch.Util$1)
>   - locked <0x7fcd7529de00> (a java.util.Collections$UnmodifiableSet)
>   - locked <0x7fcd7529da80> (a sun.nio.ch.EPollSelectorImpl)
>   at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
>   at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>   at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
>   at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
>   at java.io.FilterInputStream.read(FilterInputStream.java:116)
>   at java.io.FilterInputStream.read(FilterInputStream.java:116)
>   at 
> org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:421)
>   at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
>   at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
>   - locked <0x7fcd752aaf18> (a java.io.BufferedInputStream)
>   at java.io.DataInputStream.readInt(DataInputStream.java:370)
>   at 
> org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:943)
>   at org.apache.hadoop.ipc.Client$Connection.run(Client.java:840)
> "Low Memory Detector" daemon prio=10 tid=0x7fcd8c09 nid=0x4a9b 
> runnable [0x]
>java.lang.Thread.State: RUNNABLE
> "CompilerThread1" daemon prio=10 tid=0x7fcd8c08d800 nid=0x4a9a waiting on 
> condition [0x]
>java.lang.Thread.State: RUNNABLE
> "CompilerThread0" daemon prio=10 tid=0x7fcd8c08a800 nid=0x4a99 waiting on 
> condition [0x]
>java.lang.Thread.State: RUNNABLE
> "Signal Dispatcher" daemon prio=10 tid=0x7fcd8c088800 nid=0x4a98 runnable 
> [0x]
>java.lang.Thread.State: RUNNABLE
> "Finalizer" daemon prio=10 tid=0x7fcd8c06a000 nid=0x4a97 in Object.wait() 
> [0x7fcd902e9000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x7fcd75fc0470> (a java.lang.ref.ReferenceQueue$Lock)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
>   - locked <0x7fcd75fc0470> (a java.lang.ref.ReferenceQueue$Lock)
>   at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
>   at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)
> "Reference Handler" daemon prio=10 tid=0x7fcd8c068000 nid=0x4a96 in 
> Object.wait() [0x7fcd903ea000]
>java.lang.Thread.State: WAITING (on object monitor)
>   at java.lang.Object.wait(Native Method)
>   - waiting on <0x7fcd75fc0550> (a java.lang.ref.Reference$Lock)
>   at ja

[jira] [Created] (HADOOP-9654) IPC timeout doesn't seem to be kicking in

2013-06-19 Thread Roman Shaposhnik (JIRA)
Roman Shaposhnik created HADOOP-9654:


 Summary: IPC timeout doesn't seem to be kicking in
 Key: HADOOP-9654
 URL: https://issues.apache.org/jira/browse/HADOOP-9654
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.1.0-beta
Reporter: Roman Shaposhnik


During my Bigtop testing I made the NN OOM. This, in turn, made all of the 
clients stuck in the IPC call (even the new clients that I run *after* the NN 
went OOM). Here's an example of a jstack output on the client that was running:

{noformat}
$ hadoop fs -lsr /
{noformat}

Stacktrace:

{noformat}
/usr/java/jdk1.6.0_21/bin/jstack 19078
2013-06-19 23:14:00
Full thread dump Java HotSpot(TM) 64-Bit Server VM (17.0-b16 mixed mode):

"Attach Listener" daemon prio=10 tid=0x7fcd8c8c1800 nid=0x5105 waiting on 
condition [0x]
   java.lang.Thread.State: RUNNABLE

"IPC Client (1223039541) connection to 
ip-10-144-82-213.ec2.internal/10.144.82.213:17020 from root" daemon prio=10 
tid=0x7fcd8c7ea000 nid=0x4aa0 runnable [0x7fcd443e2000]
   java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:210)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
- locked <0x7fcd7529de18> (a sun.nio.ch.Util$1)
- locked <0x7fcd7529de00> (a java.util.Collections$UnmodifiableSet)
- locked <0x7fcd7529da80> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
at 
org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at 
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
at java.io.FilterInputStream.read(FilterInputStream.java:116)
at java.io.FilterInputStream.read(FilterInputStream.java:116)
at 
org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:421)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
- locked <0x7fcd752aaf18> (a java.io.BufferedInputStream)
at java.io.DataInputStream.readInt(DataInputStream.java:370)
at 
org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:943)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:840)

"Low Memory Detector" daemon prio=10 tid=0x7fcd8c09 nid=0x4a9b runnable 
[0x]
   java.lang.Thread.State: RUNNABLE

"CompilerThread1" daemon prio=10 tid=0x7fcd8c08d800 nid=0x4a9a waiting on 
condition [0x]
   java.lang.Thread.State: RUNNABLE

"CompilerThread0" daemon prio=10 tid=0x7fcd8c08a800 nid=0x4a99 waiting on 
condition [0x]
   java.lang.Thread.State: RUNNABLE

"Signal Dispatcher" daemon prio=10 tid=0x7fcd8c088800 nid=0x4a98 runnable 
[0x]
   java.lang.Thread.State: RUNNABLE

"Finalizer" daemon prio=10 tid=0x7fcd8c06a000 nid=0x4a97 in Object.wait() 
[0x7fcd902e9000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x7fcd75fc0470> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:118)
- locked <0x7fcd75fc0470> (a java.lang.ref.ReferenceQueue$Lock)
at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:134)
at java.lang.ref.Finalizer$FinalizerThread.run(Finalizer.java:159)

"Reference Handler" daemon prio=10 tid=0x7fcd8c068000 nid=0x4a96 in 
Object.wait() [0x7fcd903ea000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x7fcd75fc0550> (a java.lang.ref.Reference$Lock)
at java.lang.Object.wait(Object.java:485)
at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:116)
- locked <0x7fcd75fc0550> (a java.lang.ref.Reference$Lock)

"main" prio=10 tid=0x7fcd8c00a800 nid=0x4a92 in Object.wait() 
[0x7fcd91b06000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x7fcd752528e8> (a org.apache.hadoop.ipc.Client$Call)
at java.lang.Object.wait(Object.java:485)
at org.apache.hadoop.ipc.Client.call(Client.java:1284)
- locked <0x7fcd752528e8> (a org.apache.hadoop.ipc.Client$Call)
at org.apache.hadoop.ipc.Client.call(Client.java:1250)
at 
org.apache.hadoo

[jira] [Commented] (HADOOP-9444) $var shell substitution in properties are not expanded in hadoop-policy.xml

2013-03-29 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13617521#comment-13617521
 ] 

Roman Shaposhnik commented on HADOOP-9444:
--

Perfect! Thanks a million for your quick review and commit!

> $var shell substitution in properties are not expanded in hadoop-policy.xml
> ---
>
> Key: HADOOP-9444
> URL: https://issues.apache.org/jira/browse/HADOOP-9444
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.0.1-alpha
> Environment: BigTop Kerberized cluster test environment
>Reporter: Konstantin Boudnik
>Assignee: Roman Shaposhnik
>Priority: Blocker
> Fix For: 2.0.4-alpha
>
> Attachments: YARN-509.patch.txt
>
>
> During BigTop 0.6.0 release test cycle, [~rvs] came around the following 
> problem:
> {noformat}
> 013-03-26 15:37:03,573 FATAL
> org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting
> NodeManager
> org.apache.hadoop.yarn.YarnException: Failed to Start
> org.apache.hadoop.yarn.server.nodemanager.NodeManager
> at 
> org.apache.hadoop.yarn.service.CompositeService.start(CompositeService.java:78)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.start(NodeManager.java:199)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:322)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:359)
> Caused by: org.apache.avro.AvroRuntimeException:
> java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.start(NodeStatusUpdaterImpl.java:162)
> at 
> org.apache.hadoop.yarn.service.CompositeService.start(CompositeService.java:68)
> ... 3 more
> Caused by: java.lang.reflect.UndeclaredThrowableException
> at 
> org.apache.hadoop.yarn.exceptions.impl.pb.YarnRemoteExceptionPBImpl.unwrapAndThrowException(YarnRemoteExceptionPBImpl.java:128)
> at 
> org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:61)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.registerWithRM(NodeStatusUpdaterImpl.java:199)
> at 
> org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl.start(NodeStatusUpdaterImpl.java:158)
> ... 4 more
> Caused by: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException):
> User yarn/ip-10-46-37-244.ec2.internal@BIGTOP (auth:KERBEROS) is not
> authorized for protocol interface
> org.apache.hadoop.yarn.server.api.ResourceTrackerPB, expected client
> Kerberos principal is yarn/ip-10-46-37-244.ec2.internal@BIGTOP
> at org.apache.hadoop.ipc.Client.call(Client.java:1235)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
> at $Proxy26.registerNodeManager(Unknown Source)
> at 
> org.apache.hadoop.yarn.server.api.impl.pb.client.ResourceTrackerPBClientImpl.registerNodeManager(ResourceTrackerPBClientImpl.java:59)
> ... 6 more
> {noformat}
> The most significant part is 
> {{User yarn/ip-10-46-37-244.ec2.internal@BIGTOP (auth:KERBEROS) is not 
> authorized for protocol interface  
> org.apache.hadoop.yarn.server.api.ResourceTrackerPB}} indicating that 
> ResourceTrackerPB hasn't been annotated with {{@KerberosInfo}} nor 
> {{@TokenInfo}}
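The issue title concerns ${var}-style values in hadoop-policy.xml that were left unexpanded before the ACLs were parsed. The following is a minimal illustrative sketch (not Hadoop's actual Configuration code; class and variable names are hypothetical) of the kind of variable expansion a property value such as "${HADOOP_YARN_USER}" needs before it can resolve to a real user name:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal sketch of ${var} expansion over a property value. An ACL entry
// like "${HADOOP_YARN_USER}" in hadoop-policy.xml is only meaningful if
// this kind of substitution runs before the ACL is parsed.
public class VarExpansion {
    private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)\\}");

    static String expand(String value, Map<String, String> vars) {
        Matcher m = VAR.matcher(value);
        StringBuffer sb = new StringBuffer();
        while (m.find()) {
            // Fall back to the literal text when the variable is undefined.
            String repl = vars.getOrDefault(m.group(1), m.group(0));
            m.appendReplacement(sb, Matcher.quoteReplacement(repl));
        }
        m.appendTail(sb);
        return sb.toString();
    }

    public static void main(String[] args) {
        String expanded = expand("${HADOOP_YARN_USER}",
                                 Map.of("HADOOP_YARN_USER", "yarn"));
        System.out.println(expanded); // prints "yarn"
    }
}
```

Without the expansion step, the literal string "${HADOOP_YARN_USER}" is compared against the Kerberos short name and authorization fails as shown in the stack trace above.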

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Reopened] (HADOOP-9399) protoc maven plugin doesn't work on mvn 3.0.2

2013-03-22 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik reopened HADOOP-9399:
--

  Assignee: Konstantin Boudnik  (was: Todd Lipcon)

Cos, can we, please, backport it to branch-2.0.4-alpha?

> protoc maven plugin doesn't work on mvn 3.0.2
> -
>
> Key: HADOOP-9399
> URL: https://issues.apache.org/jira/browse/HADOOP-9399
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Konstantin Boudnik
>Priority: Minor
> Fix For: 3.0.0, 2.0.5-beta, 2.0.4-alpha
>
> Attachments: hadoop-9399.txt
>
>
> On my machine with mvn 3.0.2, I get a ClassCastException trying to use the 
> maven protoc plugin. The issue seems to be that mvn 3.0.2 sees the List 
> parameter, and doesn't see the generic type argument, and stuffs Strings 
> inside instead. So, we get ClassCastException trying to use the objects as 
> Files.
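The failure mode described above can be reproduced in plain Java (this is an illustrative sketch, not Maven plugin code): when a field is declared as a raw List, an unchecked cast to List&lt;File&gt; compiles cleanly, and the String the container stuffed in only blows up with a ClassCastException at the point of use, where the compiler inserts the checkcast.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Sketch of the erasure-related failure: Strings placed into a raw List
// pass the unchecked cast silently and fail only when read back as Files.
public class RawListDemo {
    // Returns true when reading the first element as a File throws CCE.
    @SuppressWarnings({"rawtypes", "unchecked"})
    static boolean failsAsFile(List raw) {
        List<File> files = (List<File>) raw;   // unchecked cast: no runtime check
        try {
            File f = files.get(0);             // checkcast to File happens here
            return f == null;                  // unreachable for a String element
        } catch (ClassCastException e) {
            return true;
        }
    }

    @SuppressWarnings({"rawtypes", "unchecked"})
    public static void main(String[] args) {
        List raw = new ArrayList();
        raw.add("src/main/proto");             // String stuffed in by the container
        System.out.println(failsAsFile(raw));  // prints "true"
    }
}
```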

--


[jira] [Updated] (HADOOP-9399) protoc maven plugin doesn't work on mvn 3.0.2

2013-03-22 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-9399:
-

Fix Version/s: 2.0.4-alpha

> protoc maven plugin doesn't work on mvn 3.0.2
> -
>
> Key: HADOOP-9399
> URL: https://issues.apache.org/jira/browse/HADOOP-9399
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Fix For: 3.0.0, 2.0.5-beta, 2.0.4-alpha
>
> Attachments: hadoop-9399.txt
>
>
> On my machine with mvn 3.0.2, I get a ClassCastException trying to use the 
> maven protoc plugin. The issue seems to be that mvn 3.0.2 sees the List 
> parameter, and doesn't see the generic type argument, and stuffs Strings 
> inside instead. So, we get ClassCastException trying to use the objects as 
> Files.

--


[jira] [Updated] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-18 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-9299:
-

Attachment: HADOOP-9299-branch2.0.4.patch

> kerberos name resolution is kicking in even when kerberos is not configured
> ---
>
> Key: HADOOP-9299
> URL: https://issues.apache.org/jira/browse/HADOOP-9299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Roman Shaposhnik
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 3.0.0, 2.0.5-beta
>
> Attachments: HADOOP-9299-branch2.0.4.patch, HADOOP-9299.patch, 
> HADOOP-9299.patch
>
>
> Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
> from the RC0 2.0.3-alpha tarball:
> {noformat}
> 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
> [TRANSIENT], ErrorCode [JA009], Message [JA009: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.(AbstractDelegationTokenIdentifier.java:68)
> at 
> org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.(MRDelegationTokenIdentifier.java:51)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
> at 
> org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
> at 
> org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
> Caused by: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.(AbstractDelegationTokenIdentifier.java:66)
> ... 12 more
> ]
> {noformat}
> This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
> is a Hadoop issue rather than the oozie one is because when I hack 
> /etc/krb5.conf to be:
> {noformat}
> [libdefaults]
>ticket_lifetime = 600
>default_realm = LOCALHOST
>default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
>default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
> [realms]
>LOCALHOST = {
>kdc = localhost:88
>default_domain = .local
>}
> [domain_realm]
>.local = LOCALHOST
> [logging]
>kdc = FILE:/var/log/krb5kdc.log
>admin_server = FILE:/var/log/kadmin.log
>default = FILE:/var/log/krb5lib.log
> {noformat}
> The issue goes away. 
> Now, once again -- the kerberos auth is NOT configured for Hadoop, hence it 
> should NOT pay attention to /etc/krb5.conf to begin with.
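A simplified sketch of the short-name translation that KerberosName.getShortName() performs may help explain the error (this is not Hadoop's implementation; the realm handling is reduced to a single default-realm rule, and the names are the ones from the report). The principal's realm (LOCALREALM) does not match the default realm picked up from /etc/krb5.conf, so no rule applies and the lookup throws, which is exactly why rewriting krb5.conf makes the symptom disappear:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Reduced model of Kerberos short-name resolution: strip the realm only
// when it matches the default realm; otherwise no rule applies.
public class ShortNameDemo {
    static String shortName(String principal, String defaultRealm) {
        Matcher m = Pattern.compile("([^/@]+)(/[^@]+)?@(.+)").matcher(principal);
        if (!m.matches()) throw new IllegalArgumentException(principal);
        if (m.group(3).equals(defaultRealm)) {
            return m.group(1);  // default-realm principal -> first component
        }
        // Mirrors KerberosName$NoMatchingRule from the stack trace above.
        throw new IllegalStateException("No rules applied to " + principal);
    }

    public static void main(String[] args) {
        System.out.println(shortName("yarn@LOCALHOST", "LOCALHOST")); // "yarn"
        try {
            shortName("yarn/localhost@LOCALREALM", "LOCALHOST");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```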

--


[jira] [Commented] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-18 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13605412#comment-13605412
 ] 

Roman Shaposhnik commented on HADOOP-9299:
--

Thanks Cos! I'm attaching a backport of the original patch into branch-2.0.4

> kerberos name resolution is kicking in even when kerberos is not configured
> ---
>
> Key: HADOOP-9299
> URL: https://issues.apache.org/jira/browse/HADOOP-9299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Roman Shaposhnik
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 3.0.0, 2.0.5-beta
>
> Attachments: HADOOP-9299-branch2.0.4.patch, HADOOP-9299.patch, 
> HADOOP-9299.patch
>
>
> Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
> from the RC0 2.0.3-alpha tarball:
> {noformat}
> 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
> [TRANSIENT], ErrorCode [JA009], Message [JA009: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.(AbstractDelegationTokenIdentifier.java:68)
> at 
> org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.(MRDelegationTokenIdentifier.java:51)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
> at 
> org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
> at 
> org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
> Caused by: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.(AbstractDelegationTokenIdentifier.java:66)
> ... 12 more
> ]
> {noformat}
> This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
> is a Hadoop issue rather than the oozie one is because when I hack 
> /etc/krb5.conf to be:
> {noformat}
> [libdefaults]
>ticket_lifetime = 600
>default_realm = LOCALHOST
>default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
>default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
> [realms]
>LOCALHOST = {
>kdc = localhost:88
>default_domain = .local
>}
> [domain_realm]
>.local = LOCALHOST
> [logging]
>kdc = FILE:/var/log/krb5kdc.log
>admin_server = FILE:/var/log/kadmin.log
>default = FILE:/var/log/krb5lib.log
> {noformat}
> The issue goes away. 
> Now, once again -- the kerberos auth is NOT configured for Hadoop, hence it 
> should NOT pay attention to /etc/krb5.conf to begin with.

--


[jira] [Commented] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-18 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13605193#comment-13605193
 ] 

Roman Shaposhnik commented on HADOOP-9299:
--

Wasn't there an agreement for this to be fixed for 2.0.4-alpha?

> kerberos name resolution is kicking in even when kerberos is not configured
> ---
>
> Key: HADOOP-9299
> URL: https://issues.apache.org/jira/browse/HADOOP-9299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Roman Shaposhnik
>Assignee: Daryn Sharp
>Priority: Blocker
> Fix For: 3.0.0, 2.0.5-beta
>
> Attachments: HADOOP-9299.patch, HADOOP-9299.patch
>
>
> Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
> from the RC0 2.0.3-alpha tarball:
> {noformat}
> 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
> [TRANSIENT], ErrorCode [JA009], Message [JA009: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.(AbstractDelegationTokenIdentifier.java:68)
> at 
> org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.(MRDelegationTokenIdentifier.java:51)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
> at 
> org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
> at 
> org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
> Caused by: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.(AbstractDelegationTokenIdentifier.java:66)
> ... 12 more
> ]
> {noformat}
> This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
> is a Hadoop issue rather than the oozie one is because when I hack 
> /etc/krb5.conf to be:
> {noformat}
> [libdefaults]
>ticket_lifetime = 600
>default_realm = LOCALHOST
>default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
>default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
> [realms]
>LOCALHOST = {
>kdc = localhost:88
>default_domain = .local
>}
> [domain_realm]
>.local = LOCALHOST
> [logging]
>kdc = FILE:/var/log/krb5kdc.log
>admin_server = FILE:/var/log/kadmin.log
>default = FILE:/var/log/krb5lib.log
> {noformat}
> The issue goes away. 
> Now, once again -- the kerberos auth is NOT configured for Hadoop, hence it 
> should NOT pay attention to /etc/krb5.conf to begin with.

--


[jira] [Updated] (HADOOP-9409) MR Client gets a renewer token exception while Oozie is submitting a job

2013-03-15 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-9409:
-

Summary: MR Client gets a renewer token exception while Oozie is 
submitting a job  (was: Oozie gets a bizarre exception while submitting a job: 
Message missing required fields: renewer)

Updating the description to match the insight from Sid

> MR Client gets a renewer token exception while Oozie is submitting a job
> -
>
> Key: HADOOP-9409
> URL: https://issues.apache.org/jira/browse/HADOOP-9409
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.0.3-alpha
>Reporter: Roman Shaposhnik
>Priority: Blocker
> Fix For: 2.0.4-alpha
>
>
> After the fix for HADOOP-9299 I'm now getting the following bizarre exception 
> in Oozie while trying to submit a job. This also seems to be KRB related:
> {noformat}
> 2013-03-15 13:34:16,555  WARN ActionStartXCommand:542 - USER[hue] GROUP[-] 
> TOKEN[] APP[MapReduce] JOB[001-130315123130987-oozie-oozi-W] 
> ACTION[001-130315123130987-oozie-oozi-W@Sleep] Error starting action 
> [Sleep]. ErrorType [ERROR], ErrorCode [UninitializedMessageException], 
> Message [UninitializedMessageException: Message missing required fields: 
> renewer]
> org.apache.oozie.action.ActionExecutorException: 
> UninitializedMessageException: Message missing required fields: renewer
>   at 
> org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:401)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:738)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:889)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:211)
>   at 
> org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:59)
>   at org.apache.oozie.command.XCommand.call(XCommand.java:277)
>   at 
> org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:326)
>   at 
> org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:255)
>   at 
> org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:175)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>   at java.lang.Thread.run(Thread.java:662)
> Caused by: com.google.protobuf.UninitializedMessageException: Message missing 
> required fields: renewer
>   at 
> com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:605)
>   at 
> org.apache.hadoop.security.proto.SecurityProtos$GetDelegationTokenRequestProto$Builder.build(SecurityProtos.java:973)
>   at 
> org.apache.hadoop.mapreduce.v2.api.protocolrecords.impl.pb.GetDelegationTokenRequestPBImpl.mergeLocalToProto(GetDelegationTokenRequestPBImpl.java:84)
>   at 
> org.apache.hadoop.mapreduce.v2.api.protocolrecords.impl.pb.GetDelegationTokenRequestPBImpl.getProto(GetDelegationTokenRequestPBImpl.java:67)
>   at 
> org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getDelegationToken(MRClientProtocolPBClientImpl.java:200)
>   at 
> org.apache.hadoop.mapred.YARNRunner.getDelegationTokenFromHS(YARNRunner.java:194)
>   at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:273)
>   at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:392)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218)
>   at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1215)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1439)
>   at org.apache.hadoop.mapreduce.Job.submit(Job.java:1215)
>   at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:581)
>   at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:396)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1439)
>   at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:576)
>   at 
> org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:723)
>   ... 10 more
> 2013-03-15 13:34:16,555  WARN ActionStartXCommand:542 - USER[hue] GROUP[-] 
> TOKEN[] APP[MapReduce] JOB[001-1303151231

[jira] [Commented] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-15 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13603600#comment-13603600
 ] 

Roman Shaposhnik commented on HADOOP-9299:
--

Daryn, thanks for providing a patch! I've updated it to be applicable to the 
branch-2.0.4-alpha and pulled into Bigtop. Things did improve slightly but we 
don't seem to be out of the woods yet. Now, I'm getting the exception that 
seems to be somehow krb related when Oozie tries to submit a job. I filed 
HADOOP-9409 to track it.

Thank you guys for your help tracking this down!

> kerberos name resolution is kicking in even when kerberos is not configured
> ---
>
> Key: HADOOP-9299
> URL: https://issues.apache.org/jira/browse/HADOOP-9299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Roman Shaposhnik
>Assignee: Daryn Sharp
>Priority: Blocker
> Attachments: HADOOP-9299.patch, HADOOP-9299.patch
>
>
> Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
> from the RC0 2.0.3-alpha tarball:
> {noformat}
> 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
> [TRANSIENT], ErrorCode [JA009], Message [JA009: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.(AbstractDelegationTokenIdentifier.java:68)
> at 
> org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.(MRDelegationTokenIdentifier.java:51)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
> at 
> org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
> at 
> org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
> Caused by: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.(AbstractDelegationTokenIdentifier.java:66)
> ... 12 more
> ]
> {noformat}
> This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
> is a Hadoop issue rather than the oozie one is because when I hack 
> /etc/krb5.conf to be:
> {noformat}
> [libdefaults]
>ticket_lifetime = 600
>default_realm = LOCALHOST
>default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
>default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
> [realms]
>LOCALHOST = {
>kdc = localhost:88
>default_domain = .local
>}
> [domain_realm]
>.local = LOCALHOST
> [logging]
>kdc = FILE:/var/log/krb5kdc.log
>admin_server = FILE:/var/log/kadmin.log
>default = FILE:/var/log/krb5lib.log
> {noformat}
> The issue goes away. 
> Now, once again -- the kerberos auth is NOT configured for Hadoop, hence it 
> should NOT pay attention to /etc/krb5.conf to begin with.

--


[jira] [Created] (HADOOP-9409) Oozie gets a bizarre exception while submitting a job: Message missing required fields: renewer

2013-03-15 Thread Roman Shaposhnik (JIRA)
Roman Shaposhnik created HADOOP-9409:


 Summary: Oozie gets a bizarre exception while submitting a job: 
Message missing required fields: renewer
 Key: HADOOP-9409
 URL: https://issues.apache.org/jira/browse/HADOOP-9409
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.3-alpha
Reporter: Roman Shaposhnik
Priority: Blocker
 Fix For: 2.0.4-alpha


After the fix for HADOOP-9299 I'm now getting the following bizarre exception 
in Oozie while trying to submit a job. This also seems to be KRB related:

{noformat}
2013-03-15 13:34:16,555  WARN ActionStartXCommand:542 - USER[hue] GROUP[-] 
TOKEN[] APP[MapReduce] JOB[001-130315123130987-oozie-oozi-W] 
ACTION[001-130315123130987-oozie-oozi-W@Sleep] Error starting action 
[Sleep]. ErrorType [ERROR], ErrorCode [UninitializedMessageException], Message 
[UninitializedMessageException: Message missing required fields: renewer]
org.apache.oozie.action.ActionExecutorException: UninitializedMessageException: 
Message missing required fields: renewer
at 
org.apache.oozie.action.ActionExecutor.convertException(ActionExecutor.java:401)
at 
org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:738)
at 
org.apache.oozie.action.hadoop.JavaActionExecutor.start(JavaActionExecutor.java:889)
at 
org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:211)
at 
org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:59)
at org.apache.oozie.command.XCommand.call(XCommand.java:277)
at 
org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:326)
at 
org.apache.oozie.service.CallableQueueService$CompositeCallable.call(CallableQueueService.java:255)
at 
org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:175)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: com.google.protobuf.UninitializedMessageException: Message missing 
required fields: renewer
at 
com.google.protobuf.AbstractMessage$Builder.newUninitializedMessageException(AbstractMessage.java:605)
at 
org.apache.hadoop.security.proto.SecurityProtos$GetDelegationTokenRequestProto$Builder.build(SecurityProtos.java:973)
at 
org.apache.hadoop.mapreduce.v2.api.protocolrecords.impl.pb.GetDelegationTokenRequestPBImpl.mergeLocalToProto(GetDelegationTokenRequestPBImpl.java:84)
at 
org.apache.hadoop.mapreduce.v2.api.protocolrecords.impl.pb.GetDelegationTokenRequestPBImpl.getProto(GetDelegationTokenRequestPBImpl.java:67)
at 
org.apache.hadoop.mapreduce.v2.api.impl.pb.client.MRClientProtocolPBClientImpl.getDelegationToken(MRClientProtocolPBClientImpl.java:200)
at 
org.apache.hadoop.mapred.YARNRunner.getDelegationTokenFromHS(YARNRunner.java:194)
at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:273)
at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:392)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1218)
at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1215)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1439)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1215)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:581)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1439)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:576)
at 
org.apache.oozie.action.hadoop.JavaActionExecutor.submitLauncher(JavaActionExecutor.java:723)
... 10 more
2013-03-15 13:34:16,555  WARN ActionStartXCommand:542 - USER[hue] GROUP[-] 
TOKEN[] APP[MapReduce] JOB[001-13031512313
{noformat}
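The root cause named in the exception is a proto2 required field: GetDelegationTokenRequestProto declares "renewer" as required, and the generated builder's build() throws UninitializedMessageException when the caller never sets it. The sketch below models that contract in plain Java (it is not the generated protobuf code; a plain IllegalStateException stands in for UninitializedMessageException):

```java
// Model of a proto2-style builder whose build() rejects a message that is
// missing a required field, mirroring "Message missing required fields:
// renewer" from the Oozie log above.
public class RequiredFieldDemo {
    static final class Request {
        final String renewer;
        private Request(String renewer) { this.renewer = renewer; }

        static final class Builder {
            private String renewer;  // required field, initially unset
            Builder setRenewer(String r) { this.renewer = r; return this; }
            Request build() {
                if (renewer == null) {
                    // protobuf raises UninitializedMessageException here
                    throw new IllegalStateException(
                        "Message missing required fields: renewer");
                }
                return new Request(renewer);
            }
        }
    }

    public static void main(String[] args) {
        try {
            new Request.Builder().build();   // client never set the renewer
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
        Request ok = new Request.Builder().setRenewer("yarn").build();
        System.out.println(ok.renewer);      // prints "yarn"
    }
}
```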

--


[jira] [Commented] (HADOOP-9299) kerberos name resolution is kicking in even when kerberos is not configured

2013-03-13 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13601712#comment-13601712
 ] 

Roman Shaposhnik commented on HADOOP-9299:
--

I'd be more than happy to test any patches in Bigtop

> kerberos name resolution is kicking in even when kerberos is not configured
> ---
>
> Key: HADOOP-9299
> URL: https://issues.apache.org/jira/browse/HADOOP-9299
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.0.3-alpha
>Reporter: Roman Shaposhnik
>Priority: Blocker
>
> Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
> from the RC0 2.0.3-alpha tarball:
> {noformat}
> 528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
> [TRANSIENT], ErrorCode [JA009], Message [JA009: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.(AbstractDelegationTokenIdentifier.java:68)
> at 
> org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.(MRDelegationTokenIdentifier.java:51)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
> at 
> org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
> at 
> org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
> Caused by: 
> org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: 
> No rules applied to yarn/localhost@LOCALREALM
> at 
> org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:66)
> ... 12 more
> ]
> {noformat}
> This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this 
> is a Hadoop issue rather than the oozie one is because when I hack 
> /etc/krb5.conf to be:
> {noformat}
> [libdefaults]
>     ticket_lifetime = 600
>     default_realm = LOCALHOST
>     default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
>     default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc
> [realms]
>     LOCALHOST = {
>         kdc = localhost:88
>         default_domain = .local
>     }
> [domain_realm]
>     .local = LOCALHOST
> [logging]
>     kdc = FILE:/var/log/krb5kdc.log
>     admin_server = FILE:/var/log/kadmin.log
>     default = FILE:/var/log/krb5lib.log
> {noformat}
> The issue goes away. 
> Now, once again -- the kerberos auth is NOT configured for Hadoop, hence it 
> should NOT pay attention to /etc/krb5.conf to begin with.
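The failure mode described above can be sketched outside Hadoop. A DEFAULT-style auth_to_local rule only strips the realm when it matches the default realm resolved from /etc/krb5.conf, so any principal from another realm gets "No rules applied". This is a minimal standalone sketch, not Hadoop's actual KerberosName implementation; the class and method names are made up for illustration:

```java
// Minimal sketch (NOT Hadoop's KerberosName code) of why a default-only
// rule set rejects principals from a non-default realm: the rule strips
// the realm only when it equals the default realm, so
// yarn/localhost@LOCALREALM fails once the default realm is LOCALHOST.
public class AuthToLocalSketch {
    /** Returns the short name, or null when no rule applies
     *  (the analogue of KerberosName$NoMatchingRule being thrown). */
    public static String shortName(String principal, String defaultRealm) {
        int at = principal.indexOf('@');
        if (at < 0) return principal;                     // already short
        String realm = principal.substring(at + 1);
        if (!realm.equals(defaultRealm)) return null;     // "No rules applied to ..."
        String base = principal.substring(0, at);
        int slash = base.indexOf('/');                    // drop host component
        return slash < 0 ? base : base.substring(0, slash);
    }

    public static void main(String[] args) {
        System.out.println(shortName("yarn/localhost@LOCALREALM", "LOCALHOST")); // null
        System.out.println(shortName("yarn/localhost@LOCALHOST", "LOCALHOST"));  // yarn
    }
}
```

This also shows why hacking /etc/krb5.conf makes the issue go away: it changes which realm counts as the default, not anything in Hadoop itself.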



[jira] [Commented] (HADOOP-8796) commands_manual.html link is broken

2013-03-08 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597344#comment-13597344
 ] 

Roman Shaposhnik commented on HADOOP-8796:
--

Yup. What I meant to say is this: "Given the way the docs look now, this seems to 
no longer apply."

> commands_manual.html link is broken
> ---
>
> Key: HADOOP-8796
> URL: https://issues.apache.org/jira/browse/HADOOP-8796
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.1-alpha
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
>Priority: Minor
> Attachments: screenshot-1.jpg
>
>
> If you go to http://hadoop.apache.org/docs/r2.0.0-alpha/ and click on Hadoop 
> Commands you are getting a broken link: 
> http://hadoop.apache.org/docs/r2.0.0-alpha/hadoop-project-dist/hadoop-common/commands_manual.html



[jira] [Commented] (HADOOP-8796) commands_manual.html link is broken

2013-03-08 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597327#comment-13597327
 ] 

Roman Shaposhnik commented on HADOOP-8796:
--

The way the docs look now, this seems to no longer apply.

> commands_manual.html link is broken
> ---
>
> Key: HADOOP-8796
> URL: https://issues.apache.org/jira/browse/HADOOP-8796
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.1-alpha
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
>Priority: Minor
> Attachments: screenshot-1.jpg
>
>
> If you go to http://hadoop.apache.org/docs/r2.0.0-alpha/ and click on Hadoop 
> Commands you are getting a broken link: 
> http://hadoop.apache.org/docs/r2.0.0-alpha/hadoop-project-dist/hadoop-common/commands_manual.html



[jira] [Commented] (HADOOP-7996) change location of the native libraries to lib instead of lib/native

2013-03-08 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13597287#comment-13597287
 ] 

Roman Shaposhnik commented on HADOOP-7996:
--

I think we're good

> change location of the native libraries to lib instead of lib/native
> 
>
> Key: HADOOP-7996
> URL: https://issues.apache.org/jira/browse/HADOOP-7996
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, conf, documentation, scripts
>Affects Versions: 0.20.2
>Reporter: Roman Shaposhnik
>Assignee: Eric Yang
>




[jira] [Created] (HADOOP-9301) hadoop client servlet/jsp/jetty/tomcat JARs creating conflicts in Oozie & HttpFS

2013-02-12 Thread Roman Shaposhnik (JIRA)
Roman Shaposhnik created HADOOP-9301:


 Summary: hadoop client servlet/jsp/jetty/tomcat JARs creating 
conflicts in Oozie & HttpFS
 Key: HADOOP-9301
 URL: https://issues.apache.org/jira/browse/HADOOP-9301
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.3-alpha
Reporter: Roman Shaposhnik
Assignee: Alejandro Abdelnur
Priority: Blocker
 Fix For: 2.0.3-alpha


Here's how to reproduce:

{noformat}
$ cd hadoop-client
$ mvn dependency:tree | egrep 'jsp|jetty'
[INFO] |  +- org.mortbay.jetty:jetty:jar:6.1.26.cloudera.2:compile
[INFO] |  +- org.mortbay.jetty:jetty-util:jar:6.1.26.cloudera.2:compile
[INFO] |  +- javax.servlet.jsp:jsp-api:jar:2.1:compile
{noformat}

This has the potential to completely screw up clients like Oozie, etc. -- hence 
a blocker.

It seems that while common excludes those JARs, they are sneaking in via hdfs; 
we need to exclude them there too.



[jira] [Created] (HADOOP-9299) kerberos resolution is kicking in even when kerberos is not configured

2013-02-12 Thread Roman Shaposhnik (JIRA)
Roman Shaposhnik created HADOOP-9299:


 Summary: kerberos resolution is kicking in even when kerberos is 
not configured
 Key: HADOOP-9299
 URL: https://issues.apache.org/jira/browse/HADOOP-9299
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Roman Shaposhnik
Priority: Blocker
 Fix For: 2.0.3-alpha


Here's what I'm observing on a fully distributed cluster deployed via Bigtop 
from the RC0 2.0.3-alpha tarball:

{noformat}
528077-oozie-tucu-W@mr-node] Error starting action [mr-node]. ErrorType 
[TRANSIENT], ErrorCode [JA009], Message [JA009: 
org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No 
rules applied to yarn/localhost@LOCALREALM
at 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:68)
at 
org.apache.hadoop.mapreduce.v2.api.MRDelegationTokenIdentifier.<init>(MRDelegationTokenIdentifier.java:51)
at 
org.apache.hadoop.mapreduce.v2.hs.HistoryClientService$HSClientProtocolHandler.getDelegationToken(HistoryClientService.java:336)
at 
org.apache.hadoop.mapreduce.v2.api.impl.pb.service.MRClientProtocolPBServiceImpl.getDelegationToken(MRClientProtocolPBServiceImpl.java:210)
at 
org.apache.hadoop.yarn.proto.MRClientProtocol$MRClientProtocolService$2.callBlockingMethod(MRClientProtocol.java:240)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:454)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1014)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1735)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1731)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1441)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1729)
Caused by: 
org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: No 
rules applied to yarn/localhost@LOCALREALM
at 
org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:378)
at 
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenIdentifier.<init>(AbstractDelegationTokenIdentifier.java:66)
... 12 more
]
{noformat}

This is submitting a mapreduce job via Oozie 3.3.1. The reason I think this is 
a Hadoop issue rather than the oozie one is because when I hack /etc/krb5.conf 
to be:

{noformat}
[libdefaults]
   ticket_lifetime = 600
   default_realm = LOCALHOST
   default_tkt_enctypes = des3-hmac-sha1 des-cbc-crc
   default_tgs_enctypes = des3-hmac-sha1 des-cbc-crc

[realms]
   LOCALHOST = {
   kdc = localhost:88
   default_domain = .local
   }

[domain_realm]
   .local = LOCALHOST

[logging]
   kdc = FILE:/var/log/krb5kdc.log
   admin_server = FILE:/var/log/kadmin.log
   default = FILE:/var/log/krb5lib.log
{noformat}

The issue goes away. 

Now, once again -- the kerberos auth is NOT configured for Hadoop, hence it 
should NOT pay attention to /etc/krb5.conf to begin with.





[jira] [Commented] (HADOOP-8806) libhadoop.so: dlopen should be better at locating libsnappy.so, etc.

2012-09-17 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13457370#comment-13457370
 ] 

Roman Shaposhnik commented on HADOOP-8806:
--

Since the Linux (GNU, really) documentation leaves quite a bit open to 
"interpretation" as far as $ORIGIN and dependencies of unbundled products go, 
here's a much more complete treatment of the topic from Solaris' Linkers and 
Libraries doc: 
http://docs.oracle.com/cd/E19253-01/817-1984/appendixc-4/index.html

> libhadoop.so: dlopen should be better at locating libsnappy.so, etc.
> 
>
> Key: HADOOP-8806
> URL: https://issues.apache.org/jira/browse/HADOOP-8806
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.0.0-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Fix For: 2.0.3-alpha
>
> Attachments: HADOOP-8806.003.patch, rpathtest2.tar.gz, 
> rpathtest.tar.gz
>
>
> libhadoop calls {{dlopen}} to load {{libsnappy.so}} and {{libz.so}}.  These 
> libraries can be bundled in the {{$HADOOP_ROOT/lib/native}} directory.  For 
> example, the {{-Dbundle.snappy}} build option copies {{libsnappy.so}} to this 
> directory.  However, snappy can't be loaded from this directory unless 
> {{LD_LIBRARY_PATH}} is set to include this directory.
> Can we make this configuration "just work" without needing to rely on 
> {{LD_LIBRARY_PATH}}?
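The mismatch behind this issue can be pictured in a standalone sketch (NOT Hadoop code; the class and method names are made up): System.loadLibrary searches java.library.path, while dlopen in libhadoop searches LD_LIBRARY_PATH plus the system linker paths, so the two lookups can disagree about where libsnappy.so lives.

```java
// Illustration of the two divergent search paths discussed above:
// a bundled libsnappy.so in $HADOOP_ROOT/lib/native may be visible to
// java.library.path but invisible to dlopen's LD_LIBRARY_PATH search.
import java.io.File;
import java.util.Arrays;

public class LibSearchSketch {
    /** Returns the directories on a search path that contain fileName. */
    public static String[] dirsContaining(String searchPath, String fileName) {
        return Arrays.stream(searchPath.split(File.pathSeparator))
                .filter(d -> !d.isEmpty() && new File(d, fileName).exists())
                .toArray(String[]::new);
    }

    public static void main(String[] args) {
        String jlp = System.getProperty("java.library.path", "");
        String llp = System.getenv().getOrDefault("LD_LIBRARY_PATH", "");
        // The two result sets can differ: that is the configuration trap.
        System.out.println("java.library.path hits: "
                + Arrays.toString(dirsContaining(jlp, "libsnappy.so")));
        System.out.println("LD_LIBRARY_PATH hits:   "
                + Arrays.toString(dirsContaining(llp, "libsnappy.so")));
    }
}
```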



[jira] [Commented] (HADOOP-8806) libhadoop.so: search java.library.path when calling dlopen

2012-09-13 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13455485#comment-13455485
 ] 

Roman Shaposhnik commented on HADOOP-8806:
--

bq. [epel discussion]

I think this is a bit of a red herring here. I'm confident that libsnappy will 
get into the distros with time, and when that happens we have to be able to 
offer a choice other than "recompile your libhadoop.so". The current 
situation, where libsnappy.so gets bundled with hadoop as a separate object that 
can be ignored (if needed), is ideal from that standpoint. Statically linking all 
of it into libhadoop.so is a hammer that I'd rather not use right away. 

At this point the problem is that we've got two code paths: one in 
org.apache.hadoop.io.compress.snappy that does System.loadLibrary("snappy") 
and is fine, and another, apparently, in libhadoop.so that uses dlopen(). 
Would it be completely out of the question to focus on unifying the two?

> libhadoop.so: search java.library.path when calling dlopen
> --
>
> Key: HADOOP-8806
> URL: https://issues.apache.org/jira/browse/HADOOP-8806
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Colin Patrick McCabe
>Priority: Minor
>
> libhadoop calls {{dlopen}} to load {{libsnappy.so}} and {{libz.so}}.  These 
> libraries can be bundled in the {{$HADOOP_ROOT/lib/native}} directory.  For 
> example, the {{-Dbundle.snappy}} build option copies {{libsnappy.so}} to this 
> directory.  However, snappy can't be loaded from this directory unless 
> {{LD_LIBRARY_PATH}} is set to include this directory.
> Should we also search {{java.library.path}} when loading these libraries?



[jira] [Commented] (HADOOP-8806) libhadoop.so: search java.library.path when calling dlopen

2012-09-13 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13455440#comment-13455440
 ] 

Roman Shaposhnik commented on HADOOP-8806:
--

FYI: snappy is now pretty widely available. Even on CentOS 5: 
http://pkgs.org/search/?keyword=snappy

With that in mind, I'd rather link against it dynamically, especially if we are 
not getting rid of the dynamic aspect altogether (libz will remain 
dynamically linked).

> libhadoop.so: search java.library.path when calling dlopen
> --
>
> Key: HADOOP-8806
> URL: https://issues.apache.org/jira/browse/HADOOP-8806
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Colin Patrick McCabe
>Priority: Minor
>
> libhadoop calls {{dlopen}} to load {{libsnappy.so}} and {{libz.so}}.  These 
> libraries can be bundled in the {{$HADOOP_ROOT/lib/native}} directory.  For 
> example, the {{-Dbundle.snappy}} build option copies {{libsnappy.so}} to this 
> directory.  However, snappy can't be loaded from this directory unless 
> {{LD_LIBRARY_PATH}} is set to include this directory.
> Should we also search {{java.library.path}} when loading these libraries?



[jira] [Created] (HADOOP-8796) commands_manual.html link is broken

2012-09-12 Thread Roman Shaposhnik (JIRA)
Roman Shaposhnik created HADOOP-8796:


 Summary: commands_manual.html link is broken
 Key: HADOOP-8796
 URL: https://issues.apache.org/jira/browse/HADOOP-8796
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.0.1-alpha
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
Priority: Minor
 Fix For: 2.0.2-alpha


If you go to http://hadoop.apache.org/docs/r2.0.0-alpha/ and click on Hadoop 
Commands you are getting a broken link: 
http://hadoop.apache.org/docs/r2.0.0-alpha/hadoop-project-dist/hadoop-common/commands_manual.html



[jira] [Commented] (HADOOP-8795) BASH tab completion doesn't look in PATH, assumes path to executable is specified

2012-09-12 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454526#comment-13454526
 ] 

Roman Shaposhnik commented on HADOOP-8795:
--

+1

> BASH tab completion doesn't look in PATH, assumes path to executable is 
> specified
> -
>
> Key: HADOOP-8795
> URL: https://issues.apache.org/jira/browse/HADOOP-8795
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Mackrory
> Attachments: HADOOP-8795.patch
>
>
> bash-tab-completion/hadoop.sh checks that the first token in the command is 
> an existing, executable file - which assumes that the path to the hadoop 
> executable is specified (or that it's in the working directory). If the 
> executable is somewhere else in PATH, tab completion will not work.
> I propose that the first token be passed through 'which' so that any 
> executables in the path also get detected. I've tested that this technique 
> will work in the event that relative and absolute paths are used as well.



[jira] [Commented] (HADOOP-8756) Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH

2012-09-12 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454365#comment-13454365
 ] 

Roman Shaposhnik commented on HADOOP-8756:
--

bq. The patch could be revised to manually search java.library.path, I guess. 
Would that be worthwhile?

Well, my concern is about all the clients that don't use the hadoop launcher 
script but need to access the snappy codec *on the client* side. Flume would be a 
good example here: since it launches the JVM directly, it also has to make sure 
it sets up LD_LIBRARY_PATH if we don't provide the manual search capability in 
core hadoop itself.

I guess what I'm trying to say is that we have to have a solution for things 
like Flume. It would be nice if it worked automagically.

> Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH
> ---
>
> Key: HADOOP-8756
> URL: https://issues.apache.org/jira/browse/HADOOP-8756
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.0.2-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HADOOP-8756.002.patch, HADOOP-8756.003.patch, 
> HADOOP-8756.004.patch
>
>
> We use {{System.loadLibrary("snappy")}} from the Java side.  However in 
> libhadoop, we use {{dlopen}} to open libsnappy.so dynamically.  
> System.loadLibrary uses {{java.library.path}} to resolve libraries, and 
> {{dlopen}} uses {{LD_LIBRARY_PATH}} and the system paths to resolve 
> libraries.  Because of this, the two library loading functions can be at odds.
> We should fix this so we only load the library once, preferably using the 
> standard Java {{java.library.path}}.
> We should also log the search path(s) we use for {{libsnappy.so}} when 
> loading fails, so that it's easier to diagnose configuration issues.



[jira] [Commented] (HADOOP-8756) Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH

2012-09-12 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13454333#comment-13454333
 ] 

Roman Shaposhnik commented on HADOOP-8756:
--

This seems to be related to HADOOP-8781. Just to make sure -- are the 
patches posted to this JIRA meant to provide an alternative fix for the issue, 
one that won't require us to change the value of LD_LIBRARY_PATH? If so, shouldn't 
your patches include the changes that would restore the old behavior?

> Fix SEGV when libsnappy is in java.library.path but not LD_LIBRARY_PATH
> ---
>
> Key: HADOOP-8756
> URL: https://issues.apache.org/jira/browse/HADOOP-8756
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.0.2-alpha
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
> Attachments: HADOOP-8756.002.patch, HADOOP-8756.003.patch, 
> HADOOP-8756.004.patch
>
>
> We use {{System.loadLibrary("snappy")}} from the Java side.  However in 
> libhadoop, we use {{dlopen}} to open libsnappy.so dynamically.  
> System.loadLibrary uses {{java.library.path}} to resolve libraries, and 
> {{dlopen}} uses {{LD_LIBRARY_PATH}} and the system paths to resolve 
> libraries.  Because of this, the two library loading functions can be at odds.
> We should fix this so we only load the library once, preferably using the 
> standard Java {{java.library.path}}.
> We should also log the search path(s) we use for {{libsnappy.so}} when 
> loading fails, so that it's easier to diagnose configuration issues.



[jira] [Commented] (HADOOP-8781) hadoop-config.sh should add JAVA_LIBRARY_PATH to LD_LIBRARY_PATH

2012-09-10 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13452535#comment-13452535
 ] 

Roman Shaposhnik commented on HADOOP-8781:
--

+1

> hadoop-config.sh should add JAVA_LIBRARY_PATH to LD_LIBRARY_PATH
> 
>
> Key: HADOOP-8781
> URL: https://issues.apache.org/jira/browse/HADOOP-8781
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.0.2-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 1.2.0, 2.0.2-alpha
>
> Attachments: HADOOP-8781-branch1.patch, HADOOP-8781-branch1.patch, 
> HADOOP-8781.patch, HADOOP-8781.patch
>
>
> Snappy SO fails to load properly if LD_LIBRARY_PATH does not include the path 
> where snappy SO is. This is observed in setups that don't have an independent 
> snappy installation (not installed by Hadoop)



[jira] [Commented] (HADOOP-8781) hadoop-config.sh should add JAVA_LIBRARY_PATH to LD_LIBRARY_PATH

2012-09-10 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13452526#comment-13452526
 ] 

Roman Shaposhnik commented on HADOOP-8781:
--

Looks good to me. One small nit -- I'd add this line to an already existing if 
statement that tests for non-emptiness of JAVA_LIBRARY_PATH a few lines down.
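The nit above amounts to guarding the append on non-emptiness. In illustrative Java terms (the real change is a one-liner in hadoop-config.sh; the class and method names here are made up):

```java
// Sketch of the hadoop-config.sh idea being reviewed: extend
// LD_LIBRARY_PATH with JAVA_LIBRARY_PATH, but only append a separator
// when both sides are non-empty, mirroring the existing if-statement
// that already tests JAVA_LIBRARY_PATH for non-emptiness.
public class PathMergeSketch {
    /** Joins two ':'-separated library paths, skipping empty pieces. */
    public static String appendPath(String ldLibraryPath, String javaLibraryPath) {
        if (javaLibraryPath == null || javaLibraryPath.isEmpty()) return ldLibraryPath;
        if (ldLibraryPath == null || ldLibraryPath.isEmpty()) return javaLibraryPath;
        return ldLibraryPath + ":" + javaLibraryPath;
    }

    public static void main(String[] args) {
        System.out.println(appendPath("/usr/lib", "/usr/lib/hadoop/lib/native"));
        System.out.println(appendPath("", "/usr/lib/hadoop/lib/native"));
    }
}
```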

> hadoop-config.sh should add JAVA_LIBRARY_PATH to LD_LIBRARY_PATH
> 
>
> Key: HADOOP-8781
> URL: https://issues.apache.org/jira/browse/HADOOP-8781
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.0.2-alpha
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Fix For: 1.2.0, 2.0.2-alpha
>
> Attachments: HADOOP-8781-branch1.patch, HADOOP-8781.patch
>
>
> Snappy SO fails to load properly if LD_LIBRARY_PATH does not include the path 
> where snappy SO is. This is observed in setups that don't have an independent 
> snappy installation (not installed by Hadoop)



[jira] [Commented] (HADOOP-8460) Document proper setting of HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR

2012-06-01 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13287477#comment-13287477
 ] 

Roman Shaposhnik commented on HADOOP-8460:
--

@Robert, I just wanted this JIRA to carry a fuller record of what's going on with the 
various Hadoop deployment models, so folks get full disclosure.

> Document proper setting of HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR
> --
>
> Key: HADOOP-8460
> URL: https://issues.apache.org/jira/browse/HADOOP-8460
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.3, 2.0.0-alpha
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Attachments: HADOOP-8460-branch-1.txt, HADOOP-8460-branch-1.txt, 
> HADOOP-8460-trunk.txt, HADOOP-8460-trunk.txt
>
>
> We should document that in a properly setup cluster HADOOP_PID_DIR and 
> HADOOP_SECURE_DN_PID_DIR should not point to /tmp, but should point to a 
> directory that normal users do not have access to.





[jira] [Commented] (HADOOP-8460) Document proper setting of HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR

2012-05-31 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13287025#comment-13287025
 ] 

Roman Shaposhnik commented on HADOOP-8460:
--

This is one of the things that the packaging takes care of. Not sure what the 
story is with the Hadoop native packages, but Bigtop puts everything in /var/run... so 
this is not a problem.

> Document proper setting of HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR
> --
>
> Key: HADOOP-8460
> URL: https://issues.apache.org/jira/browse/HADOOP-8460
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.3, 2.0.0-alpha
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
> Attachments: HADOOP-8460-branch-1.txt, HADOOP-8460-trunk.txt
>
>
> We should document that in a properly setup cluster HADOOP_PID_DIR and 
> HADOOP_SECURE_DN_PID_DIR should not point to /tmp, but should point to a 
> directory that normal users do not have access to.





[jira] [Commented] (HADOOP-8399) Remove JDK5 dependency from Hadoop 1.0+ line

2012-05-14 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13274663#comment-13274663
 ] 

Roman Shaposhnik commented on HADOOP-8399:
--

A strong +1 -- would be really nice to get rid of this ancient dependency.

> Remove JDK5 dependency from Hadoop 1.0+ line
> 
>
> Key: HADOOP-8399
> URL: https://issues.apache.org/jira/browse/HADOOP-8399
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.2
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HADOOP-8399.patch
>
>
> This issues has been fixed in Hadoop starting from 0.21 (see HDFS-1552).
> I propose to make the same fix for 1.0 line and get rid of JDK5 dependency 
> all together.





[jira] [Commented] (HADOOP-8353) hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop

2012-05-10 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13272597#comment-13272597
 ] 

Roman Shaposhnik commented on HADOOP-8353:
--

As far as testing goes -- I just manually replaced the existing scripts in a 
pseudo-distributed Hadoop deployment and ran a couple of start/stop commands.

> hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop
> -
>
> Key: HADOOP-8353
> URL: https://issues.apache.org/jira/browse/HADOOP-8353
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Attachments: HADOOP-8353-2.patch.txt, HADOOP-8353.patch.txt
>
>
> The way the stop action is implemented is a simple SIGTERM sent to the JVM. 
> There's a time delay between when the action is called and when the process 
> actually exits. This can be misleading to the callers of the *-daemon.sh 
> scripts since they expect the stop action to return when the process is actually 
> stopped.
> I suggest we augment the stop action with a time-delayed check of the process 
> status and a SIGKILL once the delay has expired.
> I understand that sending SIGKILL is a measure of last resort and is 
> generally frowned upon among init.d script writers, but the excuse we have 
> for Hadoop is that it is engineered to be a fault-tolerant system and thus 
> there's no danger of putting the system into an inconsistent state with a violent 
> SIGKILL. Of course, the time delay will be long enough to make the SIGKILL 
> a rare event.
> Finally, there's always the option of an exponential back-off type of solution 
> if we decide that the SIGKILL timeout is too short.
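The proposed escalation maps naturally onto Java 8's Process API, which is one way to picture it. This is a sketch only (the real change is in the bash daemon scripts) and it assumes a POSIX `sleep` binary is on the PATH:

```java
// Sketch of the SIGTERM -> bounded wait -> SIGKILL escalation proposed
// above, expressed with java.lang.Process instead of a shell script.
import java.util.concurrent.TimeUnit;

public class GracefulStopSketch {
    /** Returns true if the process exited before the SIGKILL escalation. */
    public static boolean stop(Process p, long graceSeconds) throws InterruptedException {
        p.destroy();                                    // SIGTERM
        if (p.waitFor(graceSeconds, TimeUnit.SECONDS))  // bounded time delay
            return true;
        p.destroyForcibly();                            // SIGKILL, last resort
        p.waitFor();
        return false;
    }

    public static void main(String[] args) throws Exception {
        // A plain "sleep" honors SIGTERM, so no escalation should be needed.
        Process p = new ProcessBuilder("sleep", "30").start();
        System.out.println("graceful: " + stop(p, 5));
        System.out.println("alive after stop: " + p.isAlive());
    }
}
```

An exponential back-off variant would simply retry the bounded wait with a growing delay before escalating.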





[jira] [Updated] (HADOOP-8353) hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop

2012-05-10 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8353:
-

Attachment: HADOOP-8353-2.patch.txt

Patch with updated message attached.

> hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop
> -
>
> Key: HADOOP-8353
> URL: https://issues.apache.org/jira/browse/HADOOP-8353
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.0.0
>
> Attachments: HADOOP-8353-2.patch.txt, HADOOP-8353.patch.txt
>
>
> The way the stop action is implemented is a simple SIGTERM sent to the JVM. 
> There's a time delay between when the action is called and when the process 
> actually exits. This can be misleading to the callers of the *-daemon.sh 
> scripts since they expect the stop action to return when the process is actually 
> stopped.
> I suggest we augment the stop action with a time-delayed check of the process 
> status and a SIGKILL once the delay has expired.
> I understand that sending SIGKILL is a measure of last resort and is 
> generally frowned upon among init.d script writers, but the excuse we have 
> for Hadoop is that it is engineered to be a fault-tolerant system and thus 
> there's no danger of putting the system into an inconsistent state with a violent 
> SIGKILL. Of course, the time delay will be long enough to make the SIGKILL 
> a rare event.
> Finally, there's always the option of an exponential back-off type of solution 
> if we decide that the SIGKILL timeout is too short.





[jira] [Commented] (HADOOP-8353) hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop

2012-05-10 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13272442#comment-13272442
 ] 

Roman Shaposhnik commented on HADOOP-8353:
--

@Aaron,

bq. Perhaps we should make the message more verbose 

Agreed. I'll modify the patch to make it more obvious.

bq. "YARN_STOP_TIMEOUT" in the MR job history server

The unfortunate truth there is that *everything* else in that script has a 
YARN_ prefix (I suspect because it was copied from the yarn-daemon.sh). I'd 
rather keep things consistent, but if you really think this lonely var should 
be MR_ prefixed -- please let me know.

bq. Do similar changes not need to be made for the HDFS daemons?

HDFS daemons use hadoop-daemon.sh. In fact, at this point it could safely be 
called hdfs-daemon.sh, since I don't think anything else is really using it.

> hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop
> -
>
> Key: HADOOP-8353
> URL: https://issues.apache.org/jira/browse/HADOOP-8353
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.0.0
>
> Attachments: HADOOP-8353.patch.txt
>
>
> The way the stop action is implemented is a simple SIGTERM sent to the JVM. 
> There's a time delay between when the action is called and when the process 
> actually exits. This can be misleading to the callers of the *-daemon.sh 
> scripts since they expect the stop action to return when the process has 
> actually stopped.
> I suggest we augment the stop action with a time-delay check for the process 
> status and a SIGKILL once the delay has expired.
> I understand that sending SIGKILL is a measure of last resort and is 
> generally frowned upon among init.d script writers, but the excuse we have 
> for Hadoop is that it is engineered to be a fault-tolerant system, so there 
> is no danger of putting the system into an inconsistent state with a violent 
> SIGKILL. Of course, the time delay will be long enough to make a SIGKILL a 
> rare event.
> Finally, there's always the option of an exponential back-off type of 
> solution if we decide that the SIGKILL timeout is too short.





[jira] [Updated] (HADOOP-8353) hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop

2012-05-08 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8353:
-

Status: Patch Available  (was: Open)

> hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop
> -
>
> Key: HADOOP-8353
> URL: https://issues.apache.org/jira/browse/HADOOP-8353
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.0.0
>
> Attachments: HADOOP-8353.patch.txt
>
>
> The way the stop action is implemented is a simple SIGTERM sent to the JVM. 
> There's a time delay between when the action is called and when the process 
> actually exits. This can be misleading to the callers of the *-daemon.sh 
> scripts since they expect the stop action to return when the process has 
> actually stopped.
> I suggest we augment the stop action with a time-delay check for the process 
> status and a SIGKILL once the delay has expired.
> I understand that sending SIGKILL is a measure of last resort and is 
> generally frowned upon among init.d script writers, but the excuse we have 
> for Hadoop is that it is engineered to be a fault-tolerant system, so there 
> is no danger of putting the system into an inconsistent state with a violent 
> SIGKILL. Of course, the time delay will be long enough to make a SIGKILL a 
> rare event.
> Finally, there's always the option of an exponential back-off type of 
> solution if we decide that the SIGKILL timeout is too short.





[jira] [Updated] (HADOOP-8353) hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop

2012-05-08 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8353:
-

Attachment: HADOOP-8353.patch.txt

> hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop
> -
>
> Key: HADOOP-8353
> URL: https://issues.apache.org/jira/browse/HADOOP-8353
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.0.0
>
> Attachments: HADOOP-8353.patch.txt
>
>
> The way the stop action is implemented is a simple SIGTERM sent to the JVM. 
> There's a time delay between when the action is called and when the process 
> actually exits. This can be misleading to the callers of the *-daemon.sh 
> scripts since they expect the stop action to return when the process has 
> actually stopped.
> I suggest we augment the stop action with a time-delay check for the process 
> status and a SIGKILL once the delay has expired.
> I understand that sending SIGKILL is a measure of last resort and is 
> generally frowned upon among init.d script writers, but the excuse we have 
> for Hadoop is that it is engineered to be a fault-tolerant system, so there 
> is no danger of putting the system into an inconsistent state with a violent 
> SIGKILL. Of course, the time delay will be long enough to make a SIGKILL a 
> rare event.
> Finally, there's always the option of an exponential back-off type of 
> solution if we decide that the SIGKILL timeout is too short.





[jira] [Commented] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-05-07 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270072#comment-13270072
 ] 

Roman Shaposhnik commented on HADOOP-8368:
--

Sounds reasonable, a few questions though:
  # is this meant for branch-1, branch-2/trunk or both?
  # what do you have in mind for CMake/Maven, CMake/ant integration?

> Use CMake rather than autotools to build native code
> 
>
> Key: HADOOP-8368
> URL: https://issues.apache.org/jira/browse/HADOOP-8368
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>Priority: Minor
>
> It would be good to use cmake rather than autotools to build the native 
> (C/C++) code in Hadoop.
> Rationale:
> 1. automake depends on shell scripts, which often have problems running on 
> different operating systems.  It would be extremely difficult, and perhaps 
> impossible, to use autotools under Windows.  Even if it were possible, it 
> might require horrible workarounds like installing cygwin.  Even on Linux 
> variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
> the Dash shell, rather than the Bash shell as it is in other Linux versions.  
> It is currently impossible to build the native code under Ubuntu 12.04 
> because of this problem.
> CMake has robust cross-platform support, including Windows.  It does not use 
> shell scripts.
> 2. automake error messages are very confusing.  For example, "autoreconf: 
> cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
> "path" via package "Autom4te..." are common error messages.  In order to even 
> start debugging automake problems you need to learn shell, m4, sed, and the a 
> bunch of other things.  With CMake, all you have to learn is the syntax of 
> CMakeLists.txt, which is simple.
> CMake can do all the stuff autotools can, such as making sure that required 
> libraries are installed.  There is a Maven plugin for CMake as well.
> 3. Different versions of autotools can have very different behaviors.  For 
> example, the version installed under openSUSE defaults to putting libraries 
> in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
> to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
> build is currently broken when using OpenSUSE.)  This is another source of 
> build failures and complexity.  If things go wrong, you will often get an 
> error message which is incomprehensible to normal humans (see point #2).
> CMake allows you to specify the minimum_required_version of CMake that a 
> particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
> backwards compatibility between different versions.  This prevents build bugs 
> due to version skew.
> 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
> build time.
> For all these reasons, I think we should switch to CMake for compiling native 
> (C/C++) code in Hadoop.





[jira] [Created] (HADOOP-8353) hadoop-daemon.sh and yarn-daemon.sh can be misleading on stop

2012-05-03 Thread Roman Shaposhnik (JIRA)
Roman Shaposhnik created HADOOP-8353:


 Summary: hadoop-daemon.sh and yarn-daemon.sh can be misleading on 
stop
 Key: HADOOP-8353
 URL: https://issues.apache.org/jira/browse/HADOOP-8353
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 0.23.1
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik
 Fix For: 2.0.0


The way the stop action is implemented is a simple SIGTERM sent to the JVM. 
There's a time delay between when the action is called and when the process 
actually exits. This can be misleading to the callers of the *-daemon.sh 
scripts since they expect the stop action to return when the process has 
actually stopped.

I suggest we augment the stop action with a time-delay check for the process 
status and a SIGKILL once the delay has expired.

I understand that sending SIGKILL is a measure of last resort and is generally 
frowned upon among init.d script writers, but the excuse we have for Hadoop is 
that it is engineered to be a fault-tolerant system, so there is no danger of 
putting the system into an inconsistent state with a violent SIGKILL. Of 
course, the time delay will be long enough to make a SIGKILL a rare event.

Finally, there's always the option of an exponential back-off type of solution 
if we decide that the SIGKILL timeout is too short.
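The proposal translates directly into a small shell helper: send SIGTERM, poll for the process, and escalate to SIGKILL after a timeout. This is a sketch of the idea, not the committed patch; the HADOOP_STOP_TIMEOUT variable and the function name are illustrative assumptions.

```shell
#!/bin/sh
# Sketch of the proposed stop action: SIGTERM first, then poll the process
# and fall back to SIGKILL once a timeout expires. HADOOP_STOP_TIMEOUT and
# the function name are illustrative, not taken from the actual patch.
stop_daemon() {
  pid=$1
  timeout=${HADOOP_STOP_TIMEOUT:-5}     # seconds to wait before SIGKILL

  kill -TERM "$pid" 2>/dev/null
  waited=0
  while kill -0 "$pid" 2>/dev/null; do  # kill -0 only tests existence
    if [ "$waited" -ge "$timeout" ]; then
      echo "WARNING: process $pid did not stop; sending SIGKILL" >&2
      kill -KILL "$pid" 2>/dev/null
      break
    fi
    sleep 1
    waited=$((waited + 1))
  done
}
```

In a daemon script the pid would come from the pid file; since the daemon is not a child of the script, `kill -0` failing really does mean the process is gone.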





[jira] [Updated] (HADOOP-8214) make hadoop script recognize a full set of deprecated commands

2012-05-02 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8214:
-

Attachment: (was: HADOOP-8214.patch.txt)

> make hadoop script recognize a full set of deprecated commands
> --
>
> Key: HADOOP-8214
> URL: https://issues.apache.org/jira/browse/HADOOP-8214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 0.23.2
>
> Attachments: HADOOP-8214.patch.txt
>
>
> bin/hadoop launcher script does a nice job of recognizing deprecated usage 
> and vectoring users towards the proper command line tools (hdfs, mapred). It 
> would be nice if we can take care of the following deprecated commands that 
> don't get the same special treatment:
> {noformat}
>   oiv          apply the offline fsimage viewer to an fsimage
>   dfsgroups    get the groups which users belong to on the Name Node
>   mrgroups     get the groups which users belong to on the Job Tracker
>   mradmin      run a Map-Reduce admin client
>   jobtracker   run the MapReduce job Tracker node
>   tasktracker  run a MapReduce task Tracker node
> {noformat}
> Here's what I propose to do with them:
>   # oiv         -- issue DEPRECATED warning and run hdfs oiv
>   # dfsgroups   -- issue DEPRECATED warning and run hdfs groups
>   # mrgroups    -- issue DEPRECATED warning and run mapred groups
>   # mradmin     -- issue DEPRECATED warning and run yarn rmadmin
>   # jobtracker  -- issue DEPRECATED warning and do nothing
>   # tasktracker -- issue DEPRECATED warning and do nothing
> Thoughts?
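A sketch of how the proposed mapping could look inside the bin/hadoop launcher; the `deprecated_target` helper name and the exact warning text are assumptions for illustration:

```shell
#!/bin/sh
# Map a deprecated bin/hadoop sub-command to its replacement command line.
# An empty result means "warn and do nothing" (jobtracker, tasktracker).
# The helper name is illustrative; the mapping follows the proposal above.
deprecated_target() {
  case "$1" in
    oiv)         echo "hdfs oiv" ;;
    dfsgroups)   echo "hdfs groups" ;;
    mrgroups)    echo "mapred groups" ;;
    mradmin)     echo "yarn rmadmin" ;;
    jobtracker)  echo "" ;;
    tasktracker) echo "" ;;
    *)           return 1 ;;            # not a deprecated command
  esac
}

# The launcher would then dispatch roughly like this:
#   if target=$(deprecated_target "$COMMAND"); then
#     echo "DEPRECATED: '$COMMAND' is deprecated; use '$target' instead" >&2
#     [ -n "$target" ] && exec $target "$@"
#     exit 0
#   fi
```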





[jira] [Updated] (HADOOP-8214) make hadoop script recognize a full set of deprecated commands

2012-05-02 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8214:
-

Attachment: HADOOP-8214.patch.txt

> make hadoop script recognize a full set of deprecated commands
> --
>
> Key: HADOOP-8214
> URL: https://issues.apache.org/jira/browse/HADOOP-8214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 0.23.2
>
> Attachments: HADOOP-8214.patch.txt
>
>
> bin/hadoop launcher script does a nice job of recognizing deprecated usage 
> and vectoring users towards the proper command line tools (hdfs, mapred). It 
> would be nice if we can take care of the following deprecated commands that 
> don't get the same special treatment:
> {noformat}
>   oiv          apply the offline fsimage viewer to an fsimage
>   dfsgroups    get the groups which users belong to on the Name Node
>   mrgroups     get the groups which users belong to on the Job Tracker
>   mradmin      run a Map-Reduce admin client
>   jobtracker   run the MapReduce job Tracker node
>   tasktracker  run a MapReduce task Tracker node
> {noformat}
> Here's what I propose to do with them:
>   # oiv         -- issue DEPRECATED warning and run hdfs oiv
>   # dfsgroups   -- issue DEPRECATED warning and run hdfs groups
>   # mrgroups    -- issue DEPRECATED warning and run mapred groups
>   # mradmin     -- issue DEPRECATED warning and run yarn rmadmin
>   # jobtracker  -- issue DEPRECATED warning and do nothing
>   # tasktracker -- issue DEPRECATED warning and do nothing
> Thoughts?





[jira] [Updated] (HADOOP-8214) make hadoop script recognize a full set of deprecated commands

2012-05-02 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8214:
-

Attachment: HADOOP-8214.patch.txt

> make hadoop script recognize a full set of deprecated commands
> --
>
> Key: HADOOP-8214
> URL: https://issues.apache.org/jira/browse/HADOOP-8214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 0.23.2
>
> Attachments: HADOOP-8214.patch.txt
>
>
> bin/hadoop launcher script does a nice job of recognizing deprecated usage 
> and vectoring users towards the proper command line tools (hdfs, mapred). It 
> would be nice if we can take care of the following deprecated commands that 
> don't get the same special treatment:
> {noformat}
>   oiv          apply the offline fsimage viewer to an fsimage
>   dfsgroups    get the groups which users belong to on the Name Node
>   mrgroups     get the groups which users belong to on the Job Tracker
>   mradmin      run a Map-Reduce admin client
>   jobtracker   run the MapReduce job Tracker node
>   tasktracker  run a MapReduce task Tracker node
> {noformat}
> Here's what I propose to do with them:
>   # oiv         -- issue DEPRECATED warning and run hdfs oiv
>   # dfsgroups   -- issue DEPRECATED warning and run hdfs groups
>   # mrgroups    -- issue DEPRECATED warning and run mapred groups
>   # mradmin     -- issue DEPRECATED warning and run yarn rmadmin
>   # jobtracker  -- issue DEPRECATED warning and do nothing
>   # tasktracker -- issue DEPRECATED warning and do nothing
> Thoughts?





[jira] [Updated] (HADOOP-8214) make hadoop script recognize a full set of deprecated commands

2012-05-02 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8214:
-

Attachment: (was: HADOOP-8214.patch.txt)

> make hadoop script recognize a full set of deprecated commands
> --
>
> Key: HADOOP-8214
> URL: https://issues.apache.org/jira/browse/HADOOP-8214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 0.23.2
>
> Attachments: HADOOP-8214.patch.txt
>
>
> bin/hadoop launcher script does a nice job of recognizing deprecated usage 
> and vectoring users towards the proper command line tools (hdfs, mapred). It 
> would be nice if we can take care of the following deprecated commands that 
> don't get the same special treatment:
> {noformat}
>   oiv          apply the offline fsimage viewer to an fsimage
>   dfsgroups    get the groups which users belong to on the Name Node
>   mrgroups     get the groups which users belong to on the Job Tracker
>   mradmin      run a Map-Reduce admin client
>   jobtracker   run the MapReduce job Tracker node
>   tasktracker  run a MapReduce task Tracker node
> {noformat}
> Here's what I propose to do with them:
>   # oiv         -- issue DEPRECATED warning and run hdfs oiv
>   # dfsgroups   -- issue DEPRECATED warning and run hdfs groups
>   # mrgroups    -- issue DEPRECATED warning and run mapred groups
>   # mradmin     -- issue DEPRECATED warning and run yarn rmadmin
>   # jobtracker  -- issue DEPRECATED warning and do nothing
>   # tasktracker -- issue DEPRECATED warning and do nothing
> Thoughts?





[jira] [Updated] (HADOOP-8214) make hadoop script recognize a full set of deprecated commands

2012-05-02 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8214:
-

Status: Patch Available  (was: Open)

> make hadoop script recognize a full set of deprecated commands
> --
>
> Key: HADOOP-8214
> URL: https://issues.apache.org/jira/browse/HADOOP-8214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 0.23.2
>
> Attachments: HADOOP-8214.patch.txt
>
>
> bin/hadoop launcher script does a nice job of recognizing deprecated usage 
> and vectoring users towards the proper command line tools (hdfs, mapred). It 
> would be nice if we can take care of the following deprecated commands that 
> don't get the same special treatment:
> {noformat}
>   oiv          apply the offline fsimage viewer to an fsimage
>   dfsgroups    get the groups which users belong to on the Name Node
>   mrgroups     get the groups which users belong to on the Job Tracker
>   mradmin      run a Map-Reduce admin client
>   jobtracker   run the MapReduce job Tracker node
>   tasktracker  run a MapReduce task Tracker node
> {noformat}
> Here's what I propose to do with them:
>   # oiv         -- issue DEPRECATED warning and run hdfs oiv
>   # dfsgroups   -- issue DEPRECATED warning and run hdfs groups
>   # mrgroups    -- issue DEPRECATED warning and run mapred groups
>   # mradmin     -- issue DEPRECATED warning and run yarn rmadmin
>   # jobtracker  -- issue DEPRECATED warning and do nothing
>   # tasktracker -- issue DEPRECATED warning and do nothing
> Thoughts?





[jira] [Updated] (HADOOP-8214) make hadoop script recognize a full set of deprecated commands

2012-05-02 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8214:
-

Attachment: HADOOP-8214.patch.txt

> make hadoop script recognize a full set of deprecated commands
> --
>
> Key: HADOOP-8214
> URL: https://issues.apache.org/jira/browse/HADOOP-8214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 0.23.2
>
> Attachments: HADOOP-8214.patch.txt
>
>
> bin/hadoop launcher script does a nice job of recognizing deprecated usage 
> and vectoring users towards the proper command line tools (hdfs, mapred). It 
> would be nice if we can take care of the following deprecated commands that 
> don't get the same special treatment:
> {noformat}
>   oiv          apply the offline fsimage viewer to an fsimage
>   dfsgroups    get the groups which users belong to on the Name Node
>   mrgroups     get the groups which users belong to on the Job Tracker
>   mradmin      run a Map-Reduce admin client
>   jobtracker   run the MapReduce job Tracker node
>   tasktracker  run a MapReduce task Tracker node
> {noformat}
> Here's what I propose to do with them:
>   # oiv         -- issue DEPRECATED warning and run hdfs oiv
>   # dfsgroups   -- issue DEPRECATED warning and run hdfs groups
>   # mrgroups    -- issue DEPRECATED warning and run mapred groups
>   # mradmin     -- issue DEPRECATED warning and run yarn rmadmin
>   # jobtracker  -- issue DEPRECATED warning and do nothing
>   # tasktracker -- issue DEPRECATED warning and do nothing
> Thoughts?





[jira] [Commented] (HADOOP-8332) make default container-executor.conf.dir be a path relative to the container-executor binary

2012-05-01 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13266219#comment-13266219
 ] 

Roman Shaposhnik commented on HADOOP-8332:
--

@Robert, I understand your concerns and that's why:
  # I would like this change to be vetted by as large a community as possible
  # my current patch leaves an option of sticking with the absolute path for 
those who really need that

> make default container-executor.conf.dir be a path relative to the 
> container-executor binary
> 
>
> Key: HADOOP-8332
> URL: https://issues.apache.org/jira/browse/HADOOP-8332
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.0.0
>
> Attachments: HADOOP-8332.patch.txt
>
>
> Currently, the container-executor binary has an absolute pathname of its 
> configuration file baked in. This prevents an easy relocation of the 
> configuration files when dealing with multiple Hadoop installs on the same 
> node. It would be nice to at least allow for a relative path resolution 
> starting from the location of the container-executor binary itself. Something 
> like
> {noformat}
> ../etc/hadoop/
> {noformat}
> Thoughts?





[jira] [Updated] (HADOOP-8332) make default container-executor.conf.dir be a path relative to the container-executor binary

2012-05-01 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8332:
-

Attachment: HADOOP-8332.patch.txt

> make default container-executor.conf.dir be a path relative to the 
> container-executor binary
> 
>
> Key: HADOOP-8332
> URL: https://issues.apache.org/jira/browse/HADOOP-8332
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.0.0
>
> Attachments: HADOOP-8332.patch.txt
>
>
> Currently, the container-executor binary has an absolute pathname of its 
> configuration file baked in. This prevents an easy relocation of the 
> configuration files when dealing with multiple Hadoop installs on the same 
> node. It would be nice to at least allow for a relative path resolution 
> starting from the location of the container-executor binary itself. Something 
> like
> {noformat}
> ../etc/hadoop/
> {noformat}
> Thoughts?





[jira] [Updated] (HADOOP-8332) make default container-executor.conf.dir be a path relative to the container-executor binary

2012-05-01 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8332:
-

Attachment: (was: HADOOP-8332.patch.txt)

> make default container-executor.conf.dir be a path relative to the 
> container-executor binary
> 
>
> Key: HADOOP-8332
> URL: https://issues.apache.org/jira/browse/HADOOP-8332
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.0.0
>
> Attachments: HADOOP-8332.patch.txt
>
>
> Currently, the container-executor binary has an absolute pathname of its 
> configuration file baked in. This prevents an easy relocation of the 
> configuration files when dealing with multiple Hadoop installs on the same 
> node. It would be nice to at least allow for a relative path resolution 
> starting from the location of the container-executor binary itself. Something 
> like
> {noformat}
> ../etc/hadoop/
> {noformat}
> Thoughts?





[jira] [Updated] (HADOOP-8332) make default container-executor.conf.dir be a path relative to the container-executor binary

2012-05-01 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-8332:
-

Attachment: HADOOP-8332.patch.txt

This patch adds an option of specifying container-executor.conf.dir as a path 
relative to the location of the container-executor executable itself.

> make default container-executor.conf.dir be a path relative to the 
> container-executor binary
> 
>
> Key: HADOOP-8332
> URL: https://issues.apache.org/jira/browse/HADOOP-8332
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.0.0
>
> Attachments: HADOOP-8332.patch.txt
>
>
> Currently, the container-executor binary has an absolute pathname of its 
> configuration file baked in. This prevents an easy relocation of the 
> configuration files when dealing with multiple Hadoop installs on the same 
> node. It would be nice to at least allow for a relative path resolution 
> starting from the location of the container-executor binary itself. Something 
> like
> {noformat}
> ../etc/hadoop/
> {noformat}
> Thoughts?





[jira] [Assigned] (HADOOP-8332) make default container-executor.conf.dir be a path relative to the container-executor binary

2012-04-30 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik reassigned HADOOP-8332:


Assignee: Roman Shaposhnik

> make default container-executor.conf.dir be a path relative to the 
> container-executor binary
> 
>
> Key: HADOOP-8332
> URL: https://issues.apache.org/jira/browse/HADOOP-8332
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.23.1
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 2.0.0
>
>
> Currently, the container-executor binary has an absolute pathname of its 
> configuration file baked in. This prevents an easy relocation of the 
> configuration files when dealing with multiple Hadoop installs on the same 
> node. It would be nice to at least allow for a relative path resolution 
> starting from the location of the container-executor binary itself. Something 
> like
> {noformat}
> ../etc/hadoop/
> {noformat}
> Thoughts?





[jira] [Created] (HADOOP-8332) make default container-executor.conf.dir be a path relative to the container-executor binary

2012-04-30 Thread Roman Shaposhnik (JIRA)
Roman Shaposhnik created HADOOP-8332:


 Summary: make default container-executor.conf.dir be a path 
relative to the container-executor binary
 Key: HADOOP-8332
 URL: https://issues.apache.org/jira/browse/HADOOP-8332
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 0.23.1
Reporter: Roman Shaposhnik
 Fix For: 2.0.0


Currently, container-executor binary has an absolute pathname of its 
configuration file baked in. This prevents an easy relocation of the 
configuration files when dealing with multiple Hadoop installs on the same 
node. It would be nice to at least allow for a relative path resolution 
starting from the location of the container-executor binary itself. Something 
like
{noformat}
../etc/hadoop/
{noformat}

Thoughts?





[jira] [Commented] (HADOOP-8307) The task-controller is not packaged in the tarball

2012-04-25 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13262395#comment-13262395
 ] 

Roman Shaposhnik commented on HADOOP-8307:
--

Can we please get this trivial change in, so I can pull this RC into Bigtop?

> The task-controller is not packaged in the tarball
> --
>
> Key: HADOOP-8307
> URL: https://issues.apache.org/jira/browse/HADOOP-8307
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.3
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: hadoop-8307.patch
>
>
> Ant in some situations, puts artifacts such as task-controller into the 
> build/hadoop-*/ directory before the "package" target deletes it to start 
> over.





[jira] [Commented] (HADOOP-7596) Enable jsvc to work with Hadoop RPM package

2011-08-30 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13094248#comment-13094248
 ] 

Roman Shaposhnik commented on HADOOP-7596:
--

I thought we'd agreed over at HDFS-2289 to recompile jsvc as part of the build 
process. Is this JIRA somehow different?

> Enable jsvc to work with Hadoop RPM package
> ---
>
> Key: HADOOP-7596
> URL: https://issues.apache.org/jira/browse/HADOOP-7596
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.20.204.0
> Environment: Java 6, RedHat EL 5.6
>Reporter: Eric Yang
>Assignee: Eric Yang
> Fix For: 0.20.205.0
>
>
> For secure Hadoop 0.20.2xx cluster, datanode can only run with 32 bit jvm 
> because Hadoop only packages 32 bit jsvc.  The build process should download 
> proper jsvc versions base on the build architecture.  In addition, the shell 
> script should be enhanced to locate hadoop jar files in the proper location.





[jira] [Commented] (HADOOP-7206) Integrate Snappy compression

2011-06-21 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13052867#comment-13052867
 ] 

Roman Shaposhnik commented on HADOOP-7206:
--

I'd be curious to find out whether the lack of Solaris binaries under 
/org/xerial/snappy/native/ bothers anybody: 
http://maven.xerial.org/repository/artifact/org/xerial/snappy/snappy-java/1.0.3-rc3/snappy-java-1.0.3-rc3.jar

> Integrate Snappy compression
> 
>
> Key: HADOOP-7206
> URL: https://issues.apache.org/jira/browse/HADOOP-7206
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 0.21.0
>Reporter: Eli Collins
>Assignee: T Jake Luciani
> Fix For: 0.23.0
>
> Attachments: HADOOP-7206-002.patch, HADOOP-7206.patch, 
> v2-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v3-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v4-HADOOP-7206-snappy-codec-using-snappy-java.txt, 
> v5-HADOOP-7206-snappy-codec-using-snappy-java.txt
>
>
> Google release Zippy as an open source (APLv2) project called Snappy 
> (http://code.google.com/p/snappy). This tracks integrating it into Hadoop.
> {quote}
> Snappy is a compression/decompression library. It does not aim for maximum 
> compression, or compatibility with any other compression library; instead, it 
> aims for very high speeds and reasonable compression. For instance, compared 
> to the fastest mode of zlib, Snappy is an order of magnitude faster for most 
> inputs, but the resulting compressed files are anywhere from 20% to 100% 
> bigger. On a single core of a Core i7 processor in 64-bit mode, Snappy 
> compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec 
> or more.
> {quote}





[jira] [Commented] (HADOOP-7157) Create build hosts configurations to prevent degradation of patch validation and snapshot build environment

2011-05-04 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13029090#comment-13029090
 ] 

Roman Shaposhnik commented on HADOOP-7157:
--

In general -- a strong +1 on the approach. Hardware tends to go down, get 
decommissioned, and be tinkered with, so having a reliable way to re-deploy a 
particular build/test environment is extremely important. Puppet fits the bill 
perfectly.

Now, a couple of points on Puppet usage:
   1. Not sure you need precise version strings for the kinds of packages that 
are specified (and not all of them are specified, actually).
   2. Minor nit: you can simplify the implementation by using the array syntax 
for resource instantiation.
   3. What's your plan for applying the proposed recipe? I think it would be 
nice to have a cron job that checks it out from the Apache repo and applies it.

> Create build hosts configurations to prevent degradation of patch validation 
> and snapshot build environment
> ---
>
> Key: HADOOP-7157
> URL: https://issues.apache.org/jira/browse/HADOOP-7157
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
> Environment: Apache Hadoop build hosts
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HADOOP-7157.patch
>
>
> Build hosts are being re-jumped, in need for configuration updates, etc. It 
> is hard to maintain the same configuration across a 10+ hosts. A specialized 
> service such as Puppet can be used to maintain build machines software and 
> configurations at a desired level.
> Such configs should be checked into SCM system along with the of sources code.



[jira] Commented: (HADOOP-7134) configure files that are generated as part of the released tarball need to have executable bit set

2011-02-11 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12993669#comment-12993669
 ] 

Roman Shaposhnik commented on HADOOP-7134:
--

bq. -1 tests included. The patch doesn't appear to include any new or modified 
tests.

This is a build change -- hence no new tests

> configure files that are generated as part of the released tarball need to 
> have executable bit set
> --
>
> Key: HADOOP-7134
> URL: https://issues.apache.org/jira/browse/HADOOP-7134
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Attachments: HADOOP-7134.patch
>
>
> Currently the configure files that are packaged in a tarball are -rw-rw-r--





[jira] Updated: (HADOOP-7134) configure files that are generated as part of the released tarball need to have executable bit set

2011-02-10 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-7134:
-

Status: Patch Available  (was: Open)

> configure files that are generated as part of the released tarball need to 
> have executable bit set
> --
>
> Key: HADOOP-7134
> URL: https://issues.apache.org/jira/browse/HADOOP-7134
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Attachments: HADOOP-7134.patch
>
>
> Currently the configure files that are packaged in a tarball are -rw-rw-r--





[jira] Updated: (HADOOP-7134) configure files that are generated as part of the released tarball need to have executable bit set

2011-02-09 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-7134:
-

Attachment: HADOOP-7134.patch

> configure files that are generated as part of the released tarball need to 
> have executable bit set
> --
>
> Key: HADOOP-7134
> URL: https://issues.apache.org/jira/browse/HADOOP-7134
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Attachments: HADOOP-7134.patch
>
>
> Currently the configure files that are packaged in a tarball are -rw-rw-r--





[jira] Commented: (HADOOP-7134) configure files that are generated as part of the released tarball need to have executable bit set

2011-02-09 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12992800#comment-12992800
 ] 

Roman Shaposhnik commented on HADOOP-7134:
--

Patch attached

> configure files that are generated as part of the released tarball need to 
> have executable bit set
> --
>
> Key: HADOOP-7134
> URL: https://issues.apache.org/jira/browse/HADOOP-7134
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Attachments: HADOOP-7134.patch
>
>
> Currently the configure files that are packaged in a tarball are -rw-rw-r--





[jira] Created: (HADOOP-7134) configure files that are generated as part of the released tarball need to have executable bit set

2011-02-09 Thread Roman Shaposhnik (JIRA)
configure files that are generated as part of the released tarball need to have 
executable bit set
--

 Key: HADOOP-7134
 URL: https://issues.apache.org/jira/browse/HADOOP-7134
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Roman Shaposhnik
Assignee: Roman Shaposhnik


Currently the configure files that are packaged in a tarball are -rw-rw-r--





[jira] Commented: (HADOOP-6436) Remove auto-generated native build files

2011-01-28 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12988264#action_12988264
 ] 

Roman Shaposhnik commented on HADOOP-6436:
--

This has been tested via the corresponding MAPREDUCE change (MAPREDUCE-2260) 
on a 64-bit Linux box.

> Remove auto-generated native build files 
> -
>
> Key: HADOOP-6436
> URL: https://issues.apache.org/jira/browse/HADOOP-6436
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Eli Collins
>Assignee: Roman Shaposhnik
> Attachments: 6436.patch
>
>
> The repo currently includes the automake and autoconf generated files for the 
> native build. Per discussion on HADOOP-6421 let's remove them and use the 
> host's automake and autoconf. We should also do this for libhdfs and 
> fuse-dfs. 




[jira] Assigned: (HADOOP-6436) Remove auto-generated native build files

2011-01-17 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik reassigned HADOOP-6436:


Assignee: Roman Shaposhnik

> Remove auto-generated native build files 
> -
>
> Key: HADOOP-6436
> URL: https://issues.apache.org/jira/browse/HADOOP-6436
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Eli Collins
>Assignee: Roman Shaposhnik
> Attachments: 6436.patch
>
>
> The repo currently includes the automake and autoconf generated files for the 
> native build. Per discussion on HADOOP-6421 let's remove them and use the 
> host's automake and autoconf. We should also do this for libhdfs and 
> fuse-dfs. 




[jira] Updated: (HADOOP-6436) Remove auto-generated native build files

2011-01-17 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-6436:
-

Attachment: 6436.patch

> Remove auto-generated native build files 
> -
>
> Key: HADOOP-6436
> URL: https://issues.apache.org/jira/browse/HADOOP-6436
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Eli Collins
> Attachments: 6436.patch
>
>
> The repo currently includes the automake and autoconf generated files for the 
> native build. Per discussion on HADOOP-6421 let's remove them and use the 
> host's automake and autoconf. We should also do this for libhdfs and 
> fuse-dfs. 




[jira] Updated: (HADOOP-6436) Remove auto-generated native build files

2011-01-17 Thread Roman Shaposhnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Shaposhnik updated HADOOP-6436:
-

Status: Patch Available  (was: Open)

I'm attaching a first cut at the patch. Please let me know what you think 
about the approach taken.

Notes:

1. The package target is modified because we want those files to end up in the 
tarball for a release. As Todd noted on the original issue, the files should 
still be bundled for platforms that lack autotools.

2. The build (at least the release build) now depends on the entire autoconf 
toolchain.

> Remove auto-generated native build files 
> -
>
> Key: HADOOP-6436
> URL: https://issues.apache.org/jira/browse/HADOOP-6436
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Eli Collins
>
> The repo currently includes the automake and autoconf generated files for the 
> native build. Per discussion on HADOOP-6421 let's remove them and use the 
> host's automake and autoconf. We should also do this for libhdfs and 
> fuse-dfs. 




[jira] Commented: (HADOOP-6255) Create an rpm integration project

2010-06-18 Thread Roman Shaposhnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12880359#action_12880359
 ] 

Roman Shaposhnik commented on HADOOP-6255:
--

I have two points to add to the discussion:

1. I'm wondering whether it would be useful to slice it a bit more thinly. In 
other words, introducing the notion of these extra top-level targets available 
for packaging:
   hadoop-core
   hadoop-client
   hadoop-daemon
   hadoop-devel
   hadoop-javadoc

2. As for configs, I'd like to point out the example that Debian has 
established with their packaging of .20. Basically they created one package per 
node type (http://packages.qa.debian.org/h/hadoop.html) plus one package common 
among all the daemons:
   hadoop-daemons-common
   hadoop-jobtrackerd
   hadoop-tasktrackerd
   hadoop-datanoded
   hadoop-namenoded
   hadoop-secondarynamenoded

The packages themselves are pretty slim -- containing only the hooks to make 
the daemons plug into the service management system (init.d in Debian's case, 
but one would imagine Solaris/SMF or anything like that also being an option 
for us). I also tend to believe that these could be reasonable packages to use 
for splitting the configs appropriately.

> Create an rpm integration project
> -
>
> Key: HADOOP-6255
> URL: https://issues.apache.org/jira/browse/HADOOP-6255
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Owen O'Malley
>
> We should be able to create RPMs for Hadoop releases.
