[jira] [Commented] (HADOOP-7481) Wire AOP test in Mavenized Hadoop common

2012-03-19 Thread Konstantin Boudnik (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13233215#comment-13233215
 ] 

Konstantin Boudnik commented on HADOOP-7481:


no worries: it seems this took me longer than I expected :) Thanks.

> Wire AOP test in Mavenized Hadoop common
> 
>
> Key: HADOOP-7481
> URL: https://issues.apache.org/jira/browse/HADOOP-7481
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Alejandro Abdelnur
>Assignee: Konstantin Boudnik
>
> We need to add a Maven profile that activates the AOP injection and runs the 
> necessary AOP-ed tests. I believe there should be a Maven plugin for doing 
> that.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7481) Wire AOP test in Mavenized Hadoop common

2012-03-19 Thread Konstantin Boudnik (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13233196#comment-13233196
 ] 

Konstantin Boudnik commented on HADOOP-7481:


Alejandro, if you aren't working on this then please reassign it to me - I 
should have some spare time for this project.

> Wire AOP test in Mavenized Hadoop common
> 
>
> Key: HADOOP-7481
> URL: https://issues.apache.org/jira/browse/HADOOP-7481
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>
> We need to add a Maven profile that activates the AOP injection and runs the 
> necessary AOP-ed tests. I believe there should be a Maven plugin for doing 
> that.





[jira] [Commented] (HADOOP-7730) Allow TestCLI to be run against a cluster

2012-03-19 Thread Konstantin Boudnik (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13233184#comment-13233184
 ] 

Konstantin Boudnik commented on HADOOP-7730:


Do we need to apply it to 1.0.2 and resolve?

> Allow TestCLI to be run against a cluster
> -
>
> Key: HADOOP-7730
> URL: https://issues.apache.org/jira/browse/HADOOP-7730
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.20.205.0, 0.22.0
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.22.1
>
> Attachments: HADOOP-7730.patch, HADOOP-7730.trunk.patch, 
> HADOOP-7730.trunk.patch
>
>
> Use the same CLI test to test cluster bits (see HDFS-1762 for more info)





[jira] [Commented] (HADOOP-7939) Improve Hadoop subcomponent integration in Hadoop 0.23

2011-12-28 Thread Konstantin Boudnik (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176979#comment-13176979
 ] 

Konstantin Boudnik commented on HADOOP-7939:


bq. There is a high risk of overflowing the command buffer
Ah, finally a material argument! Thanks Allen - that makes sense.

> Improve Hadoop subcomponent integration in Hadoop 0.23
> --
>
> Key: HADOOP-7939
> URL: https://issues.apache.org/jira/browse/HADOOP-7939
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, conf, documentation, scripts
>Affects Versions: 0.23.0
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 0.23.1
>
>
> h1. Introduction
> For the rest of this proposal it is assumed that the current set
> of Hadoop subcomponents is:
>  * hadoop-common
>  * hadoop-hdfs
>  * hadoop-yarn
>  * hadoop-mapreduce
> It must be noted that this is an open ended list, though. For example,
> implementations of additional frameworks on top of yarn (e.g. MPI) would
> also be considered a subcomponent.
> h1. Problem statement
> Currently there's an unfortunate coupling and hard-coding present at the
> level of launcher scripts, configuration scripts and Java implementation
> code that prevents us from treating all subcomponents of Hadoop independently
> of each other. In a lot of places it is assumed that bits and pieces
> from individual subcomponents *must* be located at predefined places
> and cannot be dynamically registered/discovered at runtime.
> This prevents a truly flexible deployment of Hadoop 0.23. 
> h1. Proposal
> NOTE: this is NOT a proposal for redefining the layout from HADOOP-6255. 
> The goal here is to keep as much of that layout in place as possible,
> while permitting different deployment layouts.
> The aim of this proposal is to introduce the needed level of indirection and
> flexibility in order to accommodate the currently assumed layout of Hadoop
> tarball deployments, as well as all other styles of deployment. To this end the
> following set of environment variables needs to be uniformly used in all of
> the subcomponents' launcher scripts, configuration scripts and Java code
> (<COMPONENT> stands for the literal name of a subcomponent). These variables are
> expected to be defined by <component>-env.sh scripts, and sourcing those files is
> expected to have the desired effect of setting the environment up correctly.
>   # HADOOP_<COMPONENT>_HOME
>   ## root of the subtree in a filesystem where a subcomponent is expected to be installed
>   ## default value: $0/..
>   # HADOOP_<COMPONENT>_JARS
>   ## a subdirectory with all of the jar files comprising the subcomponent's implementation
>   ## default value: $(HADOOP_<COMPONENT>_HOME)/share/hadoop/$(<component>)
>   # HADOOP_<COMPONENT>_EXT_JARS
>   ## a subdirectory with all of the jar files needed for extended functionality of the subcomponent (nonessential for correct work of the basic functionality)
>   ## default value: $(HADOOP_<COMPONENT>_HOME)/share/hadoop/$(<component>)/ext
>   # HADOOP_<COMPONENT>_NATIVE_LIBS
>   ## a subdirectory with all the native libraries that the component requires
>   ## default value: $(HADOOP_<COMPONENT>_HOME)/share/hadoop/$(<component>)/native
>   # HADOOP_<COMPONENT>_BIN
>   ## a subdirectory with all of the launcher scripts specific to the client side of the component
>   ## default value: $(HADOOP_<COMPONENT>_HOME)/bin
>   # HADOOP_<COMPONENT>_SBIN
>   ## a subdirectory with all of the launcher scripts specific to the server/system side of the component
>   ## default value: $(HADOOP_<COMPONENT>_HOME)/sbin
>   # HADOOP_<COMPONENT>_LIBEXEC
>   ## a subdirectory with all of the launcher scripts that are internal to the implementation and should *not* be invoked directly
>   ## default value: $(HADOOP_<COMPONENT>_HOME)/libexec
>   # HADOOP_<COMPONENT>_CONF
>   ## a subdirectory containing configuration files for a subcomponent
>   ## default value: $(HADOOP_<COMPONENT>_HOME)/conf
>   # HADOOP_<COMPONENT>_DATA
>   ## a subtree in the local filesystem for storing the component's persistent state
>   ## default value: $(HADOOP_<COMPONENT>_HOME)/data
>   # HADOOP_<COMPONENT>_LOG
>   ## a subdirectory for the subcomponent's log files
>   ## default value: $(HADOOP_<COMPONENT>_HOME)/log
>   # HADOOP_<COMPONENT>_RUN
>   ## a subdirectory with runtime-system-specific information
>   ## default value: $(HADOOP_<COMPONENT>_HOME)/run
>   # HADOOP_<COMPONENT>_TMP
>   ## a subdirectory with temporary files
>   ## default value: $(HADOOP_<COMPONENT>_HOME)/tmp
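The defaulting scheme above can be sketched as a tiny sourced env script. Everything below is illustrative rather than the proposal's actual implementation: the file name hadoop-hdfs-env.sh and the choice of hdfs as the component are assumptions, and only a few of the proposed variables are shown.

```shell
# Hypothetical sketch of a <component>-env.sh for the hdfs component,
# meant to be sourced by launcher scripts. Each variable keeps any value
# already set by the deployment and otherwise falls back to the proposed
# default, rooted at the component's HOME.
HADOOP_HDFS_HOME="${HADOOP_HDFS_HOME:-$(cd "$(dirname "$0")/.." && pwd)}"
HADOOP_HDFS_JARS="${HADOOP_HDFS_JARS:-$HADOOP_HDFS_HOME/share/hadoop/hdfs}"
HADOOP_HDFS_EXT_JARS="${HADOOP_HDFS_EXT_JARS:-$HADOOP_HDFS_HOME/share/hadoop/hdfs/ext}"
HADOOP_HDFS_CONF="${HADOOP_HDFS_CONF:-$HADOOP_HDFS_HOME/conf}"
HADOOP_HDFS_LOG="${HADOOP_HDFS_LOG:-$HADOOP_HDFS_HOME/log}"
export HADOOP_HDFS_HOME HADOOP_HDFS_JARS HADOOP_HDFS_EXT_JARS \
       HADOOP_HDFS_CONF HADOOP_HDFS_LOG
```

Because every assignment uses `${VAR:-default}`, a packager can relocate any single directory (say, logs to /var/log) by presetting that one variable before sourcing, without disturbing the rest of the layout.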





[jira] [Commented] (HADOOP-7939) Improve Hadoop subcomponent integration in Hadoop 0.23

2011-12-28 Thread Konstantin Boudnik (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176895#comment-13176895
 ] 

Konstantin Boudnik commented on HADOOP-7939:


Eric,

I fail to see any solutions being identified and accepted here ;) I think 
you're taking a leap of faith.

> If one wants to add complexity to the projects, the burden of proof lies with 
> you.
It works both ways, doesn't it? ;)

There is the original proposal and a counter-proposal with symlinks. I don't 
see a benefit in the latter, and there's no argument to support it. 
Unfortunately, comments of the sort 
> Absent something new added to the discussion, I don't see this as productive.
don't add any productivity or technical merit to the discussion.

> Improve Hadoop subcomponent integration in Hadoop 0.23
> --
>
> Key: HADOOP-7939
> URL: https://issues.apache.org/jira/browse/HADOOP-7939
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, conf, documentation, scripts
>Affects Versions: 0.23.0
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 0.23.1
>
>


[jira] [Commented] (HADOOP-7939) Improve Hadoop subcomponent integration in Hadoop 0.23

2011-12-28 Thread Konstantin Boudnik (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13176875#comment-13176875
 ] 

Konstantin Boudnik commented on HADOOP-7939:


I guess I share the confusion here: why are symlinks any better than a well 
defined, documented set of variables? So far none of the comments above has 
pointed out the benefits of the former over the latter. I would love to hear 
them, if they exist.

> Improve Hadoop subcomponent integration in Hadoop 0.23
> --
>
> Key: HADOOP-7939
> URL: https://issues.apache.org/jira/browse/HADOOP-7939
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, conf, documentation, scripts
>Affects Versions: 0.23.0
>Reporter: Roman Shaposhnik
>Assignee: Roman Shaposhnik
> Fix For: 0.23.1
>
>





[jira] [Commented] (HADOOP-6795) FsShell 'hadoop fs -text' does not work with other file systems

2011-12-16 Thread Konstantin Boudnik (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13171414#comment-13171414
 ] 

Konstantin Boudnik commented on HADOOP-6795:


The patch seems to be made against a non-20.2 branch. Please submit a correct one.

> FsShell 'hadoop fs -text' does not work with other file systems 
> -
>
> Key: HADOOP-6795
> URL: https://issues.apache.org/jira/browse/HADOOP-6795
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.2
>Reporter: Shunsuke Mikami
>Priority: Minor
>  Labels: patch
> Attachments: hadoop-6795.patch
>
>
> FsShell 'hadoop fs -text' can only work with the file system set by 
> fs.default.name.
> I use the Gfarm file system from Hadoop.
> https://gfarm.svn.sourceforge.net/svnroot/gfarm/gfarm_hadoop/trunk/
> If I set fs.default.name to hdfs, the error "Wrong FS" occurs when I run 
> 'hadoop fs -text' on a file on the Gfarm file system:
>  $ hadoop fs -text gfarmfs:///home/mikami/random/part-0
>  text: Wrong FS: gfarmfs://null/home/mikami/random/part-0, expected: 
> hdfs://hostname:9000
> If I set fs.default.name to gfarmfs:///, I get the correct result.
> This command's result shouldn't depend on fs.default.name.
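The gist of the report is that a tool should pick its filesystem handler from the path's URI scheme rather than assuming the single configured default. That dispatch can be sketched in plain shell; the function name and the scheme-to-handler mapping below are purely illustrative, not FsShell's actual logic:

```shell
# Illustrative sketch only: choose a filesystem handler by inspecting the
# scheme of the argument, falling back to the configured default (the
# fs.default.name analogue) only for scheme-less relative paths.
fs_for_path() {
  case "$1" in
    hdfs://*)    echo "hdfs" ;;    # explicit HDFS URI
    gfarmfs://*) echo "gfarm" ;;   # explicit Gfarm URI
    file://*|/*) echo "local" ;;   # local filesystem paths
    *)           echo "default" ;; # no scheme: defer to the default FS
  esac
}
```

With scheme-based dispatch, `fs_for_path gfarmfs:///home/mikami/random/part-0` resolves to the Gfarm handler regardless of what the default filesystem is configured to be, which is the behavior the report asks for.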





[jira] [Commented] (HADOOP-7730) Allow TestCLI to be run against a cluster

2011-10-31 Thread Konstantin Boudnik (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13140612#comment-13140612
 ] 

Konstantin Boudnik commented on HADOOP-7730:


There's nothing to port, really, I suspect: the original patch should work (or 
mostly work). It is up to the next RM to include this fix or not.

> Allow TestCLI to be run against a cluster
> -
>
> Key: HADOOP-7730
> URL: https://issues.apache.org/jira/browse/HADOOP-7730
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.20.205.0, 0.22.0
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: HADOOP-7730.patch, HADOOP-7730.trunk.patch, 
> HADOOP-7730.trunk.patch
>
>
> Use the same CLI test to test cluster bits (see HDFS-1762 for more info)
