[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-12 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Attachment: (was: HADOOP-6332.patch)

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.patch, HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to deploy, start & stop 
> clusters, bring down tasktrackers, datanodes, entire racks of both, etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children
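
The description above already outlines the architecture. To make items 1 and 2 
concrete, here is a minimal, purely illustrative sketch of what such a junit-based 
system test could look like; the {{RemoteCluster}} interface and its methods are 
hypothetical placeholders for the proposed cluster-control utilities, not the actual 
Herriot API, and the daemon-state query would only exist in aspectj-woven debug 
deployments.

{code}
// Purely illustrative sketch; RemoteCluster is a hypothetical stand-in for the
// proposed cluster-control utilities, not the actual Herriot classes.
import java.util.List;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class TestDataNodeRestart {

  /** Hypothetical handle to a real, already-deployed cluster (item 1 above). */
  interface RemoteCluster {
    List<String> getDataNodes();
    void stopDaemon(String host) throws Exception;
    void startDaemon(String host) throws Exception;
    /** Item 2: daemon state exposed only in aspectj-woven debug deployments. */
    boolean isDataNodeLive(String host) throws Exception;
  }

  private RemoteCluster cluster;  // would be handed in by the framework's setup code

  @Test
  public void testClusterSurvivesDataNodeRestart() throws Exception {
    String dn = cluster.getDataNodes().get(0);
    cluster.stopDaemon(dn);   // bring one datanode down...
    cluster.startDaemon(dn);  // ...and back up again
    assertTrue(cluster.isDataNodeLive(dn));
  }
}
{code}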

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-12 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Attachment: HADOOP-6332.patch

This patch adds the Herriot sources to the source.jar file, removes a dependency 
on JUnit v3, and fixes some JavaDoc issues. A couple of import optimizations are 
also included.
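
For readers less familiar with the JUnit v3 to v4 move mentioned above, a generic 
before/after sketch (not taken from this patch) looks roughly like this:

{code}
// JUnit v3 style (old): tests extend junit.framework.TestCase and rely on
// method-name conventions.
//
//   public class TestSomething extends junit.framework.TestCase {
//     public void testAddition() { assertEquals(4, 2 + 2); }
//   }

// JUnit v4 style (new): no TestCase superclass, annotations instead.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class TestSomething {
  @Test
  public void testAddition() {
    assertEquals(4, 2 + 2);
  }
}
{code}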

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.patch, HADOOP-6332.patch, 
> HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to deploy, start & stop 
> clusters, bring down tasktrackers, datanodes, entire racks of both, etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-12 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Status: Open  (was: Patch Available)

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.patch, HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to deploy, start & stop 
> clusters, bring down tasktrackers, datanodes, entire racks of both, etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6752) Remote cluster control functionality needs JavaDocs improvement

2010-05-12 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12866652#action_12866652
 ] 

Konstantin Boudnik commented on HADOOP-6752:


Ok, I think I am confused. This JIRA is about adding JavaDocs to the public 
APIs of the code which is part of HADOOP-6332's patches. The issue Vinay has 
found is in that code, and it needs to be tracked separately from this particular 
JIRA: we can't mix different issues in the same patch.

> Remote cluster control functionality needs JavaDocs improvement
> ---
>
> Key: HADOOP-6752
> URL: https://issues.apache.org/jira/browse/HADOOP-6752
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Boudnik
>Assignee: Balaji Rajagopalan
> Attachments: hadoop-6572.patch
>
>
> Herriot has a remote cluster control API. The functionality works fairly well; 
> however, JavaDocs are missing here and there. This has to be fixed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6752) Remote cluster control functionality needs JavaDocs improvement

2010-05-12 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12866649#action_12866649
 ] 

Konstantin Boudnik commented on HADOOP-6752:


Vinay, I think this comment belongs to a different JIRA, the one where the exception 
filtering has been done. If such a JIRA doesn't exist (somehow I can't find it 
right now; was it committed anywhere yet?), Balaji should know where this 
functionality was initially implemented, and the comment (along with the patch 
modification) clearly belongs there.

Also, in your comment above you're altering {{filePattern}}, which I believe 
contains the list of files to be grepped.
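
For context, here is a hedged sketch (not the Herriot implementation) of how a 
{{filePattern}} that selects log files and an {{exceptionList}} of ignorable 
exceptions would typically interact when grepping logs; all names are illustrative.

{code}
// Illustration of the intended semantics only -- not the actual Herriot code.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class LogGrepSketch {
  /** Returns exception lines from files matching filePattern, skipping lines
      that match one of the known/benign patterns in exceptionList. */
  public static List<String> findUnexpectedExceptions(
      Path logDir, String filePattern, List<String> exceptionList) throws IOException {
    List<String> hits = new ArrayList<String>();
    try (DirectoryStream<Path> logs = Files.newDirectoryStream(logDir, filePattern)) {
      for (Path log : logs) {
        for (String line : Files.readAllLines(log, StandardCharsets.UTF_8)) {
          if (!line.contains("Exception")) continue;
          boolean ignorable = false;
          for (String ignored : exceptionList) {
            if (line.contains(ignored)) { ignorable = true; break; }
          }
          if (!ignorable) hits.add(log + ": " + line);
        }
      }
    }
    return hits;
  }
}
{code}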

> Remote cluster control functionality needs JavaDocs improvement
> ---
>
> Key: HADOOP-6752
> URL: https://issues.apache.org/jira/browse/HADOOP-6752
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Boudnik
>Assignee: Balaji Rajagopalan
> Attachments: hadoop-6572.patch
>
>
> Herriot has a remote cluster control API. The functionality works fairly well; 
> however, JavaDocs are missing here and there. This has to be fixed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6752) Remote cluster control functionality needs JavaDocs improvement

2010-05-12 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6752:
---

Summary: Remote cluster control functionality needs JavaDocs improvement  
(was: Remote cluster control functionality needs JavaDocs improvement; 
exceptionList doesn't work properly)

Reverting: the bug is unrelated to this JIRA :(

> Remote cluster control functionality needs JavaDocs improvement
> ---
>
> Key: HADOOP-6752
> URL: https://issues.apache.org/jira/browse/HADOOP-6752
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Boudnik
>Assignee: Balaji Rajagopalan
> Attachments: hadoop-6572.patch
>
>
> Herriot has a remote cluster control API. The functionality works fairly well; 
> however, JavaDocs are missing here and there. This has to be fixed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6752) Remote cluster control functionality needs JavaDocs improvement; exceptionList doesn't work properly

2010-05-12 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6752:
---

Summary: Remote cluster control functionality needs JavaDocs improvement; 
exceptionList doesn't work properly  (was: Remote cluster control functionality 
needs some JavaDocs improvement)

I am changing the description of the JIRA because an issue in the core 
functionality was found.

> Remote cluster control functionality needs JavaDocs improvement; 
> exceptionList doesn't work properly
> 
>
> Key: HADOOP-6752
> URL: https://issues.apache.org/jira/browse/HADOOP-6752
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Boudnik
>Assignee: Balaji Rajagopalan
> Attachments: hadoop-6572.patch
>
>
> Herriot has a remote cluster control API. The functionality works fairly well; 
> however, JavaDocs are missing here and there. This has to be fixed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6760) WebServer shouldn't increase port number in case of negative port setting caused by Jetty's race

2010-05-11 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12866373#action_12866373
 ] 

Konstantin Boudnik commented on HADOOP-6760:


Yes, Eli. This seems to be a valid simplification. We are seeing quite a bunch 
of -1 ports on our production clusters, and the workaround for HADOOP-6386 was 
trying to address it. I guess it has done a pretty good job; however, increasing 
the port was wrong. 

Two workarounds exist for a purpose, actually. The first one, HADOOP-4744, is about 
getting a negative port as the result of the initial {{getLocalPort()}} call. 
However, what we are seeing sometimes is that {{getLocalPort()}} can return a 
positive number, and then when you try to bind to it you get an 
{{IllegalArgumentException}} because the port is actually negative. This is 
apparently caused by some crazy race in Jetty. Hence workaround #2, which 
verifies that the allocated port is actually positive and, if it isn't, 
engages all that voodoo...

So, I believe your simplification won't address the second issue... Please 
correct me if I'm wrong.
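
To spell out the intent of the fix here, a hedged sketch of the retry logic (not the 
actual HttpServer/Jetty code; the {{Listener}} interface below is a hypothetical 
stand-in) would be:

{code}
// Sketch only. The point: when the connector reports a bogus (negative) port
// because of the Jetty race, retry the bind on the SAME configured port
// instead of incrementing it.
public class SamePortRetrySketch {

  /** Hypothetical stand-in for opening a Jetty listener and reading its port. */
  interface Listener {
    void open(int port) throws Exception;
    int getLocalPort();   // may transiently report a negative value due to the race
    void close() throws Exception;
  }

  static void bindWithRetries(Listener listener, int configuredPort, int maxRetries)
      throws Exception {
    for (int attempt = 0; attempt < maxRetries; attempt++) {
      listener.open(configuredPort);
      if (listener.getLocalPort() > 0) {
        return;             // healthy bind, keep it
      }
      listener.close();     // bogus negative port: close and retry the SAME port
    }
    throw new Exception("Could not get a sane port for " + configuredPort);
  }
}
{code}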

> WebServer shouldn't increase port number in case of negative port setting 
> caused by Jetty's race
> 
>
> Key: HADOOP-6760
> URL: https://issues.apache.org/jira/browse/HADOOP-6760
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.3
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HADOOP-6760.0.20.patch, HADOOP-6760.patch
>
>
> When a negative port is assigned to a webserver socket (because of a race 
> inside of the Jetty server), the workaround from HADOOP-6386 is to increase the 
> original port number on the next bind attempt. Apparently, this is incorrect 
> logic, and the next bind attempt should happen on the same port number 
> if possible.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6760) WebServer shouldn't increase port number in case of negative port setting caused by Jetty's race

2010-05-11 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6760:
---

Status: Patch Available  (was: Open)

The fix is simple and ready for verification.

> WebServer shouldn't increase port number in case of negative port setting 
> caused by Jetty's race
> 
>
> Key: HADOOP-6760
> URL: https://issues.apache.org/jira/browse/HADOOP-6760
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.3
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HADOOP-6760.0.20.patch, HADOOP-6760.patch
>
>
> When a negative port is assigned to a webserver socket (because of a race 
> inside of the Jetty server), the workaround from HADOOP-6386 is to increase the 
> original port number on the next bind attempt. Apparently, this is incorrect 
> logic, and the next bind attempt should happen on the same port number 
> if possible.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6760) WebServer shouldn't increase port number in case of negative port setting caused by Jetty's race

2010-05-11 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6760:
---

Attachment: HADOOP-6760.0.20.patch

Same for the 0.20 source tree.

> WebServer shouldn't increase port number in case of negative port setting 
> caused by Jetty's race
> 
>
> Key: HADOOP-6760
> URL: https://issues.apache.org/jira/browse/HADOOP-6760
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.3
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HADOOP-6760.0.20.patch, HADOOP-6760.patch
>
>
> When a negative port is assigned to a webserver socket (because of a race 
> inside of the Jetty server), the workaround from HADOOP-6386 is to increase the 
> original port number on the next bind attempt. Apparently, this is incorrect 
> logic, and the next bind attempt should happen on the same port number 
> if possible.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6760) WebServer shouldn't increase port number in case of negative port setting caused by Jetty's race

2010-05-11 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6760:
---

Attachment: HADOOP-6760.patch

> WebServer shouldn't increase port number in case of negative port setting 
> caused by Jetty's race
> 
>
> Key: HADOOP-6760
> URL: https://issues.apache.org/jira/browse/HADOOP-6760
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.3
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HADOOP-6760.patch
>
>
> When a negative port is assigned to a webserver socket (because of a race 
> inside of the Jetty server), the workaround from HADOOP-6386 is to increase the 
> original port number on the next bind attempt. Apparently, this is incorrect 
> logic, and the next bind attempt should happen on the same port number 
> if possible.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HADOOP-6760) WebServer shouldn't increase port number in case of negative port setting caused by Jetty's race

2010-05-11 Thread Konstantin Boudnik (JIRA)
WebServer shouldn't increase port number in case of negative port setting 
caused by Jetty's race


 Key: HADOOP-6760
 URL: https://issues.apache.org/jira/browse/HADOOP-6760
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.3
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik


When a negative port is assigned to a webserver socket (because of a race 
inside of the Jetty server), the workaround from HADOOP-6386 is to increase the 
original port number on the next bind attempt. Apparently, this is incorrect 
logic, and the next bind attempt should happen on the same port number if possible.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-10 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Status: Patch Available  (was: Open)

Verification for the patch with {{mvn:install}} support

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.patch, HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to deploy, start & stop 
> clusters, bring down tasktrackers, datanodes, entire racks of both, etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-10 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Attachment: HADOOP-6332.0.22.patch

This patch also adds a capability to mvn-install the Herriot artifacts locally with 
the id {{hadoop-core-system}}. Now they can be pulled with the internal resolver into 
the HDFS and MR subprojects.

Clearly, the Maven deployment will have to be added at some point.

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.patch, HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to deploy, start & stop 
> clusters, bring down tasktrackers, datanodes, entire racks of both, etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-10 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Status: Open  (was: Patch Available)

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.patch, HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to deploy, start & stop 
> clusters, bring down tasktrackers, datanodes, entire racks of both, etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-10 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Status: Patch Available  (was: Open)

Run verification one more time.

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.patch, HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to deploy, start & stop 
> clusters, bring down tasktrackers, datanodes, entire racks of both, etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-10 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Attachment: HADOOP-6332.0.22.patch

Addressing comments. {{jar-test-system}} is removed from the build. 

Some additional investigation shows that in the current 0.20 implementation the 
Herriot build also ships only the existing functional tests. This clearly needs to 
be fixed for 0.20 and trunk. But we don't need to target Common's trunk for this, 
because there are no system tests just for the Common component.

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.patch, HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to deploy, start & stop 
> clusters, bring down tasktrackers, datanodes, entire racks of both, etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-10 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Status: Open  (was: Patch Available)

Need to rerun the verification

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.patch, 
> HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to deploy, start & stop 
> clusters, bring down tasktrackers, datanodes, entire racks of both, etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-10 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12865837#action_12865837
 ] 

Konstantin Boudnik commented on HADOOP-6332:


Actually, I'm wrong about having a problem in the original {{jar-test-system}} 
implementation. Looks like in the trunk the {{jar-test}} is implemented 
slightly differently, which causes this effect. Hmm... 

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.patch, 
> HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to deploy, start & stop 
> clusters, bring down tasktrackers, datanodes, entire racks of both, etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-10 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12865836#action_12865836
 ] 

Konstantin Boudnik commented on HADOOP-6332:


bq. system-test.xml need not go in common
While I mostly agree that {{system-test.xml}} shouldn't be in common (a 
config file in common shouldn't have any knowledge about upstream 
dependencies), I am reluctant to split it. The problem with the split, as I see 
it, is that both copies of the file in HDFS and MR will mostly contain the same 
information with some minor differences. However, since exposing upstream 
dependencies is worse, I will make the split and post a new patch shortly.
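
As an aside on the {{system-test.xml}} point: a minimal sketch (with a hypothetical 
key name) of how either subproject's copy of the file would be consumed, which is 
also why the two copies would mostly duplicate each other:

{code}
// Sketch only: the key name is a hypothetical example, but Configuration.addResource()
// is the usual way such a file would be loaded by test code.
import org.apache.hadoop.conf.Configuration;

public class SystemTestConfSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Each subproject (HDFS, MR) would ship its own copy of this file, and most
    // keys -- cluster host lists, install paths, etc. -- would be identical.
    conf.addResource("system-test.xml");
    String proxyUser = conf.get("test.system.proxy.user", "hadoop");  // hypothetical key
    System.out.println("proxy user for system tests: " + proxyUser);
  }
}
{code}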

bq.  jar-test-system ant target
Thanks for catching this one. Looks like we have the same problem in the original 
implementation and it has been missed. Will fix it.

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.patch, 
> HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to deploy, start & stop 
> clusters, bring down tasktrackers, datanodes, entire racks of both, etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-07 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12865392#action_12865392
 ] 

Konstantin Boudnik commented on HADOOP-6332:


The audit warning is about the absence of the Apache License boilerplate in the 
tests list file. I don't think it is possible to have it there. Besides, similar 
files in HDFS and MR don't have it. Let's punt on this.

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.patch, 
> HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to deploy, start & stop 
> clusters, bring down tasktrackers, datanodes, entire racks of both, etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-07 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Status: Patch Available  (was: Open)

Issues found by test-patch are fixed. Resubmitting.

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.patch, 
> HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to deploy, start & stop 
> clusters, bring down tasktrackers, datanodes, entire racks of both, etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-07 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Attachment: HADOOP-6332.0.22.patch

Addressing the audit warning: missing Apache license boilerplate.

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.patch, 
> HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to deploy, start & stop 
> clusters, bring down tasktrackers, datanodes, entire racks of both, etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-07 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Attachment: HADOOP-6332.0.22.patch

Missing tests list file is added.

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.patch, HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to deploy, start & stop 
> clusters, bring down tasktrackers, datanodes, entire racks of both, etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-07 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Status: Open  (was: Patch Available)

The patch missed a file

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.patch, HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to deploy, start & stop 
> clusters, bring down tasktrackers, datanodes, entire racks of both, etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-07 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

   Status: Patch Available  (was: Open)
Affects Version/s: 0.22.0
Fix Version/s: 0.22.0
   (was: 0.21.0)

Patch seems to be ready for verification.

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.22.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.patch, HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to deploy, start & stop 
> clusters, bring down tasktrackers, datanodes, entire racks of both, etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-07 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Attachment: HADOOP-6332.0.22.patch

Herriot artifacts are being produced as expected. 
Pushing them to Maven will be needed later on.

This patch is ready to be used as a base for HDFS and MR forward patches of 
Herriot.

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, 
> HADOOP-6332.patch, HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to bring up to deploy, start & 
> stop clusters, bring down tasktrackers, datanodes, entire racks of both etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and it's children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-07 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Attachment: HADOOP-6332.0.22.patch

In this version of the patch all of the old build functionality works as before.
Herriot artifacts aren't produced yet, but that seems to be a pretty minor fix.

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.0.22.patch, HADOOP-6332.patch, 
> HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to bring up to deploy, start & 
> stop clusters, bring down tasktrackers, datanodes, entire racks of both etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and it's children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-06 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Attachment: HADOOP-6332.0.22.patch

Very first draft of the forward patch for Common's trunk. It works through all four 
patches posted earlier for yahoo-0.20. 

Right now the build is passing. However, core tests are broken and no Herriot 
artifacts are being created. I will be fixing these bugs in the next couple of 
days.

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.0.22.patch, HADOOP-6332.patch, HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to bring up to deploy, start & 
> stop clusters, bring down tasktrackers, datanodes, entire racks of both etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and it's children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-06 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik reassigned HADOOP-6332:
--

Assignee: Konstantin Boudnik  (was: Sharad Agarwal)

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Reporter: Arun C Murthy
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, HADOOP-6332.patch, 
> HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to bring up to deploy, start & 
> stop clusters, bring down tasktrackers, datanodes, entire racks of both etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and it's children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6752) Remote cluster control functionality needs some JavaDocs improvement

2010-05-06 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6752:
---

Attachment: hadoop-6572.patch

Initial patch sent in by Balaji

> Remote cluster control functionality needs some JavaDocs improvement
> 
>
> Key: HADOOP-6752
> URL: https://issues.apache.org/jira/browse/HADOOP-6752
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Boudnik
>Assignee: Balaji Rajagopalan
> Attachments: hadoop-6572.patch
>
>
> Herriot has a remote cluster control API. The functionality works fairly well; 
> however, JavaDocs are missing here and there. This has to be fixed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (HADOOP-6752) Remote cluster control functionality needs some JavaDocs improvement

2010-05-06 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik reassigned HADOOP-6752:
--

Assignee: Balaji Rajagopalan

> Remote cluster control functionality needs some JavaDocs improvement
> 
>
> Key: HADOOP-6752
> URL: https://issues.apache.org/jira/browse/HADOOP-6752
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 0.22.0
>Reporter: Konstantin Boudnik
>Assignee: Balaji Rajagopalan
>
> Herriot has a remote cluster control API. The functionality works fairly well; 
> however, JavaDocs are missing here and there. This has to be fixed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HADOOP-6752) Remote cluster control functionality needs some JavaDocs improvement

2010-05-06 Thread Konstantin Boudnik (JIRA)
Remote cluster control functionality needs some JavaDocs improvement


 Key: HADOOP-6752
 URL: https://issues.apache.org/jira/browse/HADOOP-6752
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: test
Affects Versions: 0.22.0
Reporter: Konstantin Boudnik


Herriot has a remote cluster control API. The functionality works fairly well; 
however, JavaDocs are missing here and there. This has to be fixed.
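As an illustration of the level of detail expected, here is a sketch of a 
hypothetical method (not taken from the Herriot API) documented the way the 
remote cluster control calls should be:

{noformat}
import java.io.IOException;

public class RemoteClusterControlExample {
  /**
   * Restarts the Hadoop daemon running on the given remote host and waits
   * until it rejoins the cluster.
   *
   * @param host host name of the node the daemon runs on
   * @throws IOException if the remote command fails or the daemon does not
   *         come back within the configured timeout
   */
  public void restartDaemon(String host) throws IOException {
    // Illustrative stub only; the real Herriot implementation differs.
    throw new UnsupportedOperationException("illustrative stub only");
  }
}
{noformat}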

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-05 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Attachment: 6332-phase2.fix2.patch

Using {{$(something)}} screws up our XML processing :( Has to be fixed. 
This patch goes on top of 6332-phase2.fix1.patch. It is not to be committed here, 
for it will be done as part of the forward port patch later.

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Reporter: Arun C Murthy
>Assignee: Sharad Agarwal
> Fix For: 0.21.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.fix2.patch, 
> 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, HADOOP-6332.patch, 
> HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to bring up to deploy, start & 
> stop clusters, bring down tasktrackers, datanodes, entire racks of both etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and it's children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-04 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Attachment: 6332-phase2.fix1.patch

In a secured environment a client should make a privileged RPC call to access 
a FileSystem instance from the NN. Hence the fix.

This patch has to be applied on top of 6332-phase2.patch. Not for inclusion 
here. 
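For illustration only, here is a minimal sketch of what such a privileged call 
could look like, assuming the UserGroupInformation API from the security branch; 
the class and method names below are not part of the patch:

{noformat}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class PrivilegedFsAccess {
  // Runs FileSystem.get() inside a doAs() block so the RPC to the NameNode
  // is made with the logged-in user's credentials.
  public static FileSystem getFs(final Configuration conf) throws Exception {
    UserGroupInformation ugi = UserGroupInformation.getLoginUser();
    return ugi.doAs(new PrivilegedExceptionAction<FileSystem>() {
      public FileSystem run() throws Exception {
        return FileSystem.get(conf);
      }
    });
  }
}
{noformat}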

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Reporter: Arun C Murthy
>Assignee: Sharad Agarwal
> Fix For: 0.21.0
>
> Attachments: 6332-phase2.fix1.patch, 6332-phase2.patch, 6332.patch, 
> 6332.patch, 6332.patch, 6332_v1.patch, 6332_v2.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332-MR.patch, HADOOP-6332.patch, HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to bring up to deploy, start & 
> stop clusters, bring down tasktrackers, datanodes, entire racks of both etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and it's children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-05-03 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Attachment: 6332-phase2.patch

This is the second portion of the main Herriot functionality, including some of the 
tests already linked to this JIRA.

This patch isn't for commit to the Apache 0.20 branch, but serves as reference 
material for the coming forward port to the trunk (0.22). During the forward port 
process the tests (about seven of them) from this patch will be taken out and 
eventually replaced with the patches attached to the linked JIRAs.

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Reporter: Arun C Murthy
>Assignee: Sharad Agarwal
> Fix For: 0.21.0
>
> Attachments: 6332-phase2.patch, 6332.patch, 6332.patch, 6332.patch, 
> 6332_v1.patch, 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, 
> HADOOP-6332.patch, HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to bring up to deploy, start & 
> stop clusters, bring down tasktrackers, datanodes, entire racks of both etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and it's children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6735) Remove fault injection compilation from default ant compilation and ant test-core.

2010-04-29 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862381#action_12862381
 ] 

Konstantin Boudnik commented on HADOOP-6735:


Fault injection tests are a valuable part of Hadoop testing. E.g., they allowed 
us to find many problems in HDFS' append feature. Yes, the compilation time 
increases slightly, but compared to the runtime of all the tests it adds barely a 
percent or less.

Having an optional flag to switch some testing on/off is almost a guarantee 
that this testing will never be run by anyone.

Besides, FI tests are only part of the 'test-core' target. You can avoid running 
them by starting 'ant run-test-core' instead of 'ant test'.

> Remove fault injection compilation from default ant compilation and ant 
> test-core.
> --
>
> Key: HADOOP-6735
> URL: https://issues.apache.org/jira/browse/HADOOP-6735
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ravi Phulari
>
> Compiling fault-injection code while running ant tests increases test time by 
> considerable amount of time.  It would be great if by default fi code is not 
> compiled every time ant tests are run.
> We should add flag to run fault injection code on demand.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6733) Create a test for FileSystem API compatibility between releases

2010-04-29 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862310#action_12862310
 ] 

Konstantin Boudnik commented on HADOOP-6733:


I believe the effort needs to go beyond the FileSystem API.

> Create a test for FileSystem API compatibility between releases
> ---
>
> Key: HADOOP-6733
> URL: https://issues.apache.org/jira/browse/HADOOP-6733
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Tom White
>Priority: Blocker
> Fix For: 0.21.0
>
>
> We should have an automated test for checking that programs written against 
> an old version of the FileSystem API still run with a newer version. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6526) Need mapping from long principal names to local OS user names

2010-04-28 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12862041#action_12862041
 ] 

Konstantin Boudnik commented on HADOOP-6526:


[The 
patch|https://issues.apache.org/jira/secure/attachment/12442917/3595485.patch] 
is done on top of the last {{HADOOP-6526-y20.4.patch}} and isn't for commit.

> Need mapping from long principal names to local OS user names
> -
>
> Key: HADOOP-6526
> URL: https://issues.apache.org/jira/browse/HADOOP-6526
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: 3595485.patch, c-6526-y20.patch, c-6526-y20.patch, 
> c-6526.patch, HADOOP-6526-y20.2.patch, HADOOP-6526-y20.4.patch
>
>
> We need a configurable mapping from full user names (eg. omal...@apache.org) 
> to local user names (eg. omalley). For many organizations it is sufficient to 
> just use the prefix, however, in the case of shared clusters there may be 
> duplicated prefixes. A configurable mapping will let administrators resolve 
> the issue.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6725) Evaluate HtmlUnit for unit and regression testing webpages

2010-04-28 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12861857#action_12861857
 ] 

Konstantin Boudnik commented on HADOOP-6725:


I meant 'using the HtmlUnit framework'.

> Evaluate HtmlUnit for unit and regression testing webpages
> --
>
> Key: HADOOP-6725
> URL: https://issues.apache.org/jira/browse/HADOOP-6725
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Jakob Homan
>Priority: Minor
>
> HtmlUnit (http://htmlunit.sourceforge.net/) looks like it may be a good tool 
> to help unit testing and evaluating our various webpages throughout the 
> project.  Currently this is done only occasionally in the code (usually falls 
> to being a manual test during release cycles), and when it is done, usually 
> the code to parse the webpage, etc. is re-written each time.  The framework 
> is Apache licensed, so including it won't be an issue.  If it's found to be 
> useful, new JIRAs for HDFS and MR should be opened.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6725) Evaluate HtmlUnit for unit and regression testing webpages

2010-04-28 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12861856#action_12861856
 ] 

Konstantin Boudnik commented on HADOOP-6725:


As soon as HADOOP-6332 is in the trunk (I really hope to complete the forward 
port work before the end of May) we can start using the Herriot framework to 
perform such tests.

> Evaluate HtmlUnit for unit and regression testing webpages
> --
>
> Key: HADOOP-6725
> URL: https://issues.apache.org/jira/browse/HADOOP-6725
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Jakob Homan
>Priority: Minor
>
> HtmlUnit (http://htmlunit.sourceforge.net/) looks like it may be a good tool 
> to help unit testing and evaluating our various webpages throughout the 
> project.  Currently this is done only occasionally in the code (usually falls 
> to being a manual test during release cycles), and when it is done, usually 
> the code to parse the webpage, etc. is re-written each time.  The framework 
> is Apache licensed, so including it won't be an issue.  If it's found to be 
> useful, new JIRAs for HDFS and MR should be opened.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6526) Need mapping from long principal names to local OS user names

2010-04-26 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6526:
---

Attachment: 3595485.patch

Another issue is that the setting affects _all_ tests. This is especially bad 
for tests which run against an actual cluster but from the source workspace, 
i.e. Herriot tests. This setting forces the default realm to be set to APACHE.ORG, 
which is nonsensical in environments with different realm names.

A better way is to set this property directly in the functional tests requiring 
this config file, so other tests aren't affected (a sketch follows below).

Setting it for the whole build is a dirty hack to work around the problem; we 
shouldn't be modifying the whole build just because a couple of tests require a 
custom config file.
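A minimal sketch of that suggestion, assuming JUnit 4 and an illustrative config 
path, showing how a functional test could set the property for itself instead of 
the whole build:

{noformat}
import org.junit.AfterClass;
import org.junit.BeforeClass;

public abstract class SecureLoginTestBase {
  private static String oldKrb5Conf;

  // Point the JVM at the test-only Kerberos config for these tests only.
  @BeforeClass
  public static void useTestKrb5Conf() {
    oldKrb5Conf = System.getProperty("java.security.krb5.conf");
    System.setProperty("java.security.krb5.conf", "src/test/krb5.conf");
  }

  // Restore whatever was configured before, so other tests are unaffected.
  @AfterClass
  public static void restoreKrb5Conf() {
    if (oldKrb5Conf == null) {
      System.clearProperty("java.security.krb5.conf");
    } else {
      System.setProperty("java.security.krb5.conf", oldKrb5Conf);
    }
  }
}
{noformat}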



> Need mapping from long principal names to local OS user names
> -
>
> Key: HADOOP-6526
> URL: https://issues.apache.org/jira/browse/HADOOP-6526
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: 3595485.patch, c-6526-y20.patch, c-6526-y20.patch, 
> c-6526.patch, HADOOP-6526-y20.2.patch, HADOOP-6526-y20.4.patch
>
>
> We need a configurable mapping from full user names (eg. omal...@apache.org) 
> to local user names (eg. omalley). For many organizations it is sufficient to 
> just use the prefix, however, in the case of shared clusters there may be 
> duplicated prefixes. A configurable mapping will let administrators resolve 
> the issue.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6526) Need mapping from long principal names to local OS user names

2010-04-26 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12861202#action_12861202
 ] 

Konstantin Boudnik commented on HADOOP-6526:


The latest patch introduces {{src/test/krb5.conf}}, which is needed by only a 
couple of tests. The use of this configuration file for those tests is enabled by 
the property java.security.krb5.conf. Kerberos has a bug in the implementation of 
the logic around this property (see 
http://bugs.sun.com/view_bug.do?bug_id=6857795).

This badly affects any tests running under the ant environment (i.e. Herriot 
tests, HADOOP-6332) and, on the other hand, isn't sufficient for the Eclipse 
environment.


> Need mapping from long principal names to local OS user names
> -
>
> Key: HADOOP-6526
> URL: https://issues.apache.org/jira/browse/HADOOP-6526
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Attachments: c-6526-y20.patch, c-6526-y20.patch, c-6526.patch, 
> HADOOP-6526-y20.2.patch, HADOOP-6526-y20.4.patch
>
>
> We need a configurable mapping from full user names (eg. omal...@apache.org) 
> to local user names (eg. omalley). For many organizations it is sufficient to 
> just use the prefix, however, in the case of shared clusters there may be 
> duplicated prefixes. A configurable mapping will let administrators resolve 
> the issue.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6725) Evaluate HtmlUnit for unit and regression testing webpages

2010-04-26 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12861040#action_12861040
 ] 

Konstantin Boudnik commented on HADOOP-6725:


Seems like what we need is [JSFUnit|http://www.jboss.org/jsfunit/], since our 
front-end UI is auto-generated by JSP. JSFUnit runs on top of the HtmlUnit 
framework, though.
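For reference, a minimal sketch of the kind of page check HtmlUnit makes easy, 
assuming the HtmlUnit API; the URL and expected text are illustrative 
assumptions, not an agreed design:

{noformat}
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class WebUiSmokeTest {
  public static void main(String[] args) throws Exception {
    WebClient client = new WebClient();
    try {
      // Fetch a JSP-generated status page and assert on its rendered text.
      HtmlPage page = client.getPage("http://localhost:50070/dfshealth.jsp");
      if (!page.asText().contains("NameNode")) {
        throw new AssertionError("Unexpected content on the status page");
      }
    } finally {
      client.closeAllWindows();
    }
  }
}
{noformat}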

> Evaluate HtmlUnit for unit and regression testing webpages
> --
>
> Key: HADOOP-6725
> URL: https://issues.apache.org/jira/browse/HADOOP-6725
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Jakob Homan
>Priority: Minor
>
> HtmlUnit (http://htmlunit.sourceforge.net/) looks like it may be a good tool 
> to help unit testing and evaluating our various webpages throughout the 
> project.  Currently this is done only occasionally in the code (usually falls 
> to being a manual test during release cycles), and when it is done, usually 
> the code to parse the webpage, etc. is re-written each time.  The framework 
> is Apache licensed, so including it won't be an issue.  If it's found to be 
> useful, new JIRAs for HDFS and MR should be opened.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6716) System won't start in non-secure mode when kerb5.conf (edu.mit.kerberos on Mac) is not present

2010-04-21 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12859616#action_12859616
 ] 

Konstantin Boudnik commented on HADOOP-6716:


+1, the patch looks good. One small nit:
{noformat}
+throw new IllegalArgumentException("Can't get Kerberos configuration",ke);
{noformat}
put a whitespace after the comma. However, it might be considered a formatting 
change from the previous state of the code, so it's up to you.

> System won't start in non-secure mode when kerb5.conf (edu.mit.kerberos on 
> Mac) is not present
> --
>
> Key: HADOOP-6716
> URL: https://issues.apache.org/jira/browse/HADOOP-6716
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Boris Shkolnik
>Assignee: Boris Shkolnik
> Attachments: HADOOP-6716-BP20-3.patch
>
>


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6716) System won't start in non-secure mode when kerb5.conf (edu.mit.kerberos on Mac) is not present

2010-04-20 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12859182#action_12859182
 ] 

Konstantin Boudnik commented on HADOOP-6716:


I believe it is caused by the fact that Kerberos needs a config file to merely 
instantiate its {{Config}} class. Very unfortunate.

> System won't start in non-secure mode when kerb5.conf (edu.mit.kerberos on 
> Mac) is not present
> --
>
> Key: HADOOP-6716
> URL: https://issues.apache.org/jira/browse/HADOOP-6716
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Boris Shkolnik
>


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6701) Incorrect exit codes for "dfs -chown", "dfs -chgrp"

2010-04-12 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12856188#action_12856188
 ] 

Konstantin Boudnik commented on HADOOP-6701:


Looks like the patch is incomplete because it misses the case with {{chmod}}. The 
other two seem fine to me. 
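The expected behaviour, per the issue description below, is a non-zero exit code 
on failure. A minimal sketch of such a check, assuming the FsShell API (the 
command and path are illustrative):

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FsShell;

public class ChmodExitCodeCheck {
  public static void main(String[] args) throws Exception {
    FsShell shell = new FsShell(new Configuration());
    // chmod on a path that does not exist should fail with a non-zero code.
    int rc = shell.run(new String[] {"-chmod", "755", "/DOESNTEXIST"});
    if (rc == 0) {
      throw new AssertionError("a failed chmod must not return exit code 0");
    }
  }
}
{noformat}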

>  Incorrect exit codes for "dfs -chown", "dfs -chgrp"
> 
>
> Key: HADOOP-6701
> URL: https://issues.apache.org/jira/browse/HADOOP-6701
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.19.1, 0.20.0, 0.20.1, 0.20.2
>Reporter: Ravi Phulari
>Assignee: Ravi Phulari
>Priority: Minor
> Fix For: 0.20.3, 0.21.0, 0.22.0
>
> Attachments: HADOOP-6701-trunk.patch, HADOOP-6701.patch
>
>
> r...@localhost:~$ hadoop dfs -chgrp abcd /; echo $?
> chgrp: changing ownership of
> 'hdfs://localhost/':org.apache.hadoop.security.AccessControlException: 
> Permission denied
> 0
> r...@localhost:~$ hadoop dfs -chown  abcd /; echo $?
> chown: changing ownership of
> 'hdfs://localhost/':org.apache.hadoop.security.AccessControlException: 
> Permission denied
> 0
> r...@localhost:~$ hadoop dfs -chmod 755 /DOESNTEXIST; echo $?
> chmod: could not get status for '/DOESNTEXIST': File does not exist: 
> /DOESNTEXIST
> 0
> -
> Exit codes for both of the above invocations should be non-zero to indicate 
> that the command failed.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
https://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] Updated: (HADOOP-6666) Introduce common logging mechanism to mark begin and end of test cases execution

2010-03-30 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-:
---

Environment: Requires at least JUnit v. 4.8.1

> Introduce common logging mechanism to mark begin and end of test cases 
> execution
> 
>
> Key: HADOOP-
> URL: https://issues.apache.org/jira/browse/HADOOP-
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
> Environment: Requires at least JUnit v. 4.8.1
>Reporter: Konstantin Boudnik
>
> It is pretty hard to diagnose a test problem (especially in Hudson) when all 
> you have is a very long log file for all your tests output in one place.
> ZOOKEEPER-724 seems to have a nice solution for this problem.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HADOOP-6666) Introduce common logging mechanism to mark begin and end of test cases execution

2010-03-30 Thread Konstantin Boudnik (JIRA)
Introduce common logging mechanism to mark begin and end of test cases execution


 Key: HADOOP-
 URL: https://issues.apache.org/jira/browse/HADOOP-
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Konstantin Boudnik


It is pretty hard to diagnose a test problem (especially in Hudson) when all 
you have is a very long log file for all your tests output in one place.

ZOOKEEPER-724 seems to have a nice solution for this problem.
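One possible shape for this, as a sketch only and assuming JUnit 4.8.1's 
{{TestWatchman}} rule (similar in spirit to the ZOOKEEPER-724 approach); the 
class name and log format are assumptions:

{noformat}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.junit.Rule;
import org.junit.rules.MethodRule;
import org.junit.rules.TestWatchman;
import org.junit.runners.model.FrameworkMethod;

public abstract class LoggedTestCase {
  private static final Log LOG = LogFactory.getLog(LoggedTestCase.class);

  // Logs a clear marker at the start and end of every test method, so a
  // single long Hudson log can be split per test case.
  @Rule
  public MethodRule watchman = new TestWatchman() {
    @Override
    public void starting(FrameworkMethod method) {
      LOG.info("=== TEST STARTING: " + method.getName() + " ===");
    }
    @Override
    public void finished(FrameworkMethod method) {
      LOG.info("=== TEST FINISHED: " + method.getName() + " ===");
    }
  };
}
{noformat}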

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HADOOP-6655) SLA related changes in hadoop-policy.xml have misleading property descriptions

2010-03-23 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik resolved HADOOP-6655.


Resolution: Invalid

I'm proposing to close this JIRA because I've misread the text in the config 
file. Never mind.

> SLA related changes in hadoop-policy.xml have misleading property descriptions
> --
>
> Key: HADOOP-6655
> URL: https://issues.apache.org/jira/browse/HADOOP-6655
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.2
>Reporter: Konstantin Boudnik
>
> In the patch introduced by HADOOP-4348, the proposed modifications of 
> {{hadoop-policy.xml}} read on more than one occasion:
> {noformat}
> +The ACL is a comma-separated list of user and group names. The user and 
> +group list is separated by a blank. For e.g. "alice,bob users,wheel". 
> {noformat}
> It should either read "separated by a semicolon" or the given example has 
> to be changed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HADOOP-6655) SLA related changes in hadoop-policy.xml have misleading property descriptions

2010-03-22 Thread Konstantin Boudnik (JIRA)
SLA related changes in hadoop-policy.xml have misleading property descriptions
--

 Key: HADOOP-6655
 URL: https://issues.apache.org/jira/browse/HADOOP-6655
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.2
Reporter: Konstantin Boudnik


In the patch introduced by HADOOP-4348, the proposed modifications of 
{{hadoop-policy.xml}} read on more than one occasion:
{noformat}
+The ACL is a comma-separated list of user and group names. The user and 
+group list is separated by a blank. For e.g. "alice,bob users,wheel". 
{noformat}

It should either read "separated by a semicolon" or the given example has to 
be changed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6566) Hadoop daemons should not start up if the ownership/permissions on the directories used at runtime are misconfigured

2010-03-17 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12846708#action_12846708
 ] 

Konstantin Boudnik commented on HADOOP-6566:


+1 on the patch. Looks good.

> Hadoop daemons should not start up if the ownership/permissions on the 
> directories used at runtime are misconfigured
> 
>
> Key: HADOOP-6566
> URL: https://issues.apache.org/jira/browse/HADOOP-6566
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.22.0
>Reporter: Devaraj Das
>Assignee: Arun C Murthy
> Fix For: 0.22.0
>
> Attachments: hadoop-6566-trunk-v1.patch, hadoop-6566-trunk-v2.patch, 
> hadoop-6566-trunk-v3.patch, hadoop-6566-trunk-v4.patch, 
> hadoop-6566-y20s-d1.patch, HADOOP-6566_yhadoop20.patch, 
> HADOOP-6566_yhadoop20.patch, HADOOP-6566_yhadoop20.patch, 
> HADOOP-6566_yhadoop20.patch
>
>
> The Hadoop daemons (like datanode, namenode) should refuse to start up if the 
> ownership/permissions on directories they use at runtime are misconfigured or 
> they are not as expected. For example, the local directory where the 
> filesystem image is stored should be owned by the user running the namenode 
> process and should be only readable by that user. We can provide this feature 
> in common and HDFS and MapReduce can use the same.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6566) Hadoop daemons should not start up if the ownership/permissions on the directories used at runtime are misconfigured

2010-03-17 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12846596#action_12846596
 ] 

Konstantin Boudnik commented on HADOOP-6566:


Great! Thanks for addressing this. And one last thing, I suppose: can these two 
different patches be joined together to make the scope of the modification clearer?


> Hadoop daemons should not start up if the ownership/permissions on the 
> directories used at runtime are misconfigured
> 
>
> Key: HADOOP-6566
> URL: https://issues.apache.org/jira/browse/HADOOP-6566
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.22.0
>Reporter: Devaraj Das
>Assignee: Arun C Murthy
> Fix For: 0.22.0
>
> Attachments: hadoop-6566-trunk-v1.patch, hadoop-6566-trunk-v2.patch, 
> hadoop-6566-trunk-v3.patch, hadoop-6566-trunk-v4.patch, 
> hadoop-6566-y20s-d1.patch, HADOOP-6566_yhadoop20.patch, 
> HADOOP-6566_yhadoop20.patch, HADOOP-6566_yhadoop20.patch, 
> HADOOP-6566_yhadoop20.patch
>
>
> The Hadoop daemons (like datanode, namenode) should refuse to start up if the 
> ownership/permissions on directories they use at runtime are misconfigured or 
> they are not as expected. For example, the local directory where the 
> filesystem image is stored should be owned by the user running the namenode 
> process and should be only readable by that user. We can provide this feature 
> in common and HDFS and MapReduce can use the same.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6566) Hadoop daemons should not start up if the ownership/permissions on the directories used at runtime are misconfigured

2010-03-17 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12846548#action_12846548
 ] 

Konstantin Boudnik commented on HADOOP-6566:


The patch seems to be good, and the little Mockito extension is neat, although it 
adds a different way of working with mocks, which might be somewhat confusing.

However, I can still see tests failing if a user's umask settings differ from 
what is expected. I again suggest making changes in MiniDFSCluster to make sure 
that it creates service directories with the correct permissions. Otherwise, this 
change introduces an implicit environment assumption, i.e. a bad idea!
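As an illustration of that suggestion, a minimal sketch (not the actual 
MiniDFSCluster change) of forcing the expected permissions on a data directory 
before the daemons start, so the developer's umask doesn't matter; the 755 mode 
and helper name are assumptions:

{noformat}
import java.io.File;

import org.apache.hadoop.fs.FileUtil;

public class TestDirSetup {
  // Create the directory if needed and force rwxr-xr-x regardless of umask.
  static void prepareDataDir(File dataDir) throws Exception {
    if (!dataDir.exists() && !dataDir.mkdirs()) {
      throw new IllegalStateException("Cannot create " + dataDir);
    }
    FileUtil.chmod(dataDir.getAbsolutePath(), "755");
  }
}
{noformat}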

> Hadoop daemons should not start up if the ownership/permissions on the 
> directories used at runtime are misconfigured
> 
>
> Key: HADOOP-6566
> URL: https://issues.apache.org/jira/browse/HADOOP-6566
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.22.0
>Reporter: Devaraj Das
>Assignee: Arun C Murthy
> Fix For: 0.22.0
>
> Attachments: hadoop-6566-trunk-v1.patch, hadoop-6566-trunk-v2.patch, 
> hadoop-6566-trunk-v3.patch, HADOOP-6566_yhadoop20.patch, 
> HADOOP-6566_yhadoop20.patch, HADOOP-6566_yhadoop20.patch, 
> HADOOP-6566_yhadoop20.patch
>
>
> The Hadoop daemons (like datanode, namenode) should refuse to start up if the 
> ownership/permissions on directories they use at runtime are misconfigured or 
> they are not as expected. For example, the local directory where the 
> filesystem image is stored should be owned by the user running the namenode 
> process and should be only readable by that user. We can provide this feature 
> in common and HDFS and MapReduce can use the same.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-5404) GenericOptionsParser should parse generic options even if they appear after Tool-specific options

2010-03-16 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12846013#action_12846013
 ] 

Konstantin Boudnik commented on HADOOP-5404:


It is one bad bug and has to be fixed.

> GenericOptionsParser should parse generic options even if they appear after 
> Tool-specific options
> -
>
> Key: HADOOP-5404
> URL: https://issues.apache.org/jira/browse/HADOOP-5404
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 0.19.1
> Environment: All
>Reporter: Milind Bhandarkar
>
> Currently, when GenericOptionsParser encounters an unrecognized option, it 
> stops processing command-line arguments, and returns the rest to the specific 
> Tool. This forces users to remember the order of arguments, and leads to 
> errors such as following:
> org.apache.commons.cli.UnrecognizedOptionException: Unrecognized option:
> -Dmapred.reduce.tasks=4
> at org.apache.commons.cli.Parser.processOption(Parser.java:368)
> at org.apache.commons.cli.Parser.parse(Parser.java:185)
> at org.apache.commons.cli.Parser.parse(Parser.java:70)
> at
> MyTool.run(MyTool.java.java:290)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> at
> MyTool.main(MyTool.java:19)
> In Hadoop-streaming as well, -D parameters should appear before 
> streaming-specific arguments, such as -mapper, -reducer etc.
> If GenericOptionsParser were to scan the entire command-line, ignoring 
> unrecognized (tool-specific) options, and returning all unrecognized options 
> back to the tool, this problem would be solved.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6566) Hadoop daemons should not start up if the ownership/permissions on the directories used at runtime are misconfigured

2010-03-11 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12844158#action_12844158
 ] 

Konstantin Boudnik commented on HADOOP-6566:


I still don't see my comment above addressed in any way. This new check 
implicitly relies on certain environment settings and will fail if they 
aren't set properly.

This requirement either:
- has to be documented, or
- a special environment setup routine needs to be implemented in MiniDFSCluster

> Hadoop daemons should not start up if the ownership/permissions on the 
> directories used at runtime are misconfigured
> 
>
> Key: HADOOP-6566
> URL: https://issues.apache.org/jira/browse/HADOOP-6566
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 0.22.0
>Reporter: Devaraj Das
>Assignee: Arun C Murthy
> Fix For: 0.22.0
>
> Attachments: hadoop-6566-trunk-v1.patch, hadoop-6566-trunk-v2.patch, 
> HADOOP-6566_yhadoop20.patch, HADOOP-6566_yhadoop20.patch, 
> HADOOP-6566_yhadoop20.patch, HADOOP-6566_yhadoop20.patch
>
>
> The Hadoop daemons (like datanode, namenode) should refuse to start up if the 
> ownership/permissions on directories they use at runtime are misconfigured or 
> they are not as expected. For example, the local directory where the 
> filesystem image is stored should be owned by the user running the namenode 
> process and should be only readable by that user. We can provide this feature 
> in common and HDFS and MapReduce can use the same.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-03-08 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Attachment: 6332.patch

A tiny inconsistency in the build.xml has been discovered. Fixed.

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Reporter: Arun C Murthy
>Assignee: Sharad Agarwal
> Fix For: 0.21.0
>
> Attachments: 6332.patch, 6332.patch, 6332.patch, 6332_v1.patch, 
> 6332_v2.patch, HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, HADOOP-6332.patch, 
> HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to bring up to deploy, start & 
> stop clusters, bring down tasktrackers, datanodes, entire racks of both etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and it's children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6332) Large-scale Automated Test Framework

2010-03-05 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6332:
---

Attachment: 6332.patch

This is a patch for y20-security, which might have conflicts with the current 
0.20 branch.
We'll be providing a forward port patch for the trunk soon.

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Reporter: Arun C Murthy
>Assignee: Sharad Agarwal
> Fix For: 0.21.0
>
> Attachments: 6332.patch, 6332.patch, 6332_v1.patch, 6332_v2.patch, 
> HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, HADOOP-6332.patch, 
> HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to bring up to deploy, start & 
> stop clusters, bring down tasktrackers, datanodes, entire racks of both etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and it's children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6566) Hadoop daemons should not start up if the ownership/permissions on the directories used at runtime are misconfigured

2010-03-05 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6566:
---

Hadoop Flags: [Incompatible change, Reviewed]  (was: [Reviewed])

Marking this as 'incompatible' since it might be affected by user environment 
settings and fail the tests.

> Hadoop daemons should not start up if the ownership/permissions on the 
> directories used at runtime are misconfigured
> 
>
> Key: HADOOP-6566
> URL: https://issues.apache.org/jira/browse/HADOOP-6566
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Devaraj Das
>Assignee: Arun C Murthy
> Fix For: 0.22.0
>
> Attachments: HADOOP-6566_yhadoop20.patch, 
> HADOOP-6566_yhadoop20.patch, HADOOP-6566_yhadoop20.patch, 
> HADOOP-6566_yhadoop20.patch
>
>
> The Hadoop daemons (like datanode, namenode) should refuse to start up if the 
> ownership/permissions on directories they use at runtime are misconfigured or 
> they are not as expected. For example, the local directory where the 
> filesystem image is stored should be owned by the user running the namenode 
> process and should be only readable by that user. We can provide this feature 
> in common and HDFS and MapReduce can use the same.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6566) Hadoop daemons should not start up if the ownership/permissions on the directories used at runtime are misconfigured

2010-03-05 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12842003#action_12842003
 ] 

Konstantin Boudnik commented on HADOOP-6566:


A nasty side effect of this modification is that a user's environment 
settings now affect the test results, which clearly shouldn't happen. E.g., I 
have umask=002 and all of a sudden my test runs start to fail because 
dfs.data.dir has incorrect permissions.

Ideally, the test harness should guarantee that the environment is properly set 
before making any assumptions or assertions.

> Hadoop daemons should not start up if the ownership/permissions on the 
> directories used at runtime are misconfigured
> 
>
> Key: HADOOP-6566
> URL: https://issues.apache.org/jira/browse/HADOOP-6566
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Devaraj Das
>Assignee: Arun C Murthy
> Fix For: 0.22.0
>
> Attachments: HADOOP-6566_yhadoop20.patch, 
> HADOOP-6566_yhadoop20.patch, HADOOP-6566_yhadoop20.patch, 
> HADOOP-6566_yhadoop20.patch
>
>
> The Hadoop daemons (like datanode, namenode) should refuse to start up if the 
> ownership/permissions on directories they use at runtime are misconfigured or 
> they are not as expected. For example, the local directory where the 
> filesystem image is stored should be owned by the user running the namenode 
> process and should be only readable by that user. We can provide this feature 
> in common and HDFS and MapReduce can use the same.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6609) Deadlock in DFSClient#getBlockLocations even with the security disabled

2010-03-03 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12841031#action_12841031
 ] 

Konstantin Boudnik commented on HADOOP-6609:


I have run the job that used to time out because of the deadlock a few times, 
and it is running OK now. All the data is being written properly and correctly.

Thanks for the fix, Owen.

+1 on the patch.

> Deadlock in DFSClient#getBlockLocations even with the security disabled
> ---
>
> Key: HADOOP-6609
> URL: https://issues.apache.org/jira/browse/HADOOP-6609
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hairong Kuang
>Assignee: Owen O'Malley
> Attachments: c-6609.patch, c-6609.patch
>
>
> Here is the stack trace:
> "IPC Client (47) connection to XX" daemon
> prio=10 tid=0x2aaae0369c00 nid=0x655b waiting for monitor entry 
> [0x4181f000..0x4181fb80]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at org.apache.hadoop.io.UTF8.readChars(UTF8.java:210)
> - waiting to lock <0x2aaab3eaee50> (a 
> org.apache.hadoop.io.DataOutputBuffer)
> at org.apache.hadoop.io.UTF8.readString(UTF8.java:203)
> at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:179)
> at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:66)
> at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:638)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:573)
> "IPC Client (47) connection to /0.0.0.0:50030 from job_201002262308_0007"
> daemon prio=10 tid=0x2aaae0272800 nid=0x6556 waiting for monitor entry 
> [0x4131a000..0x4131ad00]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at org.apache.hadoop.io.UTF8.readChars(UTF8.java:210) 
> - waiting to lock <0x2aaab3eaee50> (a 
> org.apache.hadoop.io.DataOutputBuffer)
> at org.apache.hadoop.io.UTF8.readString(UTF8.java:203)
> at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:179)
> at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:66)
> at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:638)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:573)
> "main" prio=10 tid=0x46c17800 nid=0x6544 in Object.wait() 
> [0x40207000..0x40209ec0]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method) 
> - waiting on <0x2aaacee6bc38> (a org.apache.hadoop.ipc.Client$Call)
> at java.lang.Object.wait(Object.java:485)
> at org.apache.hadoop.ipc.Client.call(Client.java:854) - locked 
> <0x2aaacee6bc38> (a org.apache.hadoop.ipc.Client$Call)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:223)
> at $Proxy2.getBlockLocations(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> at $Proxy2.getBlockLocations(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:333)
> at org.apache.hadoop.hdfs.DFSClient.access$2(DFSClient.java:330)
> at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.getBlockAt(DFSClient.java:1606)
>  
> - locked <0x2aaacecb8258> (a 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream)
> at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1704)
> - locked <0x2aaacecb8258> (a 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream)
> at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1856)
> - locked <0x2aaacecb8258> (a 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream)
> at java.io.DataInputStream.readFully(DataInputStream.java:178)
> at 
> org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
> at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
> at org.apache.hadoop.io.UTF8.readChars(UTF8.java:211)
> - locked <0x2aaab3eaee50> (a org.apache.hadoop.io.DataOutputBuffer)
> at org.apache.hadoop.io.UTF8.readString(UTF8.java:203)
> at org.apache.hadoop.mapred.FileSplit.readFields(FileSplit.java:90)
> at 
> org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
> at 
> org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:1)
> at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:341)
> at o

[jira] Commented: (HADOOP-6609) Deadlock in DFSClient#getBlockLocations even with the security disabled

2010-03-03 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12841032#action_12841032
 ] 

Konstantin Boudnik commented on HADOOP-6609:


And BTW: I haven't run the full test suite to verify the patch - just one specific 
cluster test.

> Deadlock in DFSClient#getBlockLocations even with the security disabled
> ---
>
> Key: HADOOP-6609
> URL: https://issues.apache.org/jira/browse/HADOOP-6609
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hairong Kuang
>Assignee: Owen O'Malley
> Attachments: c-6609.patch, c-6609.patch
>
>
> Here is the stack trace:
> "IPC Client (47) connection to XX" daemon
> prio=10 tid=0x2aaae0369c00 nid=0x655b waiting for monitor entry 
> [0x4181f000..0x4181fb80]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at org.apache.hadoop.io.UTF8.readChars(UTF8.java:210)
> - waiting to lock <0x2aaab3eaee50> (a 
> org.apache.hadoop.io.DataOutputBuffer)
> at org.apache.hadoop.io.UTF8.readString(UTF8.java:203)
> at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:179)
> at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:66)
> at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:638)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:573)
> "IPC Client (47) connection to /0.0.0.0:50030 from job_201002262308_0007"
> daemon prio=10 tid=0x2aaae0272800 nid=0x6556 waiting for monitor entry 
> [0x4131a000..0x4131ad00]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at org.apache.hadoop.io.UTF8.readChars(UTF8.java:210) 
> - waiting to lock <0x2aaab3eaee50> (a 
> org.apache.hadoop.io.DataOutputBuffer)
> at org.apache.hadoop.io.UTF8.readString(UTF8.java:203)
> at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:179)
> at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:66)
> at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:638)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:573)
> "main" prio=10 tid=0x46c17800 nid=0x6544 in Object.wait() 
> [0x40207000..0x40209ec0]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method) 
> - waiting on <0x2aaacee6bc38> (a org.apache.hadoop.ipc.Client$Call)
> at java.lang.Object.wait(Object.java:485)
> at org.apache.hadoop.ipc.Client.call(Client.java:854) - locked 
> <0x2aaacee6bc38> (a org.apache.hadoop.ipc.Client$Call)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:223)
> at $Proxy2.getBlockLocations(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> at $Proxy2.getBlockLocations(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:333)
> at org.apache.hadoop.hdfs.DFSClient.access$2(DFSClient.java:330)
> at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.getBlockAt(DFSClient.java:1606)
>  
> - locked <0x2aaacecb8258> (a 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream)
> at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1704)
> - locked <0x2aaacecb8258> (a 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream)
> at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1856)
> - locked <0x2aaacecb8258> (a 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream)
> at java.io.DataInputStream.readFully(DataInputStream.java:178)
> at 
> org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
> at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
> at org.apache.hadoop.io.UTF8.readChars(UTF8.java:211)
> - locked <0x2aaab3eaee50> (a org.apache.hadoop.io.DataOutputBuffer)
> at org.apache.hadoop.io.UTF8.readString(UTF8.java:203)
> at org.apache.hadoop.mapred.FileSplit.readFields(FileSplit.java:90)
> at 
> org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
> at 
> org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:1)
> at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:341)
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:357)
> at org.apache.hadoop.mapred.MapTask.run(Map

[jira] Updated: (HADOOP-6609) Deadlock in DFSClient#getBlockLocations even with the security disabled

2010-03-03 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6609:
---

Summary: Deadlock in DFSClient#getBlockLocations even with the security 
disabled  (was: Deadlock in DFSClient#getBlockLocations with the security 
enabled)

This problem happens on a vanilla cluster with security off; however, the 
executed code belongs to the secured Hadoop. Thus, I'm changing the title of 
the JIRA.

> Deadlock in DFSClient#getBlockLocations even with the security disabled
> ---
>
> Key: HADOOP-6609
> URL: https://issues.apache.org/jira/browse/HADOOP-6609
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hairong Kuang
>
> Here is the stack trace:
> "IPC Client (47) connection to XX" daemon
> prio=10 tid=0x2aaae0369c00 nid=0x655b waiting for monitor entry 
> [0x4181f000..0x4181fb80]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at org.apache.hadoop.io.UTF8.readChars(UTF8.java:210)
> - waiting to lock <0x2aaab3eaee50> (a 
> org.apache.hadoop.io.DataOutputBuffer)
> at org.apache.hadoop.io.UTF8.readString(UTF8.java:203)
> at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:179)
> at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:66)
> at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:638)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:573)
> "IPC Client (47) connection to /0.0.0.0:50030 from job_201002262308_0007"
> daemon prio=10 tid=0x2aaae0272800 nid=0x6556 waiting for monitor entry 
> [0x4131a000..0x4131ad00]
>java.lang.Thread.State: BLOCKED (on object monitor)
> at org.apache.hadoop.io.UTF8.readChars(UTF8.java:210) 
> - waiting to lock <0x2aaab3eaee50> (a 
> org.apache.hadoop.io.DataOutputBuffer)
> at org.apache.hadoop.io.UTF8.readString(UTF8.java:203)
> at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:179)
> at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:66)
> at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:638)
> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:573)
> "main" prio=10 tid=0x46c17800 nid=0x6544 in Object.wait() 
> [0x40207000..0x40209ec0]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method) 
> - waiting on <0x2aaacee6bc38> (a org.apache.hadoop.ipc.Client$Call)
> at java.lang.Object.wait(Object.java:485)
> at org.apache.hadoop.ipc.Client.call(Client.java:854) - locked 
> <0x2aaacee6bc38> (a org.apache.hadoop.ipc.Client$Call)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:223)
> at $Proxy2.getBlockLocations(Unknown Source)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> at $Proxy2.getBlockLocations(Unknown Source)
> at org.apache.hadoop.hdfs.DFSClient.callGetBlockLocations(DFSClient.java:333)
> at org.apache.hadoop.hdfs.DFSClient.access$2(DFSClient.java:330)
> at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.getBlockAt(DFSClient.java:1606)
>  
> - locked <0x2aaacecb8258> (a 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream)
> at 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1704)
> - locked <0x2aaacecb8258> (a 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream)
> at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1856)
> - locked <0x2aaacecb8258> (a 
> org.apache.hadoop.hdfs.DFSClient$DFSInputStream)
> at java.io.DataInputStream.readFully(DataInputStream.java:178)
> at 
> org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:63)
> at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:101)
> at org.apache.hadoop.io.UTF8.readChars(UTF8.java:211)
> - locked <0x2aaab3eaee50> (a org.apache.hadoop.io.DataOutputBuffer)
> at org.apache.hadoop.io.UTF8.readString(UTF8.java:203)
> at org.apache.hadoop.mapred.FileSplit.readFields(FileSplit.java:90)
> at 
> org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
> at 
> org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:1)
> at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:341)
> at org.apache.hadoop.mapr

[jira] Assigned: (HADOOP-6332) Large-scale Automated Test Framework

2010-02-24 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik reassigned HADOOP-6332:
--

Assignee: Sharad Agarwal  (was: Arun C Murthy)

> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Reporter: Arun C Murthy
>Assignee: Sharad Agarwal
> Fix For: 0.21.0
>
> Attachments: 6332.patch, 6332_v1.patch, 6332_v2.patch, 
> HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, HADOOP-6332.patch, 
> HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to bring up to deploy, start & 
> stop clusters, bring down tasktrackers, datanodes, entire racks of both etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6332) Large-scale Automated Test Framework

2010-02-24 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12837968#action_12837968
 ] 

Konstantin Boudnik commented on HADOOP-6332:


@Stephen: the main reason to use code injection is to completely hide the testing 
handles from any chance of misuse by a stranger. Many of the contracts 
(interfaces, APIs) we are interested in over the course of testing either unveil 
internal states of key Hadoop components or allow 'undesirable' actions such as 
killing a job, a tasktracker, or a datanode, so it'd be unwise to keep them in 
the A-grade production code. Therefore, code injection seems to be the right 
technique for this. 

The next version of the patch is coming any minute now. It will make clear that 
all interfaces exposed to tests are defined statically. Their implementation is 
injected, though, which shouldn't concern anyone but the framework developers.

Now, the particular implementation of injection doesn't really matter. We 
could've gone with ASM or BCEL for the purpose. It happens that we have AspectJ 
readily available, providing high-level language capabilities, Eclipse 
integration, etc. That explains the choice of framework.

As for the extra burden on future contributors: instrumentation is used for 
internal framework mechanics and shouldn't be exposed to the test developers. 
Thus, if one simply wants to develop a cluster test, she/he can do it from a 
vanilla Eclipse without AJDT installed. Or from IDEA (which I personally prefer 
and use all the time, except when I need to develop/fix some aspects). Or from 
vim (not that I suggest doing it :-)
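
To make it concrete, here is a minimal sketch - the class and field names are 
made up, this is not code from the patch - of how a privileged aspect can weave 
a test-only accessor into a daemon. It would live alongside the other aspects 
(e.g. under src/test/aop) and be compiled with ajc only for instrumented builds:
{noformat}
// Hypothetical illustration only; names are invented, not taken from the patch.
import org.apache.hadoop.mapred.JobTracker;

privileged aspect JobTrackerTestHandle {
  // Inter-type declaration: adds a query method to JobTracker that exists
  // only in an instrumented (debug) build where this aspect has been woven
  // in with ajc; production jars never see it.
  // 'jobs' is an assumed name for the tracker's internal job map.
  public int JobTracker.getRunningJobCountForTest() {
    return this.jobs.size();
  }
}
{noformat}
A system test then calls the woven method like any other public API; whether the 
body got there via AspectJ, ASM, or BCEL is invisible to the test author.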


> Large-scale Automated Test Framework
> 
>
> Key: HADOOP-6332
> URL: https://issues.apache.org/jira/browse/HADOOP-6332
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: test
>Reporter: Arun C Murthy
>Assignee: Arun C Murthy
> Fix For: 0.21.0
>
> Attachments: 6332.patch, 6332_v1.patch, 6332_v2.patch, 
> HADOOP-6332-MR.patch, HADOOP-6332-MR.patch, HADOOP-6332.patch, 
> HADOOP-6332.patch
>
>
> Hadoop would benefit from having a large-scale, automated, test-framework. 
> This jira is meant to be a master-jira to track relevant work.
> 
> The proposal is a junit-based, large-scale test framework which would run 
> against _real_ clusters.
> There are several pieces we need to achieve this goal:
> # A set of utilities we can use in junit-based tests to work with real, 
> large-scale hadoop clusters. E.g. utilities to bring up to deploy, start & 
> stop clusters, bring down tasktrackers, datanodes, entire racks of both etc.
> # Enhanced control-ability and inspect-ability of the various components in 
> the system e.g. daemons such as namenode, jobtracker should expose their 
> data-structures for query/manipulation etc. Tests would be much more relevant 
> if we could for e.g. query for specific states of the jobtracker, scheduler 
> etc. Clearly these apis should _not_ be part of the production clusters - 
> hence the proposal is to use aspectj to weave these new apis to 
> debug-deployments.
> 
> Related note: we should break up our tests into at least 3 categories:
> # src/test/unit -> Real unit tests using mock objects (e.g. HDFS-669 & 
> MAPREDUCE-1050).
> # src/test/integration -> Current junit tests with Mini* clusters etc.
> # src/test/system -> HADOOP-6332 and its children

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HADOOP-6587) CLONE -JUnit tests should never depend on anything in conf

2010-02-22 Thread Konstantin Boudnik (JIRA)
CLONE -JUnit tests should never depend on anything in conf
--

 Key: HADOOP-6587
 URL: https://issues.apache.org/jira/browse/HADOOP-6587
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 0.21.0, 0.22.0
Reporter: Konstantin Boudnik
Assignee: Anatoli Fomenko
Priority: Blocker
 Fix For: 0.21.0, 0.22.0


The recent change to mapred-queues.xml, which causes many mapreduce tests to 
break unless you delete conf/mapred-queues.xml from your build tree, is bad. 
We need to make sure that nothing in conf is used by the unit tests. One 
potential solution is to copy the templates into build/test/conf and use those 
instead.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6575) Tests do not run on 0.20 branch

2010-02-18 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12835332#action_12835332
 ] 

Konstantin Boudnik commented on HADOOP-6575:


+1 on the patch.

> Tests do not run on 0.20 branch
> ---
>
> Key: HADOOP-6575
> URL: https://issues.apache.org/jira/browse/HADOOP-6575
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Douglas
> Fix For: 0.20.2
>
> Attachments: C6575-0.patch
>
>
> HADOOP-6506 introduced a call to the fault injection tests:
> {noformat}
> +
> {noformat}
> which do not exist in 0.20

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6575) Tests do not run on 0.20 branch

2010-02-18 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12835295#action_12835295
 ] 

Konstantin Boudnik commented on HADOOP-6575:


HADOOP-6506 was intended to be a part of a bigger FI overhaul into 0.20... 
which didn't happen on time :-( *cries*

> Tests do not run on 0.20 branch
> ---
>
> Key: HADOOP-6575
> URL: https://issues.apache.org/jira/browse/HADOOP-6575
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chris Douglas
> Fix For: 0.20.2
>
> Attachments: C6575-0.patch
>
>
> HADOOP-6506 introduced a call to the fault injection tests:
> {noformat}
> +
> {noformat}
> which do not exist in 0.20

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6542) Add a -Dno-docs option to build.xml

2010-02-04 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12829875#action_12829875
 ] 

Konstantin Boudnik commented on HADOOP-6542:


Looks good to me except that indentation is inconsistent in the last change:
{noformat}
> Add a -Dno-docs option to build.xml
> ---
>
> Key: HADOOP-6542
> URL: https://issues.apache.org/jira/browse/HADOOP-6542
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsz Wo (Nicholas), SZE
> Fix For: 0.20.2, 0.21.0, 0.22.0
>
> Attachments: c6542_20100204_0.20.patch
>
>
> "ant tar" took a long time to generate all the forrest docs and javadocs.  
> These docs are not always necessary.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HADOOP-6530) AspectJ jar files need to be added to Eclipse .classpath file

2010-02-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik resolved HADOOP-6530.


Resolution: Not A Problem

Has been incorporated into the latest ydist patch of HADOOP-6204

> AspectJ jar files need to be added to Eclipse .classpath file
> -
>
> Key: HADOOP-6530
> URL: https://issues.apache.org/jira/browse/HADOOP-6530
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 0.20.2
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HADOOP-6530.patch
>
>
> Newly added AspectJ jar files need to be added to Eclipse .classpath file 
> 'cause the difference makes test-patch fail with the following message:
> {noformat}
>  -1 Eclipse classpath. The patch causes the Eclipse classpath to differ from 
> the contents of the lib directories.
> {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HADOOP-6533) Fail the fault-inject build if any advices are mis-bound

2010-02-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik resolved HADOOP-6533.


Resolution: Not A Problem

Has been incorporated into the latest ydist patch of HADOOP-6204

> Fail the fault-inject build if any advices are mis-bound
> 
>
> Key: HADOOP-6533
> URL: https://issues.apache.org/jira/browse/HADOOP-6533
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HADOOP-6533.patch
>
>


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HADOOP-6529) Exclude fault injection tests from normal tests execution

2010-02-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik resolved HADOOP-6529.


Resolution: Not A Problem

Has been incorporated into the latest ydist patch of HADOOP-6204

> Exclude fault injection tests from normal tests execution
> -
>
> Key: HADOOP-6529
> URL: https://issues.apache.org/jira/browse/HADOOP-6529
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 0.20.2
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HADOOP-6529.patch
>
>
> The way the junit task is configured is to look for all tests under the src/test 
> directory. It will include and try to run the tests in the src/test/aop folder 
> during execution of the normal test-core target. Such an attempt will clearly fail.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6204) Implementing aspects development and fault injection framework for Hadoop

2010-02-02 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12828862#action_12828862
 ] 

Konstantin Boudnik commented on HADOOP-6204:


Local run of test-patch for the latest version of 0.20 branch's patch
{noformat}
+1 overall.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 18 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 Eclipse classpath. The patch retains Eclipse classpath integrity.
{noformat}

> Implementing aspects development and fault injection framework for Hadoop
> -
>
> Key: HADOOP-6204
> URL: https://issues.apache.org/jira/browse/HADOOP-6204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0, 0.22.0
>
> Attachments: HADOOP-6204-ydist.patch, HADOOP-6204-ydist.patch, 
> HADOOP-6204-ydist.patch, HADOOP-6204-ydist.patch, hadoop-6204-ydist.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch.indirect, HADOOP-6204.patch.withmacros, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, 
> HADOOP-6204_0.20.patch
>
>
> Fault injection framework implementation in HDFS (HDFS-435) turns out to be a 
> very useful feature both for error handling testing and for various 
> simulations.
> There's a certain demand for this framework, thus it needs to be pulled up from 
> HDFS and brought into Common, so other sub-projects will be able to share it 
> if needed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6204) Implementing aspects development and fault injection framework for Hadoop

2010-02-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6204:
---

Attachment: HADOOP-6204-ydist.patch

> Implementing aspects development and fault injection framework for Hadoop
> -
>
> Key: HADOOP-6204
> URL: https://issues.apache.org/jira/browse/HADOOP-6204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0, 0.22.0
>
> Attachments: HADOOP-6204-ydist.patch, HADOOP-6204-ydist.patch, 
> HADOOP-6204-ydist.patch, HADOOP-6204-ydist.patch, hadoop-6204-ydist.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch.indirect, HADOOP-6204.patch.withmacros, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, 
> HADOOP-6204_0.20.patch
>
>
> Fault injection framework implementation in HDFS (HDFS-435) turns out to be a 
> very useful feature both for error handling testing and for various 
> simulations.
> There's a certain demand for this framework, thus it needs to be pulled up from 
> HDFS and brought into Common, so other sub-projects will be able to share it 
> if needed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6204) Implementing aspects development and fault injection framework for Hadoop

2010-02-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6204:
---

Attachment: HADOOP-6204-ydist.patch

Seems to be the final version for ydist, fixing the invocation of the 
non-existing target 'run-test-core'.

> Implementing aspects development and fault injection framework for Hadoop
> -
>
> Key: HADOOP-6204
> URL: https://issues.apache.org/jira/browse/HADOOP-6204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0, 0.22.0
>
> Attachments: HADOOP-6204-ydist.patch, HADOOP-6204-ydist.patch, 
> HADOOP-6204-ydist.patch, hadoop-6204-ydist.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch.indirect, HADOOP-6204.patch.withmacros, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, 
> HADOOP-6204_0.20.patch
>
>
> Fault injection framework implementation in HDFS (HDFS-435) turns out to be a 
> very useful feature both for error handling testing and for various 
> simulations.
> There's a certain demand for this framework, thus it needs to be pulled up from 
> HDFS and brought into Common, so other sub-projects will be able to share it 
> if needed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6204) Implementing aspects development and fault injection framework for Hadoop

2010-02-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6204:
---

Attachment: HADOOP-6204_0.20.patch

And yet another issue is found :-(

> Implementing aspects development and fault injection framework for Hadoop
> -
>
> Key: HADOOP-6204
> URL: https://issues.apache.org/jira/browse/HADOOP-6204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0, 0.22.0
>
> Attachments: HADOOP-6204-ydist.patch, HADOOP-6204-ydist.patch, 
> hadoop-6204-ydist.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch.indirect, HADOOP-6204.patch.withmacros, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, 
> HADOOP-6204_0.20.patch
>
>
> Fault injection framework implementation in HDFS (HDFS-435) turns out to be a 
> very useful feature both for error handling testing and for various 
> simulations.
> There's a certain demand for this framework, thus it needs to be pulled up from 
> HDFS and brought into Common, so other sub-projects will be able to share it 
> if needed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6204) Implementing aspects development and fault injection framework for Hadoop

2010-02-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6204:
---

Attachment: (was: HADOOP-6204-ydist.patch)

> Implementing aspects development and fault injection framework for Hadoop
> -
>
> Key: HADOOP-6204
> URL: https://issues.apache.org/jira/browse/HADOOP-6204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0, 0.22.0
>
> Attachments: HADOOP-6204-ydist.patch, HADOOP-6204-ydist.patch, 
> hadoop-6204-ydist.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch.indirect, HADOOP-6204.patch.withmacros, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch
>
>
> Fault injection framework implementation in HDFS (HDFS-435) turns out to be a 
> very useful feature both for error handling testing and for various 
> simulations.
> There's a certain demand for this framework, thus it needs to be pulled up from 
> HDFS and brought into Common, so other sub-projects will be able to share it 
> if needed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6204) Implementing aspects development and fault injection framework for Hadoop

2010-02-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6204:
---

Attachment: HADOOP-6204-ydist.patch

> Implementing aspects development and fault injection framework for Hadoop
> -
>
> Key: HADOOP-6204
> URL: https://issues.apache.org/jira/browse/HADOOP-6204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0, 0.22.0
>
> Attachments: HADOOP-6204-ydist.patch, HADOOP-6204-ydist.patch, 
> hadoop-6204-ydist.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch.indirect, HADOOP-6204.patch.withmacros, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch
>
>
> Fault injection framework implementation in HDFS (HDFS-435) turns out to be a 
> very useful feature both for error handling testing and for various 
> simulations.
> There's a certain demand for this framework, thus it needs to be pulled up from 
> HDFS and brought into Common, so other sub-projects will be able to share it 
> if needed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6204) Implementing aspects development and fault injection framework for Hadoop

2010-02-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6204:
---

Attachment: (was: HADOOP-6402-ydist.patch)

> Implementing aspects development and fault injection framework for Hadoop
> -
>
> Key: HADOOP-6204
> URL: https://issues.apache.org/jira/browse/HADOOP-6204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0, 0.22.0
>
> Attachments: HADOOP-6204-ydist.patch, HADOOP-6204-ydist.patch, 
> hadoop-6204-ydist.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch.indirect, HADOOP-6204.patch.withmacros, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch
>
>
> Fault injection framework implementation in HDFS (HDFS-435) turns out to be a 
> very useful feature both for error handling testing and for various 
> simulations.
> There's a certain demand for this framework, thus it needs to be pulled up from 
> HDFS and brought into Common, so other sub-projects will be able to share it 
> if needed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6204) Implementing aspects development and fault injection framework for Hadoop

2010-02-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6204:
---

Attachment: HADOOP-6402-ydist.patch

Instead of tracking small fixes in separate sub-tasks, I've prepared this 
patch, which correlates with the branch-0.20 patch and will be committed to ydist.

> Implementing aspects development and fault injection framework for Hadoop
> -
>
> Key: HADOOP-6204
> URL: https://issues.apache.org/jira/browse/HADOOP-6204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0, 0.22.0
>
> Attachments: HADOOP-6204-ydist.patch, HADOOP-6204-ydist.patch, 
> hadoop-6204-ydist.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch.indirect, HADOOP-6204.patch.withmacros, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, 
> HADOOP-6402-ydist.patch
>
>
> Fault injection framework implementation in HDFS (HDFS-435) turns out to be a 
> very useful feature both for error handling testing and for various 
> simulations.
> There's a certain demand for this framework, thus it needs to be pulled up from 
> HDFS and brought into Common, so other sub-projects will be able to share it 
> if needed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6204) Implementing aspects development and fault injection framework for Hadoop

2010-02-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6204:
---

Attachment: HADOOP-6204_0.20.patch

Yet another glitch was found in the initial backport patch. This version fixes 
the incorrect name of the FI-test jar file (for the 0.20 branch).

> Implementing aspects development and fault injection framework for Hadoop
> -
>
> Key: HADOOP-6204
> URL: https://issues.apache.org/jira/browse/HADOOP-6204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0, 0.22.0
>
> Attachments: HADOOP-6204-ydist.patch, HADOOP-6204-ydist.patch, 
> hadoop-6204-ydist.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch.indirect, HADOOP-6204.patch.withmacros, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch
>
>
> Fault injection framework implementation in HDFS (HDFS-435) turns out to be a 
> very useful feature both for error handling testing and for various 
> simulations.
> There's a certain demand for this framework, thus it needs to be pulled up from 
> HDFS and brought into Common, so other sub-projects will be able to share it 
> if needed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6533) Fail the fault-inject build if any advices are mis-bound

2010-02-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6533:
---

Attachment: HADOOP-6533.patch

This is the diff between the two latest 0.20 branch patches 
([this|https://issues.apache.org/jira/secure/attachment/12434485/HADOOP-6204_0.20.patch]
 and 
[that|https://issues.apache.org/jira/secure/attachment/12434564/HADOOP-6204_0.20.patch]).
 This will be included in the y!dist release as a separate JIRA and will make it 
into branch-0.20 as part of the bigger backport patch.

> Fail the fault-inject build if any advices are mis-bound
> 
>
> Key: HADOOP-6533
> URL: https://issues.apache.org/jira/browse/HADOOP-6533
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HADOOP-6533.patch
>
>


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HADOOP-6533) Fail the fault-inject build if any advices are mis-bound

2010-02-02 Thread Konstantin Boudnik (JIRA)
Fail the fault-inject build if any advices are mis-bound


 Key: HADOOP-6533
 URL: https://issues.apache.org/jira/browse/HADOOP-6533
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6204) Implementing aspects development and fault injection framework for Hadoop

2010-02-02 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6204:
---

Attachment: HADOOP-6204_0.20.patch

This latest version of the patch includes a feature which was missing from 
the original patch and was later addressed by a separate JIRA (HDFS-584).
Simply put, the build has to fail if some aspects are mis-bound.

> Implementing aspects development and fault injection framework for Hadoop
> -
>
> Key: HADOOP-6204
> URL: https://issues.apache.org/jira/browse/HADOOP-6204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0, 0.22.0
>
> Attachments: HADOOP-6204-ydist.patch, HADOOP-6204-ydist.patch, 
> hadoop-6204-ydist.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch.indirect, HADOOP-6204.patch.withmacros, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch
>
>
> Fault injection framework implementation in HDFS (HDFS-435) turns out to be a 
> very useful feature both for error handling testing and for various 
> simulations.
> There's a certain demand for this framework, thus it needs to be pulled up from 
> HDFS and brought into Common, so other sub-projects will be able to share it 
> if needed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6529) Exclude fault injection tests from normal tests execution

2010-02-01 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12828411#action_12828411
 ] 

Konstantin Boudnik commented on HADOOP-6529:


Same as in the case of HADOOP-6530, this patch is the diff between ydist 0.20 
and the HADOOP-6204 patch for the 0.20 branch. No need to commit this one.

> Exclude fault injection tests from normal tests execution
> -
>
> Key: HADOOP-6529
> URL: https://issues.apache.org/jira/browse/HADOOP-6529
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 0.20.2
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HADOOP-6529.patch
>
>
> The way the junit task is configured is to look for all tests under the src/test 
> directory. It will include and try to run the tests in the src/test/aop folder 
> during execution of the normal test-core target. Such an attempt will clearly fail.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6530) AspectJ jar files need to be added to Eclipse .classpath file

2010-02-01 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6530:
---

Attachment: HADOOP-6530.patch

The diff between ydist 0.20 and the HADOOP-6204 patch for the 0.20 branch. 
Because of that, it doesn't need to be committed to the 0.20 branch.

> AspectJ jar files need to be added to Eclipse .classpath file
> -
>
> Key: HADOOP-6530
> URL: https://issues.apache.org/jira/browse/HADOOP-6530
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 0.20.2
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HADOOP-6530.patch
>
>
> Newly added AspectJ jar files need to be added to Eclipse .classpath file 
> 'cause the difference makes test-patch fail with the following message:
> {noformat}
>  -1 Eclipse classpath. The patch causes the Eclipse classpath to differ from 
> the contents of the lib directories.
> {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6204) Implementing aspects development and fault injection framework for Hadoop

2010-02-01 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6204:
---

Attachment: HADOOP-6204_0.20.patch

This patch includes the update for the missing jar files in the Eclipse 
.classpath file. The results of the local test-patch verification are below:
{noformat}
+1 overall.

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 18 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 findbugs.  The patch does not introduce any new Findbugs warnings.

+1 Eclipse classpath. The patch retains Eclipse classpath integrity.
{noformat}
 

> Implementing aspects development and fault injection framework for Hadoop
> -
>
> Key: HADOOP-6204
> URL: https://issues.apache.org/jira/browse/HADOOP-6204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0, 0.22.0
>
> Attachments: HADOOP-6204-ydist.patch, HADOOP-6204-ydist.patch, 
> hadoop-6204-ydist.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch.indirect, HADOOP-6204.patch.withmacros, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, 
> HADOOP-6204_0.20.patch
>
>
> Fault injection framework implementation in HDFS (HDFS-435) turns out to be a 
> very useful feature both for error handling testing and for various 
> simulations.
> There's a certain demand for this framework, thus it needs to be pulled up from 
> HDFS and brought into Common, so other sub-projects will be able to share it 
> if needed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HADOOP-6386) NameNode's HttpServer can't instantiate InetSocketAddress: IllegalArgumentException is thrown

2010-01-31 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik resolved HADOOP-6386.


Resolution: Fixed

The fix has been delivered via HADOOP-6428


> NameNode's HttpServer can't instantiate InetSocketAddress: 
> IllegalArgumentException is thrown
> -
>
> Key: HADOOP-6386
> URL: https://issues.apache.org/jira/browse/HADOOP-6386
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.20.2, 0.21.0, 0.22.0
> Environment: Apache Hudson build machine
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
>Priority: Blocker
> Fix For: 0.20.2, 0.21.0, 0.22.0
>
> Attachments: HADOOP-6386-0.20.patch, HADOOP-6386-0.20.patch, 
> HADOOP-6386-0.20.patch, HADOOP-6386.patch, HADOOP-6386.patch, 
> HADOOP-6386.patch, HADOOP-6386.patch, HADOOP-6386.patch, hdfs-771.patch, 
> hdfs-771.patch, testEditLog.html
>
>
> In an execution of the tests the following exception has been thrown:
> {noformat}
> Error Message
> port out of range:-1
> Stacktrace
> java.lang.IllegalArgumentException: port out of range:-1
>   at java.net.InetSocketAddress.(InetSocketAddress.java:118)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:371)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.activate(NameNode.java:313)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:304)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:404)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1211)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:287)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:131)
>   at 
> org.apache.hadoop.hdfs.server.namenode.TestEditLog.testEditLog(TestEditLog.java:92)
> {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6528) Jetty returns -1 resulting in Hadoop masters / slaves to fail during startup.

2010-01-31 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12827994#action_12827994
 ] 

Konstantin Boudnik commented on HADOOP-6528:


This problem has been attacked for the second time in HADOOP-6386 (and the 
follow-up HADOOP-6428).
Now the code which produces it has two workarounds to guarantee that a negative 
port can't happen _at all_. What happens, however - and now we have a clear 
indication of it - is a race condition inside the Jetty server. The port 
value is positive inside {{NameNode.startHttpServer}} but is negative when 
{{listener.getLocalPort}} is called from outside of the webserver context.
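
For the record, a purely defensive sketch (not the actual fix, and the method 
name is made up) that polls the connector rather than trusting a single 
getLocalPort() read would look roughly like this:
{noformat}
// Hypothetical guard only: retry until the Jetty connector reports a bound
// port, instead of reading getLocalPort() once and getting -1 from the race.
private int waitForLocalPort(org.mortbay.jetty.Server webServer)
    throws java.io.IOException {
  int port = -1;
  for (int i = 0; i < 50 && port <= 0; i++) {
    port = webServer.getConnectors()[0].getLocalPort();
    if (port <= 0) {
      try {
        Thread.sleep(100);
      } catch (InterruptedException ie) {
        Thread.currentThread().interrupt();
        break;
      }
    }
  }
  if (port <= 0) {
    throw new java.io.IOException("Jetty never reported a valid local port");
  }
  return port;
}
{noformat}
Whether such a retry loop is a real fix or just papers over the underlying race 
in Jetty is exactly what needs to be understood here.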


> Jetty returns -1 resulting in Hadoop masters / slaves to fail during startup.
> -
>
> Key: HADOOP-6528
> URL: https://issues.apache.org/jira/browse/HADOOP-6528
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hemanth Yamijala
> Attachments: jetty-server-failure.log
>
>
> A recent test failure on Hudson seems to indicate that Jetty's 
> Server.getConnectors()[0].getLocalPort() is returning -1 in the 
> HttpServer.getPort() method. When this happens, Hadoop masters / slaves that 
> use Jetty fail to start up correctly.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6529) Exclude fault injection tests from normal tests execution

2010-01-31 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6529:
---

Attachment: HADOOP-6529.patch

It seems to be easier to track this issue in a separate JIRA because 
HADOOP-6204 is getting too complicated with all additional patches for 
different branches, etc.

> Exclude fault injection tests from normal tests execution
> -
>
> Key: HADOOP-6529
> URL: https://issues.apache.org/jira/browse/HADOOP-6529
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 0.20.2
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HADOOP-6529.patch
>
>
> The way the junit task is configured is to look for all tests under the src/test 
> directory. It will include and try to run the tests in the src/test/aop folder 
> during execution of the normal test-core target. Such an attempt will clearly fail.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6204) Implementing aspects development and fault injection framework for Hadoop

2010-01-31 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6204:
---

Attachment: (was: wrongTestsPackaging.patch)

> Implementing aspects development and fault injection framework for Hadoop
> -
>
> Key: HADOOP-6204
> URL: https://issues.apache.org/jira/browse/HADOOP-6204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0, 0.22.0
>
> Attachments: HADOOP-6204-ydist.patch, HADOOP-6204-ydist.patch, 
> hadoop-6204-ydist.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch.indirect, HADOOP-6204.patch.withmacros, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch
>
>
> Fault injection framework implementation in HDFS (HDFS-435) turns out to be a 
> very useful feature both for error handling testing and for various 
> simulations.
> There's a certain demand for this framework, thus it needs to be pulled up from 
> HDFS and brought into Common, so other sub-projects will be able to share it 
> if needed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6204) Implementing aspects development and fault injection framework for Hadoop

2010-01-31 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6204:
---

Attachment: (was: wrongTestsPackaging.patch)

> Implementing aspects development and fault injection framework for Hadoop
> -
>
> Key: HADOOP-6204
> URL: https://issues.apache.org/jira/browse/HADOOP-6204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0, 0.22.0
>
> Attachments: HADOOP-6204-ydist.patch, HADOOP-6204-ydist.patch, 
> hadoop-6204-ydist.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch.indirect, HADOOP-6204.patch.withmacros, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch
>
>
> The fault injection framework implemented in HDFS (HDFS-435) turns out to be 
> a very useful feature, both for error-handling testing and for various 
> simulations.
> There's a certain demand for this framework, so it needs to be pulled up from 
> HDFS and brought into Common, where other sub-projects will be able to share 
> it if needed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HADOOP-6530) AspectJ jar files need to be added to Eclipse .classpath file

2010-01-31 Thread Konstantin Boudnik (JIRA)
AspectJ jar files need to be added to Eclipse .classpath file
-

 Key: HADOOP-6530
 URL: https://issues.apache.org/jira/browse/HADOOP-6530
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 0.20.2
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik


The newly added AspectJ jar files need to be added to the Eclipse .classpath 
file because the difference makes test-patch fail with the following message:
{noformat}
 -1 Eclipse classpath. The patch causes the Eclipse classpath to differ from 
the contents of the lib directories.
{noformat}
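
For illustration, a minimal sketch of the kind of entries such a change implies, 
assuming the AspectJ jars are kept under lib/ (the paths and versions below are 
assumptions, not taken from the actual patch):

{noformat}
<?xml version="1.0" encoding="UTF-8"?>
<classpath>
  <!-- existing source and library entries go here -->
  <!-- hypothetical entries for the newly added AspectJ libraries -->
  <classpathentry kind="lib" path="lib/aspectjrt-1.6.5.jar"/>
  <classpathentry kind="lib" path="lib/aspectjtools-1.6.5.jar"/>
</classpath>
{noformat}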

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (HADOOP-6529) Exclude fault injection tests from normal tests execution

2010-01-31 Thread Konstantin Boudnik (JIRA)
Exclude fault injection tests from normal tests execution
-

 Key: HADOOP-6529
 URL: https://issues.apache.org/jira/browse/HADOOP-6529
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 0.20.2
Reporter: Konstantin Boudnik
Assignee: Konstantin Boudnik


The junit task is configured to look for all tests under the src/test 
directory, so it will include and try to run the tests in the src/test/aop 
folder during execution of the normal test-core target. Such an attempt will 
clearly fail.
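
As a rough illustration of the lookup described above (the property names, 
patterns, and classpath id follow common Ant conventions and are assumptions, 
not the actual Hadoop build.xml), a directory-wide batchtest fileset of this 
kind will also sweep up everything under src/test/aop:

{noformat}
<!-- sketch only: names are illustrative -->
<junit printsummary="yes" fork="yes">
  <classpath refid="test.classpath"/>
  <batchtest todir="${test.build.dir}">
    <!-- a dir-wide include like this also picks up src/test/aop/** -->
    <fileset dir="${test.src.dir}" includes="**/Test*.java"/>
  </batchtest>
</junit>
{noformat}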

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6204) Implementing aspects development and fault injection framework for Hadoop

2010-01-30 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6204:
---

Attachment: wrongTestsPackaging.patch

> Implementing aspects development and fault injection framework for Hadoop
> -
>
> Key: HADOOP-6204
> URL: https://issues.apache.org/jira/browse/HADOOP-6204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0, 0.22.0
>
> Attachments: HADOOP-6204-ydist.patch, HADOOP-6204-ydist.patch, 
> hadoop-6204-ydist.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch.indirect, HADOOP-6204.patch.withmacros, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, 
> wrongTestsPackaging.patch, wrongTestsPackaging.patch
>
>
> The fault injection framework implemented in HDFS (HDFS-435) turns out to be 
> a very useful feature, both for error-handling testing and for various 
> simulations.
> There's a certain demand for this framework, so it needs to be pulled up from 
> HDFS and brought into Common, where other sub-projects will be able to share 
> it if needed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6204) Implementing aspects development and fault injection framework for Hadoop

2010-01-30 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6204:
---

Attachment: HADOOP-6204_0.20.patch

> Implementing aspects development and fault injection framework for Hadoop
> -
>
> Key: HADOOP-6204
> URL: https://issues.apache.org/jira/browse/HADOOP-6204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0, 0.22.0
>
> Attachments: HADOOP-6204-ydist.patch, HADOOP-6204-ydist.patch, 
> hadoop-6204-ydist.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch.indirect, HADOOP-6204.patch.withmacros, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, 
> wrongTestsPackaging.patch
>
>
> The fault injection framework implemented in HDFS (HDFS-435) turns out to be 
> a very useful feature, both for error-handling testing and for various 
> simulations.
> There's a certain demand for this framework, so it needs to be pulled up from 
> HDFS and brought into Common, where other sub-projects will be able to share 
> it if needed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6204) Implementing aspects development and fault injection framework for Hadoop

2010-01-30 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6204:
---

Attachment: HADOOP-6204_0.20.patch

Excluding the AOP directories from the test lookup process. The previous 
attempt to limit the lookup to {{${test.src.dir}/org}} failed.
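
A minimal sketch of what such an exclude-based fileset could look like 
(conventional property names assumed; this is not the committed patch):

{noformat}
<!-- sketch only: an exclude keeps the aop tree out of the regular lookup -->
<fileset dir="${test.src.dir}">
  <include name="**/Test*.java"/>
  <!-- keep fault-injection tests out of the regular test-core run -->
  <exclude name="aop/**"/>
</fileset>
{noformat}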

> Implementing aspects development and fault injection framework for Hadoop
> -
>
> Key: HADOOP-6204
> URL: https://issues.apache.org/jira/browse/HADOOP-6204
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.21.0, 0.22.0
>
> Attachments: HADOOP-6204-ydist.patch, HADOOP-6204-ydist.patch, 
> hadoop-6204-ydist.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch, HADOOP-6204.patch, HADOOP-6204.patch, 
> HADOOP-6204.patch.indirect, HADOOP-6204.patch.withmacros, 
> HADOOP-6204_0.20.patch, HADOOP-6204_0.20.patch, wrongTestsPackaging.patch
>
>
> The fault injection framework implemented in HDFS (HDFS-435) turns out to be 
> a very useful feature, both for error-handling testing and for various 
> simulations.
> There's a certain demand for this framework, so it needs to be pulled up from 
> HDFS and brought into Common, where other sub-projects will be able to share 
> it if needed.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (HADOOP-6524) Contrib tests are failing Clover'ed build

2010-01-30 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12806654#action_12806654
 ] 

Konstantin Boudnik commented on HADOOP-6524:


Oops, I hadn't seen your comment, Todd. Sorry. I think your patch is better 
than my head-down solution, so please feel free to refit your patch and commit 
it.

> Contrib tests are failing Clover'ed build
> -
>
> Key: HADOOP-6524
> URL: https://issues.apache.org/jira/browse/HADOOP-6524
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.20.0
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.20.2
>
> Attachments: HADOOP-6524.patch, runWithClover.sh
>
>
> When the {{test-contrib}} target is executed on a build instrumented with 
> Clover, all of its tests fail. Apparently {{clover.jar}} isn't included in 
> the contrib tests' classpath.
> Also, the {{HdfsProxy}} test fails because the {{commons-cli}} jar isn't 
> pulled in by Ivy.
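
A minimal sketch of the kind of classpath fix this implies, assuming a Clover 
installation referenced through a clover.home property (the path id and 
property names are illustrative, not the committed patch):

{noformat}
<!-- sketch only: make the Clover runtime visible to instrumented contrib tests -->
<path id="contrib.test.classpath">
  <path refid="contrib.classpath"/>
  <fileset dir="${clover.home}/lib">
    <include name="clover*.jar"/>
  </fileset>
</path>
{noformat}

The {{commons-cli}} part would presumably be addressed by declaring the jar as 
a dependency in the affected contrib's ivy.xml.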

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (HADOOP-6524) Contrib tests are failing Clover'ed build

2010-01-29 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik resolved HADOOP-6524.


   Resolution: Fixed
Fix Version/s: 0.20.2

I've just committed this.

> Contrib tests are failing Clover'ed build
> -
>
> Key: HADOOP-6524
> URL: https://issues.apache.org/jira/browse/HADOOP-6524
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.20.0
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Fix For: 0.20.2
>
> Attachments: HADOOP-6524.patch, runWithClover.sh
>
>
> When the {{test-contrib}} target is executed on a build instrumented with 
> Clover, all of its tests fail. Apparently {{clover.jar}} isn't included in 
> the contrib tests' classpath.
> Also, the {{HdfsProxy}} test fails because the {{commons-cli}} jar isn't 
> pulled in by Ivy.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (HADOOP-6524) Contrib tests are failing Clover'ed build

2010-01-29 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik updated HADOOP-6524:
---

Attachment: runWithClover.sh

There's no way to test the fix via the test-patch process. However, a curious 
mind can run this script if Clover is available.

> Contrib tests are failing Clover'ed build
> -
>
> Key: HADOOP-6524
> URL: https://issues.apache.org/jira/browse/HADOOP-6524
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 0.20.0
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HADOOP-6524.patch, runWithClover.sh
>
>
> When the {{test-contrib}} target is executed on a build instrumented with 
> Clover, all of its tests fail. Apparently {{clover.jar}} isn't included in 
> the contrib tests' classpath.
> Also, the {{HdfsProxy}} test fails because the {{commons-cli}} jar isn't 
> pulled in by Ivy.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


