[jira] [Commented] (HADOOP-11905) Abstraction for LocalDirAllocator

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11905?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530079#comment-14530079
 ] 

Hadoop QA commented on HADOOP-11905:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 38s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 27s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 36s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  4s | The applied patch generated  6 
new checkstyle issues (total was 93, now 99). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 34s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 39s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | common tests |  21m 59s | Tests passed in 
hadoop-common. |
| | |  58m 56s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/1273/0001-Abstraction-for-local-disk-path-allocation.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / a583a40 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/6502/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6502/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6502/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6502/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6502/console |


This message was automatically generated.

> Abstraction for LocalDirAllocator
> -
>
> Key: HADOOP-11905
> URL: https://issues.apache.org/jira/browse/HADOOP-11905
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.5.2
>Reporter: Kannan Rajah
>Assignee: Kannan Rajah
>  Labels: BB2015-05-TBR
> Fix For: 2.7.1
>
> Attachments: 0001-Abstraction-for-local-disk-path-allocation.patch
>
>
> There are 2 abstractions used to write data to local disk.
> LocalDirAllocator: Allocate paths from a set of configured local directories.
> LocalFileSystem/RawLocalFileSystem: Read/write using java.io.* and java.nio.*
> In the current implementation, the local disk is managed by the guest OS and 
> not by HDFS. The proposal is to provide a new abstraction that encapsulates the 
> above 2 abstractions and hides who manages the local disks. This enables us 
> to provide an alternate implementation where a DFS can manage the local disks, 
> and they can be accessed using HDFS APIs. This means the DFS maintains a 
> namespace for node-local directories and can create paths that are guaranteed 
> to be present on a specific node.
> Here is an example use case for Shuffle: when a mapper writes intermediate 
> data using this new implementation, it will continue to write to local disk. 
> When a reducer needs to access data from a remote node, it can use HDFS APIs 
> with a path that points to that node’s local namespace instead of having to 
> use an HTTP server to transfer the data across nodes.
> New Abstractions
> 1. LocalDiskPathAllocator
> Interface to get file/directory paths from the local disk namespace.
> This contains all the APIs that are currently supported by LocalDirAllocator. 
> So we just need to change LocalDirAllocator to implement this new interface.
> 2. LocalDiskUtil
> Helper class to get a handle to LocalDiskPathAllocator and the FileSystem
> that is used to manage those paths.
> By default, it will return LocalDirAllocator and LocalFileSystem.
> A supporting DFS can return DFSLocalDirAllocator and an instance of DFS.
> 3. DFSLocalDirAllocator
> This is a generic implementation. An allocator is created for a specific 
> node. It uses Configuration object to get user conf
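The proposed abstractions above could be sketched roughly as follows. This is a hypothetical Java sketch, not the attached patch: the method name `getLocalPathForWrite` is modeled on the existing LocalDirAllocator API, and `RoundRobinLocalAllocator` is an invented illustration of the default local-disk case.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical sketch of the proposed interface; the signature is an
// assumption modeled on LocalDirAllocator, not taken from the patch.
interface LocalDiskPathAllocator {
    Path getLocalPathForWrite(String relativePath, long estimatedSize);
}

// A minimal default implementation backed by configured local directories,
// cycling through them round-robin to spread load, as LocalDirAllocator does.
class RoundRobinLocalAllocator implements LocalDiskPathAllocator {
    private final String[] localDirs;
    private int next = 0;

    RoundRobinLocalAllocator(String... localDirs) {
        this.localDirs = localDirs;
    }

    @Override
    public Path getLocalPathForWrite(String relativePath, long estimatedSize) {
        Path p = Paths.get(localDirs[next], relativePath);
        next = (next + 1) % localDirs.length;
        return p;
    }
}
```

A DFS-backed implementation of the same interface would return paths inside the node-local namespace instead, which is what lets the shuffle example above read a remote node's intermediate data through HDFS APIs.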

[jira] [Created] (HADOOP-11927) libcrypto needs libdl when compile native code

2015-05-06 Thread Xianyin Xin (JIRA)
Xianyin Xin created HADOOP-11927:


 Summary: libcrypto needs libdl when compile native code
 Key: HADOOP-11927
 URL: https://issues.apache.org/jira/browse/HADOOP-11927
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
 Environment: SUSE Linux Enterprise Server 11 SP3  (x86_64)
Reporter: Xianyin Xin
Assignee: Xianyin Xin


When compiling Hadoop with native support, we encounter the compile error 
"undefined reference to `dlopen'" when linking libcrypto. We had better link 
libdl explicitly in the CMakeLists of hadoop-pipes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11927) libcrypto needs libdl when compile native code

2015-05-06 Thread Xianyin Xin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xianyin Xin updated HADOOP-11927:
-
Attachment: HADOOP-11927-001.patch

Pre patch for review.

> libcrypto needs libdl when compile native code
> --
>
> Key: HADOOP-11927
> URL: https://issues.apache.org/jira/browse/HADOOP-11927
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
> Environment: SUSE Linux Enterprise Server 11 SP3  (x86_64)
>Reporter: Xianyin Xin
>Assignee: Xianyin Xin
> Attachments: HADOOP-11927-001.patch
>
>
> When compiling Hadoop with native support, we encounter the compile error 
> "undefined reference to `dlopen'" when linking libcrypto. We had better link 
> libdl explicitly in the CMakeLists of hadoop-pipes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11920) Refactor some codes for erasure coders

2015-05-06 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-11920:
---
Attachment: HADOOP-11920-HDFS-7285-02.patch

Attaching the same patch with branch name to run jenkins

> Refactor some codes for erasure coders
> --
>
> Key: HADOOP-11920
> URL: https://issues.apache.org/jira/browse/HADOOP-11920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11920-HDFS-7285-02.patch, HADOOP-11920-v1.patch, 
> HADOOP-11920-v2.patch
>
>
> While working on the native erasure coders and also HADOOP-11847, it was found 
> that in some cases it is better to refine the code a little.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11920) Refactor some codes for erasure coders

2015-05-06 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-11920:
---
Target Version/s: HDFS-7285
  Status: Patch Available  (was: Open)

> Refactor some codes for erasure coders
> --
>
> Key: HADOOP-11920
> URL: https://issues.apache.org/jira/browse/HADOOP-11920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11920-HDFS-7285-02.patch, HADOOP-11920-v1.patch, 
> HADOOP-11920-v2.patch
>
>
> While working on the native erasure coders and also HADOOP-11847, it was found 
> that in some cases it is better to refine the code a little.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11920) Refactor some codes for erasure coders

2015-05-06 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530146#comment-14530146
 ] 

Vinayakumar B commented on HADOOP-11920:


Latest patch looks good. 
One small nit:
1. In RawErasureDecoder, the javadoc should be updated for {{decode(ECChunk\[\] 
inputs, int\[\] erasedIndexes, ECChunk\[\] outputs)}} as well.

> Refactor some codes for erasure coders
> --
>
> Key: HADOOP-11920
> URL: https://issues.apache.org/jira/browse/HADOOP-11920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11920-HDFS-7285-02.patch, HADOOP-11920-v1.patch, 
> HADOOP-11920-v2.patch
>
>
> While working on the native erasure coders and also HADOOP-11847, it was found 
> that in some cases it is better to refine the code a little.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11920) Refactor some codes for erasure coders

2015-05-06 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530177#comment-14530177
 ] 

Kai Zheng commented on HADOOP-11920:


Thanks Vinay for the review and help. I will wait for the Jenkins building 
output to update the patch.

> Refactor some codes for erasure coders
> --
>
> Key: HADOOP-11920
> URL: https://issues.apache.org/jira/browse/HADOOP-11920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11920-HDFS-7285-02.patch, HADOOP-11920-v1.patch, 
> HADOOP-11920-v2.patch
>
>
> While working on the native erasure coders and also HADOOP-11847, it was found 
> that in some cases it is better to refine the code a little.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11847) Enhance raw coder allowing to read least required inputs in decoding

2015-05-06 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530193#comment-14530193
 ] 

Kai Zheng commented on HADOOP-11847:


Hi [~hitliuyi],

Thanks for your good thoughts about the decoder API. It has been refined as 
below. What do you think? Thanks.
{code}
  /**
   * Decode with inputs and erasedIndexes, generating outputs.
   * How to prepare the inputs:
   * 1. Create an array containing parity units + data units;
   * 2. Set null in the array locations specified via erasedIndexes to indicate
   *    they're erased and no data is to be read from them;
   * 3. Set null in the array locations for extra redundant items, as they're
   *    not necessary to read when decoding. For example in RS-6-3, if only 1
   *    unit is really erased, then we have 2 extra items as redundant. They
   *    can be set to null to indicate no data will be used from them.
   *
   * For an example using RS (6, 3), assume sources (d0, d1, d2, d3, d4, d5)
   * and parities (p0, p1, p2), with d2 erased. We can and may want to use only
   * 6 units like (d1, d3, d4, d5, p0, p2) to recover d2. We will have:
   * inputs = [p0, null(p1), p2, null(d0), d1, null(d2), d3, d4, d5]
   * erasedIndexes = [5] // index of d2 in the inputs array
   * outputs = [a-writable-buffer]
   *
   * @param inputs inputs to read data from
   * @param erasedIndexes indexes of erased units in the inputs array
   * @param outputs outputs to write into for data generated according to
   *                erasedIndexes
   */
  public void decode(ByteBuffer[] inputs, int[] erasedIndexes,
                     ByteBuffer[] outputs);
{code}
The impact from the caller's point of view:
The caller must provide input buffers, using null to indicate units that are 
erased or are not to be read;
The caller must provide erasedIndexes listing only the units that are really 
erased and are to be recovered.
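To make the javadoc example concrete, the caller-side preparation could look like the sketch below. This is hypothetical illustration code, not part of the patch: the class name is invented and the 64-byte buffer size is arbitrary.

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of preparing decode() arguments for RS(6,3) with d2
// erased, following the javadoc example above.
public class DecodeArgsSketch {

    // inputs = parity units first, then data units; null marks a unit that
    // is erased or deliberately not read (extra redundancy).
    static ByteBuffer[] prepareInputs() {
        ByteBuffer p0 = ByteBuffer.allocate(64), p2 = ByteBuffer.allocate(64);
        ByteBuffer d1 = ByteBuffer.allocate(64), d3 = ByteBuffer.allocate(64);
        ByteBuffer d4 = ByteBuffer.allocate(64), d5 = ByteBuffer.allocate(64);
        return new ByteBuffer[] { p0, null, p2, null, d1, null, d3, d4, d5 };
    }

    // Only the really-erased unit appears here: index 5 is d2 in the inputs
    // array. The other null slots (p1, d0) are unread redundancy, not listed.
    static int[] erasedIndexes() {
        return new int[] { 5 };
    }

    // One writable buffer per erased unit to be recovered.
    static ByteBuffer[] prepareOutputs() {
        return new ByteBuffer[] { ByteBuffer.allocate(64) };
    }
}
```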

> Enhance raw coder allowing to read least required inputs in decoding
> 
>
> Key: HADOOP-11847
> URL: https://issues.apache.org/jira/browse/HADOOP-11847
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11847-HDFS-7285-v3.patch, 
> HADOOP-11847-HDFS-7285-v4.patch, HADOOP-11847-v1.patch, HADOOP-11847-v2.patch
>
>
> This is to enhance the raw erasure coder to allow reading only the least 
> required inputs while decoding. It will also refine and document the relevant 
> APIs for better understanding and usage. Using the least required inputs may 
> add computational overhead but will possibly outperform overall, since less 
> network traffic and disk IO are involved.
> This was already planned, but we were reminded of it by [~zhz]'s question 
> raised in HDFS-7678, also copied here:
> bq.Kai Zheng I have a question about decoding: in a (6+3) schema, if block #2 
> is missing, and I want to repair it with blocks 0, 1, 3, 4, 5, 8, how should 
> I construct the inputs to RawErasureDecoder#decode?
> With this work, hopefully the answer to above question would be obvious.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for the native support

2015-05-06 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11887:
---
Attachment: HADOOP-11887-v2.patch

Updated patch according to review comments.

> Introduce Intel ISA-L erasure coding library for the native support
> ---
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11887-v1.patch, HADOOP-11887-v2.patch
>
>
> This is to introduce Intel ISA-L erasure coding library for the native 
> support, via dynamic loading mechanism (dynamic module, like *.so in *nix and 
> *.dll on Windows).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for the native support

2015-05-06 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530207#comment-14530207
 ] 

Kai Zheng commented on HADOOP-11887:


Hi [~cmccabe],
I updated the patch according to your review comments. It avoids changes to 
{{org_apache_hadoop.h}} by providing its own versions in {{erasure_code.c}}. I 
would really love to keep the tests from the ISA-L library to verify the 
integration before moving on to the Java side. All your other points are 
carefully addressed. Would you help review one more time? Thanks!

> Introduce Intel ISA-L erasure coding library for the native support
> ---
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11887-v1.patch, HADOOP-11887-v2.patch
>
>
> This is to introduce Intel ISA-L erasure coding library for the native 
> support, via dynamic loading mechanism (dynamic module, like *.so in *nix and 
> *.dll on Windows).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-10356) Corrections in winutils/chmod.c

2015-05-06 Thread Kiran Kumar M R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar M R reassigned HADOOP-10356:


Assignee: Kiran Kumar M R

> Corrections in winutils/chmod.c
> ---
>
> Key: HADOOP-10356
> URL: https://issues.apache.org/jira/browse/HADOOP-10356
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
> Environment: Windows
>Reporter: René Nyffenegger
>Assignee: Kiran Kumar M R
>Priority: Trivial
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10356.patch
>
>
> There are two small things in winutils/chmod.c:
> pathName should be a pointer to a constant WSTR: the declaration
>   LPWSTR pathName = NULL;
> seems to be wrong;
>   LPCWSTR pathName = NULL;
> should be used instead.
> I also believe the fragment
>   switch (c)
>   {
>   case NULL:
> to be wrong, as pointers are not permitted as case values.


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11923) test-patch whitespace checker doesn't flag new files

2015-05-06 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530244#comment-14530244
 ] 

Steve Loughran commented on HADOOP-11923:
-

I don't see why we should care about whitespace in a patch; we can just lay down 
that whoever commits the patch needs to merge it in with

{code}
git apply --whitespace=fix
{code}

and let git sort things out. 

> test-patch whitespace checker doesn't flag new files
> 
>
> Key: HADOOP-11923
> URL: https://issues.apache.org/jira/browse/HADOOP-11923
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Busbey
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11923.patch
>
>
> The whitespace plugin for test-patch only examines new files. So when a patch 
> comes in with trailing whitespace on new files it doesn't flag things as a 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11927) libcrypto needs libdl when compile native code

2015-05-06 Thread Dmitry Sivachenko (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530258#comment-14530258
 ] 

Dmitry Sivachenko commented on HADOOP-11927:


Don't forget other OSes (like FreeBSD) which do not have libdl at all (dlopen() 
is in libc).
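One portable way to handle both cases is CMake's built-in {{CMAKE_DL_LIBS}} variable, which expands to "dl" on Linux and to nothing on platforms such as FreeBSD where dlopen() lives in libc. The fragment below is a hypothetical sketch, not the actual hadoop-pipes CMakeLists; the target name and the OpenSSL variable are illustrative.

{code}
# Hypothetical CMakeLists fragment.
# CMAKE_DL_LIBS is "dl" on Linux and empty on FreeBSD (dlopen is in libc),
# so the same line links correctly on both.
target_link_libraries(hadooppipes
    ${OPENSSL_LIBRARIES}
    ${CMAKE_DL_LIBS})
{code}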

> libcrypto needs libdl when compile native code
> --
>
> Key: HADOOP-11927
> URL: https://issues.apache.org/jira/browse/HADOOP-11927
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
> Environment: SUSE Linux Enterprise Server 11 SP3  (x86_64)
>Reporter: Xianyin Xin
>Assignee: Xianyin Xin
> Attachments: HADOOP-11927-001.patch
>
>
> When compiling Hadoop with native support, we encounter the compile error 
> "undefined reference to `dlopen'" when linking libcrypto. We had better link 
> libdl explicitly in the CMakeLists of hadoop-pipes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11920) Refactor some codes for erasure coders

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530270#comment-14530270
 ] 

Hadoop QA commented on HADOOP-11920:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 41s | Pre-patch HDFS-7285 compilation 
is healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 31s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 40s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   1m  8s | The applied patch generated 
15 release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  3s | The applied patch generated  5 
new checkstyle issues (total was 112, now 112). |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 40s | The patch does not introduce 
any new Findbugs (version 2.0.3) warnings. |
| {color:green}+1{color} | common tests |  23m 43s | Tests passed in 
hadoop-common. |
| | |  61m 38s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12730778/HADOOP-11920-HDFS-7285-02.patch
 |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | HDFS-7285 / 850d7fa |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6503/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/6503/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6503/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6503/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6503/console |


This message was automatically generated.

> Refactor some codes for erasure coders
> --
>
> Key: HADOOP-11920
> URL: https://issues.apache.org/jira/browse/HADOOP-11920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11920-HDFS-7285-02.patch, HADOOP-11920-v1.patch, 
> HADOOP-11920-v2.patch
>
>
> While working on the native erasure coders and also HADOOP-11847, it was found 
> that in some cases it is better to refine the code a little.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-6419) Change RPC layer to support SASL based mutual authentication

2015-05-06 Thread Liang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Chen reassigned HADOOP-6419:
--

Assignee: Liang Chen  (was: Kan Zhang)

> Change RPC layer to support SASL based mutual authentication
> 
>
> Key: HADOOP-6419
> URL: https://issues.apache.org/jira/browse/HADOOP-6419
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Kan Zhang
>Assignee: Liang Chen
> Fix For: 0.21.0
>
> Attachments: 6419-bp20-jobsubmitprotocol.patch, 
> HADOOP-6419-0.20-15.patch, c6419-26.patch, c6419-39.patch, c6419-45.patch, 
> c6419-66.patch, c6419-67.patch, c6419-69.patch, c6419-70.patch, 
> c6419-72.patch, c6419-73.patch, c6419-75.patch
>
>
> The authentication mechanism to use will be SASL DIGEST-MD5 (see RFC- and 
> RFC-2831) or SASL GSSAPI/Kerberos. Since J2SE 5, Sun provides a SASL 
> implementation by default. Both our delegation token and job token can be 
> used as credentials for SASL DIGEST-MD5 authentication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-6419) Change RPC layer to support SASL based mutual authentication

2015-05-06 Thread Liang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Chen updated HADOOP-6419:
---
Assignee: Kan Zhang  (was: Liang Chen)

> Change RPC layer to support SASL based mutual authentication
> 
>
> Key: HADOOP-6419
> URL: https://issues.apache.org/jira/browse/HADOOP-6419
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Kan Zhang
>Assignee: Kan Zhang
> Fix For: 0.21.0
>
> Attachments: 6419-bp20-jobsubmitprotocol.patch, 
> HADOOP-6419-0.20-15.patch, c6419-26.patch, c6419-39.patch, c6419-45.patch, 
> c6419-66.patch, c6419-67.patch, c6419-69.patch, c6419-70.patch, 
> c6419-72.patch, c6419-73.patch, c6419-75.patch
>
>
> The authentication mechanism to use will be SASL DIGEST-MD5 (see RFC- and 
> RFC-2831) or SASL GSSAPI/Kerberos. Since J2SE 5, Sun provides a SASL 
> implementation by default. Both our delegation token and job token can be 
> used as credentials for SASL DIGEST-MD5 authentication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10356) Corrections in winutils/chmod.c

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530352#comment-14530352
 ] 

Hadoop QA commented on HADOOP-10356:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   5m  8s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 27s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 19s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 31s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | common tests |  22m 20s | Tests passed in 
hadoop-common. |
| | |  37m 33s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12630265/HADOOP-10356.patch |
| Optional Tests | javac unit |
| git revision | trunk / a583a40 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6504/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6504/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6504/console |


This message was automatically generated.

> Corrections in winutils/chmod.c
> ---
>
> Key: HADOOP-10356
> URL: https://issues.apache.org/jira/browse/HADOOP-10356
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
> Environment: Windows
>Reporter: René Nyffenegger
>Assignee: Kiran Kumar M R
>Priority: Trivial
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10356.patch
>
>
> There are two small things in winutils/chmod.c:
> pathName should be a pointer to a constant WSTR: the declaration
>   LPWSTR pathName = NULL;
> seems to be wrong;
>   LPCWSTR pathName = NULL;
> should be used instead.
> I also believe the fragment
>   switch (c)
>   {
>   case NULL:
> to be wrong, as pointers are not permitted as case values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11921) Enhance tests for erasure coders

2015-05-06 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530365#comment-14530365
 ] 

Uma Maheswara Rao G commented on HADOOP-11921:
--

Hi Kai,

One small nit:

{code}
 // Erase the copied sources
ECChunk[] erasedChunks = eraseAndReturnChunks(clonedDataChunks);
{code}
Actually, the returned chunks are not erased chunks, right? They are the cloned 
input chunks instead. Am I right? If so, the variable name should be changed to 
backupChunks. Maybe, for better readability, clone them separately and have 
eraseAndReturnChunks only erase the chunks?

Other than this, the changes look good to me.
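The suggested separation of cloning from erasing could look roughly like this. It is a hypothetical sketch with a simplified stand-in chunk type, since the real ECChunk test utilities are not shown here; names like {{cloneChunks}} and {{eraseChunks}} are illustrative.

{code}
import java.util.Arrays;

// Simplified stand-in for ECChunk; the real test utilities differ.
class Chunk {
    byte[] data;
    Chunk(byte[] data) { this.data = data; }
    Chunk copy() { return new Chunk(Arrays.copyOf(data, data.length)); }
}

class ChunkTestUtil {
    // Step 1: back up the inputs so they can be compared after decoding.
    static Chunk[] cloneChunks(Chunk[] chunks) {
        Chunk[] backup = new Chunk[chunks.length];
        for (int i = 0; i < chunks.length; i++) {
            backup[i] = chunks[i].copy();
        }
        return backup;
    }

    // Step 2: erase in place; the method does exactly what its name says.
    static void eraseChunks(Chunk[] chunks, int... indexes) {
        for (int i : indexes) {
            chunks[i] = null;
        }
    }
}
{code}

With this split, each helper's name matches its behavior, which was the point of the nit above.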

> Enhance tests for erasure coders
> 
>
> Key: HADOOP-11921
> URL: https://issues.apache.org/jira/browse/HADOOP-11921
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11921-v1.patch
>
>
> While working on the native coders, it was found better to enhance the tests 
> for erasure coders to:
> * Test whether erasure coders can be repeatedly reused;
> * Test whether erasure coders can be called with two buffer types (the 
> byte-array version and the direct-ByteBuffer version).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11912) Extra configuration key used in TraceUtils should respect prefix

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530457#comment-14530457
 ] 

Hudson commented on HADOOP-11912:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #186 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/186/])
HADOOP-11912. Extra configuration key used in TraceUtils should respect prefix 
(Masatake Iwasaki via Colin P. McCabe) (cmccabe: rev 
90b384564875bb353224630e501772b46d4ca9c5)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/TraceUtils.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/tracing/TestTraceUtils.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Extra configuration key used in TraceUtils should respect prefix
> 
>
> Key: HADOOP-11912
> URL: https://issues.apache.org/jira/browse/HADOOP-11912
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-11912.001.patch
>
>
> HDFS-8213 added prefix handling to configuration used by tracing but extra 
> key value pairs in configuration returned by TraceUtils#wrapHadoopConf does 
> not respect this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11917) test-patch.sh should work with ${BASEDIR}/patchprocess setups

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530462#comment-14530462
 ] 

Hudson commented on HADOOP-11917:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #186 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/186/])
HADOOP-11917. test-patch.sh should work with ${BASEDIR}/patchprocess setups 
(aw) (aw: rev d33419ae01c528073f9f00ef1aadf153fed41222)
* pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* .gitignore
* dev-support/test-patch.sh


> test-patch.sh should work with ${BASEDIR}/patchprocess setups
> -
>
> Key: HADOOP-11917
> URL: https://issues.apache.org/jira/browse/HADOOP-11917
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: HADOOP-11917.01.patch, HADOOP-11917.patch
>
>
> There are a bunch of problems with this kind of setup; configuration and code 
> changes in test-patch.sh are required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11926) test-patch.sh mv does wrong math

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530461#comment-14530461
 ] 

Hudson commented on HADOOP-11926:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #186 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/186/])
HADOOP-11926. test-patch.sh mv does wrong math (aw) (aw: rev 
4402e4c633808556d49854df45683688b6a9ce84)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> test-patch.sh mv does wrong math
> 
>
> Key: HADOOP-11926
> URL: https://issues.apache.org/jira/browse/HADOOP-11926
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 2.8.0
>
> Attachments: HADOOP-11928.00.patch
>
>
> cleanup_and_exit uses the wrong result code check and fails to mv the 
> patchdir when it should, and mv's it when it shouldn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11904) test-patch.sh goes into an infinite loop on non-maven builds

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530456#comment-14530456
 ] 

Hudson commented on HADOOP-11904:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #186 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/186/])
HADOOP-11904. test-patch.sh goes into an infinite loop on non-maven builds (aw) 
(aw: rev 3ff91e9e9302d94b0d18cccebd02d3815c06ce90)
* dev-support/test-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> test-patch.sh goes into an infinite loop on non-maven builds
> 
>
> Key: HADOOP-11904
> URL: https://issues.apache.org/jira/browse/HADOOP-11904
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-11904.patch
>
>
> If, post HADOOP-11746, test-patch is given a non-maven-based build, it goes 
> into an infinite loop looking for the module's pom.xml. There should be an 
> escape clause after switching branches to check whether the branch is maven 
> based. If it is not, then test-patch should either abort or re-exec using that 
> version's test-patch script.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11120) hadoop fs -rmr gives wrong advice

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530489#comment-14530489
 ] 

Hudson commented on HADOOP-11120:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #919 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/919/])
HADOOP-11120. hadoop fs -rmr gives wrong advice. Contributed by Juliet 
Houghland. (wang: rev 05adc76ace6bf28e4a3ff874044c2c41e3eba63f)
* hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java


> hadoop fs -rmr gives wrong advice
> -
>
> Key: HADOOP-11120
> URL: https://issues.apache.org/jira/browse/HADOOP-11120
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Allen Wittenauer
>Assignee: Juliet Hougland
> Fix For: 2.8.0
>
> Attachments: HADOOP-11120.patch, Screen Shot 2014-09-24 at 3.02.21 
> PM.png
>
>
> Typing bin/hadoop fs -rmr /a?
> gives the output:
> rmr: DEPRECATED: Please use 'rm -r' instead.
> Typing bin/hadoop fs rm -r /a?
> gives the output:
> rm: Unknown command



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11917) test-patch.sh should work with ${BASEDIR}/patchprocess setups

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530495#comment-14530495
 ] 

Hudson commented on HADOOP-11917:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #919 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/919/])
HADOOP-11917. test-patch.sh should work with ${BASEDIR}/patchprocess setups 
(aw) (aw: rev d33419ae01c528073f9f00ef1aadf153fed41222)
* .gitignore
* pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> test-patch.sh should work with ${BASEDIR}/patchprocess setups
> -
>
> Key: HADOOP-11917
> URL: https://issues.apache.org/jira/browse/HADOOP-11917
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: HADOOP-11917.01.patch, HADOOP-11917.patch
>
>
> There are a bunch of problems with this kind of setup; configuration and code 
> changes in test-patch.sh are required.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11912) Extra configuration key used in TraceUtils should respect prefix

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530491#comment-14530491
 ] 

Hudson commented on HADOOP-11912:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #919 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/919/])
HADOOP-11912. Extra configuration key used in TraceUtils should respect prefix 
(Masatake Iwasaki via Colin P. McCabe) (cmccabe: rev 
90b384564875bb353224630e501772b46d4ca9c5)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/tracing/TestTraceUtils.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/TraceUtils.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Extra configuration key used in TraceUtils should respect prefix
> 
>
> Key: HADOOP-11912
> URL: https://issues.apache.org/jira/browse/HADOOP-11912
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-11912.001.patch
>
>
> HDFS-8213 added prefix handling to the configuration used by tracing, but the 
> extra key-value pairs in the configuration returned by TraceUtils#wrapHadoopConf 
> do not respect this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11911) test-patch should allow configuration of default branch

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530493#comment-14530493
 ] 

Hudson commented on HADOOP-11911:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #919 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/919/])
HADOOP-11911. test-patch should allow configuration of default branch (Sean 
Busbey via aw) (aw: rev 9b01f81eb874cd63e6b9ae2d09d94fc8bf4fcd7d)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> test-patch should allow configuration of default branch
> ---
>
> Key: HADOOP-11911
> URL: https://issues.apache.org/jira/browse/HADOOP-11911
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-11911.1.patch, HADOOP-11911.2.patch, 
> HADOOP-11911.3.patch, HADOOP-11911.4.patch
>
>
> Right now test-patch.sh forces a default branch of 'trunk'. It would be better 
> to make this configurable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11926) test-patch.sh mv does wrong math

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530494#comment-14530494
 ] 

Hudson commented on HADOOP-11926:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #919 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/919/])
HADOOP-11926. test-patch.sh mv does wrong math (aw) (aw: rev 
4402e4c633808556d49854df45683688b6a9ce84)
* dev-support/test-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> test-patch.sh mv does wrong math
> 
>
> Key: HADOOP-11926
> URL: https://issues.apache.org/jira/browse/HADOOP-11926
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 2.8.0
>
> Attachments: HADOOP-11928.00.patch
>
>
> cleanup_and_exit uses the wrong result code check and fails to mv the 
> patchdir when it should, and mv's it when it shouldn't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11904) test-patch.sh goes into an infinite loop on non-maven builds

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530490#comment-14530490
 ] 

Hudson commented on HADOOP-11904:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #919 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/919/])
HADOOP-11904. test-patch.sh goes into an infinite loop on non-maven builds (aw) 
(aw: rev 3ff91e9e9302d94b0d18cccebd02d3815c06ce90)
* dev-support/test-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> test-patch.sh goes into an infinite loop on non-maven builds
> 
>
> Key: HADOOP-11904
> URL: https://issues.apache.org/jira/browse/HADOOP-11904
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-11904.patch
>
>
> If, post HADOOP-11746, test-patch is given a non-maven-based build, it goes 
> into an infinite loop looking for the module's pom.xml. There should be an 
> escape clause after switching branches to check whether the branch is maven 
> based. If it is not, then test-patch should either abort or re-exec using that 
> version's test-patch script.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11921) Enhance tests for erasure coders

2015-05-06 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11921:
---
Attachment: HADOOP-11921-v2.patch

Updated the patch according to review comments.

> Enhance tests for erasure coders
> 
>
> Key: HADOOP-11921
> URL: https://issues.apache.org/jira/browse/HADOOP-11921
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11921-v1.patch, HADOOP-11921-v2.patch
>
>
> While working on the native coders, it was found worthwhile to enhance the 
> tests for erasure coders to:
> * Test whether erasure coders can be repeatedly reused;
> * Test whether erasure coders can be called with two buffer types (the 
> byte-array version and the direct-ByteBuffer version).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11921) Enhance tests for erasure coders

2015-05-06 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530538#comment-14530538
 ] 

Kai Zheng commented on HADOOP-11921:


Hi Uma,

Thanks for the review and good comments. The patch was updated as you 
suggested. I changed {{eraseAndReturnChunks}} to {{backupAndEraseChunks}}, 
hoping it could be better. Would you help review one more time and see if it 
works? Thanks.
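The two test dimensions being discussed (repeated reuse of one coder instance, and agreement between the byte-array and direct-ByteBuffer call paths) can be sketched with a toy coder. The {{XorCoder}} below is illustrative only, not the actual Hadoop erasure-coder API:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

// Toy stand-in for an erasure coder, offering the same operation in a
// byte-array flavor and a direct-ByteBuffer flavor (illustrative names).
public class CoderTestSketch {
    static class XorCoder {
        byte[] encode(byte[] a, byte[] b) {              // byte-array flavor
            byte[] out = new byte[a.length];
            for (int i = 0; i < out.length; i++) out[i] = (byte) (a[i] ^ b[i]);
            return out;
        }
        byte[] encode(ByteBuffer a, ByteBuffer b) {      // direct-buffer flavor
            byte[] out = new byte[a.remaining()];
            for (int i = 0; i < out.length; i++) out[i] = (byte) (a.get() ^ b.get());
            return out;
        }
    }

    static ByteBuffer direct(byte[] data) {
        ByteBuffer buf = ByteBuffer.allocateDirect(data.length).put(data);
        buf.flip();
        return buf;
    }

    public static void main(String[] args) {
        XorCoder coder = new XorCoder();
        byte[] a = {1, 2, 3}, b = {4, 5, 6};
        byte[] first = coder.encode(a, b);
        // Dimension 1: the same instance must stay correct across repeated calls.
        for (int round = 0; round < 3; round++) {
            if (!Arrays.equals(first, coder.encode(a, b)))
                throw new AssertionError("coder not reusable");
        }
        // Dimension 2: the two buffer flavors must agree on the same inputs.
        if (!Arrays.equals(first, coder.encode(direct(a), direct(b))))
            throw new AssertionError("buffer flavors disagree");
        System.out.println("ok");
    }
}
```

A real test would drive the actual coder implementation through both paths and over several rounds, exactly as the two bullets in the issue describe.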

> Enhance tests for erasure coders
> 
>
> Key: HADOOP-11921
> URL: https://issues.apache.org/jira/browse/HADOOP-11921
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11921-v1.patch, HADOOP-11921-v2.patch
>
>
> While working on the native coders, it was found worthwhile to enhance the 
> tests for erasure coders to:
> * Test whether erasure coders can be repeatedly reused;
> * Test whether erasure coders can be called with two buffer types (the 
> byte-array version and the direct-ByteBuffer version).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11921) Enhance tests for erasure coders

2015-05-06 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530564#comment-14530564
 ] 

Uma Maheswara Rao G commented on HADOOP-11921:
--

+1, latest patch looks good to me.

> Enhance tests for erasure coders
> 
>
> Key: HADOOP-11921
> URL: https://issues.apache.org/jira/browse/HADOOP-11921
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11921-v1.patch, HADOOP-11921-v2.patch
>
>
> While working on the native coders, it was found worthwhile to enhance the 
> tests for erasure coders to:
> * Test whether erasure coders can be repeatedly reused;
> * Test whether erasure coders can be called with two buffer types (the 
> byte-array version and the direct-ByteBuffer version).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10924) LocalDistributedCacheManager for concurrent sqoop processes fails to create unique directories

2015-05-06 Thread William Watson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530567#comment-14530567
 ] 

William Watson commented on HADOOP-10924:
-

Sorry, trying to get back to this, just have been dealing with production 
issues.

> LocalDistributedCacheManager for concurrent sqoop processes fails to create 
> unique directories
> --
>
> Key: HADOOP-10924
> URL: https://issues.apache.org/jira/browse/HADOOP-10924
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: William Watson
>Assignee: William Watson
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10924.02.patch, 
> HADOOP-10924.03.jobid-plus-uuid.patch
>
>
> Kicking off many sqoop processes in different threads results in:
> {code}
> 2014-08-01 13:47:24 -0400:  INFO - 14/08/01 13:47:22 ERROR tool.ImportTool: 
> Encountered IOException running import job: java.io.IOException: 
> java.util.concurrent.ExecutionException: java.io.IOException: Rename cannot 
> overwrite non empty destination directory 
> /tmp/hadoop-hadoop/mapred/local/1406915233073
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:149)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.(LocalJobRunner.java:163)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapred.LocalJobRunner.submitJob(LocalJobRunner.java:731)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> java.security.AccessController.doPrivileged(Native Method)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> javax.security.auth.Subject.doAs(Subject.java:415)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:186)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:159)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:239)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.manager.SqlManager.importQuery(SqlManager.java:645)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:415)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.tool.ImportTool.run(ImportTool.java:502)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.Sqoop.run(Sqoop.java:145)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
> 2014-08-01 13:47:24 -0400:  INFO -at 
> org.apache.sqoop.Sqoop.main(Sqoop.java:238)
> {code}
> This happens when two are kicked off in the same second. The issue is the 
> following lines of code in the org.apache.hadoop.mapred.LocalDistributedCacheManager class: 
> {code}
> // Generating unique numbers for FSDownload.
> AtomicLong uniqueNumberGenerator =
>new AtomicLong(System.currentTimeMillis());
> {code}
> and 
> {code}
> Long.toString(uniqueNumberGenerator.incrementAndGet())),
> {code}
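The collision described above can be reproduced in miniature. This is a hedged sketch (class and method names are illustrative, not the actual LocalDistributedCacheManager code): two generators seeded with the same millisecond start time yield the same "unique" name, while a UUID suffix keeps names distinct.

```java
import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

public class UniquePathDemo {
    // Mimics the problematic scheme: each process seeds its generator with
    // its own start time, so two processes started in the same millisecond
    // produce the same "unique" directory name.
    static String timestampName(long startMillis) {
        AtomicLong uniqueNumberGenerator = new AtomicLong(startMillis);
        return Long.toString(uniqueNumberGenerator.incrementAndGet());
    }

    // A collision-resistant alternative: suffix the name with a random UUID.
    static String uuidName(long startMillis) {
        return startMillis + "-" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        long sameInstant = 1406915233073L; // timestamp from the reported failure
        // Two "processes" starting at the same instant collide:
        System.out.println(timestampName(sameInstant).equals(timestampName(sameInstant))); // true
        // UUID-suffixed names stay distinct:
        System.out.println(uuidName(sameInstant).equals(uuidName(sameInstant))); // false
    }
}
```

The AtomicLong only guarantees uniqueness within one JVM; across concurrent sqoop processes the seed itself must differ, which is what the jobid-plus-uuid patch name suggests.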



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11920) Refactor some codes for erasure coders

2015-05-06 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11920:
---
Attachment: HADOOP-11920-v3.patch

Updated the patch according to review comments and also addressed the issues 
found by Jenkins. 

> Refactor some codes for erasure coders
> --
>
> Key: HADOOP-11920
> URL: https://issues.apache.org/jira/browse/HADOOP-11920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11920-HDFS-7285-02.patch, HADOOP-11920-v1.patch, 
> HADOOP-11920-v2.patch, HADOOP-11920-v3.patch
>
>
> While working on the native erasure coders and also HADOOP-11847, it was found 
> better in some cases to refine the code a little.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11927) libcrypto needs libdl when compile native code

2015-05-06 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530589#comment-14530589
 ] 

Chris Nauroth commented on HADOOP-11927:


See HADOOP-8811 for a similar bug that was fixed in the hadoop-common native 
build.

> libcrypto needs libdl when compile native code
> --
>
> Key: HADOOP-11927
> URL: https://issues.apache.org/jira/browse/HADOOP-11927
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
> Environment: SUSE Linux Enterprise Server 11 SP3  (x86_64)
>Reporter: Xianyin Xin
>Assignee: Xianyin Xin
> Attachments: HADOOP-11927-001.patch
>
>
> When compiling hadoop with native support, we encounter the compile error 
> "undefined reference to `dlopen'" when linking libcrypto. We'd better link 
> libdl explicitly in the CMakeLists of hadoop-pipes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11927) libcrypto needs libdl when compile native code

2015-05-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11927:
---
Component/s: native
 build

> libcrypto needs libdl when compile native code
> --
>
> Key: HADOOP-11927
> URL: https://issues.apache.org/jira/browse/HADOOP-11927
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, native, tools
> Environment: SUSE Linux Enterprise Server 11 SP3  (x86_64)
>Reporter: Xianyin Xin
>Assignee: Xianyin Xin
> Attachments: HADOOP-11927-001.patch
>
>
> When compiling hadoop with native support, we encounter the compile error 
> "undefined reference to `dlopen'" when linking libcrypto. We'd better link 
> libdl explicitly in the CMakeLists of hadoop-pipes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11921) Enhance tests for erasure coders

2015-05-06 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530603#comment-14530603
 ] 

Kai Zheng commented on HADOOP-11921:


I've committed to the branch. Thanks Uma for the review!

> Enhance tests for erasure coders
> 
>
> Key: HADOOP-11921
> URL: https://issues.apache.org/jira/browse/HADOOP-11921
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11921-v1.patch, HADOOP-11921-v2.patch
>
>
> While working on native coders, it was found better to enhance the tests for 
> erasure coders to:
> * Test if erasure coders can be repeatedly reused;
> * Test if erasure coders can be called with two buffer types (bytes array 
> version and direct bytebuffer version).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11921) Enhance tests for erasure coders

2015-05-06 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng resolved HADOOP-11921.

  Resolution: Fixed
Target Version/s: HDFS-7285
Hadoop Flags: Reviewed

> Enhance tests for erasure coders
> 
>
> Key: HADOOP-11921
> URL: https://issues.apache.org/jira/browse/HADOOP-11921
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11921-v1.patch, HADOOP-11921-v2.patch
>
>
> While working on native coders, it was found better to enhance the tests for 
> erasure coders to:
> * Test if erasure coders can be repeatedly reused;
> * Test if erasure coders can be called with two buffer types (bytes array 
> version and direct bytebuffer version).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10356) Corrections in winutils/chmod.c

2015-05-06 Thread Kiran Kumar M R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar M R updated HADOOP-10356:
-
Labels: BB2015-05-RFC  (was: BB2015-05-TBR)

> Corrections in winutils/chmod.c
> ---
>
> Key: HADOOP-10356
> URL: https://issues.apache.org/jira/browse/HADOOP-10356
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
> Environment: Windows
>Reporter: René Nyffenegger
>Assignee: Kiran Kumar M R
>Priority: Trivial
>  Labels: BB2015-05-RFC
> Attachments: HADOOP-10356.patch
>
>
> There are two small things in winutils/chmod.c:
>   pathName should be a pointer to a constant WSTR; the declaration
>   LPWSTR pathName = NULL;
>   seems to be wrong.
>   LPCWSTR pathName = NULL;
>   should be used instead.
>
>   I also believe the fragment
>   switch (c)
>   {
>   case NULL:
>   to be wrong, as pointers are not permitted as case values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10356) Corrections in winutils/chmod.c

2015-05-06 Thread Kiran Kumar M R (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kiran Kumar M R updated HADOOP-10356:
-
Target Version/s: 3.0.0, 2.8.0, 2.7.1

> Corrections in winutils/chmod.c
> ---
>
> Key: HADOOP-10356
> URL: https://issues.apache.org/jira/browse/HADOOP-10356
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
> Environment: Windows
>Reporter: René Nyffenegger
>Assignee: Kiran Kumar M R
>Priority: Trivial
>  Labels: BB2015-05-RFC
> Attachments: HADOOP-10356.patch
>
>
> There are two small things in winutils/chmod.c:
>   pathName should be a pointer to a constant WSTR; the declaration
>   LPWSTR pathName = NULL;
>   seems to be wrong.
>   LPCWSTR pathName = NULL;
>   should be used instead.
>
>   I also believe the fragment
>   switch (c)
>   {
>   case NULL:
>   to be wrong, as pointers are not permitted as case values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11928) Test-patch check for @author tags incorrectly flags removal of @author tags

2015-05-06 Thread Sean Busbey (JIRA)
Sean Busbey created HADOOP-11928:


 Summary: Test-patch check for @author tags incorrectly flags 
removal of @author tags
 Key: HADOOP-11928
 URL: https://issues.apache.org/jira/browse/HADOOP-11928
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sean Busbey


Right now the "has @author tags" check incorrectly flags the removal of an 
@author tag as if one were being added.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10356) Corrections in winutils/chmod.c

2015-05-06 Thread Kiran Kumar M R (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530637#comment-14530637
 ] 

Kiran Kumar M R commented on HADOOP-10356:
--

Reviewed and verified the patch.
{{case NULL:}} is not correct: NULL is a {{void *}} pointer constant. The VC++ 
compiler somehow allows it, but it is usually not allowed in gcc, so changing 
it to the NUL terminator {{'\0'}} is the correct fix.

The existing patch applies on both trunk and branch-2; no new patch is needed.
Compiled and tested the patch on Windows; it's working fine, and no new 
compilation warnings were found.

This can be committed.


> Corrections in winutils/chmod.c
> ---
>
> Key: HADOOP-10356
> URL: https://issues.apache.org/jira/browse/HADOOP-10356
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
> Environment: Windows
>Reporter: René Nyffenegger
>Assignee: Kiran Kumar M R
>Priority: Trivial
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10356.patch
>
>
> There are two small things in winutils/chmod.c:
>   pathName should be pointer to a constant WSTR
>   the declartion
>   LPWSTR pathName = NULL;
>   seems to be wrong.
> LPCWSTR pathName = NULL;
>should b used instead.
>   
>I believe the fragment
>   switch (c)
>   {
>   case NULL:
>to be wrong as pointers are not permitted as
>case values.
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11906) test-patch.sh should use 'file' command for patch determinism

2015-05-06 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530678#comment-14530678
 ] 

Sean Busbey commented on HADOOP-11906:
--

Let's hope 'file' on jenkins does better than the one on OS X 10.8

{code}
$ file -version
file-5.04
magic file from /usr/share/file/magic
$ find ../patches/ -type f -exec file {} \; | cut -d: -f2 | sort | uniq -c | 
sort -n -r
 103  ASCII English text
  82  diff output text
  53  exported SGML document text
  50  ASCII C++ program text
  37  ASCII Java program text
  27  RCS/CVS diff output text
  23  HTML document text
  23  ASCII text
  10  ASCII English text, with very long lines
   7  UTF-8 Unicode English text
   3  ASCII C++ program text, with very long lines
   2  data
   2  PDF document, version 1.4
   2  ASCII Java program text, with very long lines
   1  UTF-8 Unicode English text, with very long lines
   1  Git bundle
   1  ASCII text, with very long lines
   1  ASCII C++ program text, with CRLF, CR, LF line terminators
{code}

> test-patch.sh should use 'file' command for patch determinism
> -
>
> Key: HADOOP-11906
> URL: https://issues.apache.org/jira/browse/HADOOP-11906
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Allen Wittenauer
>Assignee: Sean Busbey
>
> test-patch.sh currently restricts patches to the extension .patch.  It might 
> be useful to also check if the file command says it is a diff.  This would 
> allow us to determine if files that end in .txt are actually patches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11906) test-patch.sh should use 'file' command for patch determinism

2015-05-06 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530682#comment-14530682
 ] 

Sean Busbey commented on HADOOP-11906:
--

For comparison, if I use the heuristic "does the first line look like a patch?" 
I get 417 out of 428:

{code}
$ find ../patches/ -type f -exec head -n 1 {} \; | grep -E "^(From [a-z0-9]* 
Mon Sep 17 00:00:00 2001)|(diff .*)|(Index: .*)$" | wc -l
 417
$ find ../patches/ -type f -exec head -n 1 {} \; | grep -v -E "^(From [a-z0-9]* 
Mon Sep 17 00:00:00 2001)|(diff .*)|(Index: .*)$" | wc -l
  11
{code}
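The first-line heuristic above can be wrapped as a small reusable shell function (the function name is illustrative, not actual test-patch.sh code):

```shell
# Return success if the file's first line looks like a patch header
# (git format-patch, diff, or svn Index line), using the same regex as
# the counts above.
looks_like_patch() {
  head -n 1 "$1" | grep -q -E \
    '^(From [a-z0-9]* Mon Sep 17 00:00:00 2001)|(diff .*)|(Index: .*)$'
}

tmp=$(mktemp -d)
printf 'diff --git a/x b/x\n' > "$tmp/sample.patch"
printf 'just some notes\n' > "$tmp/notes.txt"

looks_like_patch "$tmp/sample.patch" && echo "patch"       # prints "patch"
looks_like_patch "$tmp/notes.txt" || echo "not a patch"    # prints "not a patch"
```

Combining this with the 'file' check would cover both the 417 first-line matches and most of the remaining 11 files.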

> test-patch.sh should use 'file' command for patch determinism
> -
>
> Key: HADOOP-11906
> URL: https://issues.apache.org/jira/browse/HADOOP-11906
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Allen Wittenauer
>Assignee: Sean Busbey
>
> test-patch.sh currently restricts patches to the extension .patch.  It might 
> be useful to also check if the file command says it is a diff.  This would 
> allow us to determine if files that end in .txt are actually patches.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11923) test-patch whitespace checker doesn't flag new files

2015-05-06 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530695#comment-14530695
 ] 

Sean Busbey commented on HADOOP-11923:
--

I think part of the advantage would be signaling to whoever commits that they 
need to use --whitespace=fix.

> test-patch whitespace checker doesn't flag new files
> 
>
> Key: HADOOP-11923
> URL: https://issues.apache.org/jira/browse/HADOOP-11923
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Busbey
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11923.patch
>
>
> The whitespace plugin for test-patch only examines existing files. So when a 
> patch comes in with trailing whitespace in new files, it doesn't flag the 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11920) Refactor some codes for erasure coders

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530714#comment-14530714
 ] 

Hadoop QA commented on HADOOP-11920:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12730847/HADOOP-11920-v3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / a583a40 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6505/console |


This message was automatically generated.

> Refactor some codes for erasure coders
> --
>
> Key: HADOOP-11920
> URL: https://issues.apache.org/jira/browse/HADOOP-11920
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11920-HDFS-7285-02.patch, HADOOP-11920-v1.patch, 
> HADOOP-11920-v2.patch, HADOOP-11920-v3.patch
>
>
> While working on the native erasure coders and also HADOOP-11847, it was found 
> better in some cases to refine the code a little.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11912) Extra configuration key used in TraceUtils should respect prefix

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530720#comment-14530720
 ] 

Hudson commented on HADOOP-11912:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2117 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2117/])
HADOOP-11912. Extra configuration key used in TraceUtils should respect prefix 
(Masatake Iwasaki via Colin P. McCabe) (cmccabe: rev 
90b384564875bb353224630e501772b46d4ca9c5)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/tracing/TestTraceUtils.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/TraceUtils.java


> Extra configuration key used in TraceUtils should respect prefix
> 
>
> Key: HADOOP-11912
> URL: https://issues.apache.org/jira/browse/HADOOP-11912
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-11912.001.patch
>
>
> HDFS-8213 added prefix handling to configuration used by tracing but extra 
> key value pairs in configuration returned by TraceUtils#wrapHadoopConf does 
> not respect this.





[jira] [Commented] (HADOOP-11917) test-patch.sh should work with ${BASEDIR}/patchprocess setups

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530724#comment-14530724
 ] 

Hudson commented on HADOOP-11917:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2117 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2117/])
HADOOP-11917. test-patch.sh should work with ${BASEDIR}/patchprocess setups 
(aw) (aw: rev d33419ae01c528073f9f00ef1aadf153fed41222)
* dev-support/test-patch.sh
* pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* .gitignore


> test-patch.sh should work with ${BASEDIR}/patchprocess setups
> -
>
> Key: HADOOP-11917
> URL: https://issues.apache.org/jira/browse/HADOOP-11917
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: HADOOP-11917.01.patch, HADOOP-11917.patch
>
>
> There are a bunch of problems with this kind of setup: configuration and code 
> changes in test-patch.sh required.





[jira] [Commented] (HADOOP-11926) test-patch.sh mv does wrong math

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530723#comment-14530723
 ] 

Hudson commented on HADOOP-11926:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2117 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2117/])
HADOOP-11926. test-patch.sh mv does wrong math (aw) (aw: rev 
4402e4c633808556d49854df45683688b6a9ce84)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> test-patch.sh mv does wrong math
> 
>
> Key: HADOOP-11926
> URL: https://issues.apache.org/jira/browse/HADOOP-11926
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 2.8.0
>
> Attachments: HADOOP-11928.00.patch
>
>
> cleanup_and_exit uses the wrong result code check and fails to mv the 
> patchdir when it should, and mv's it when it shouldn't.





[jira] [Commented] (HADOOP-11904) test-patch.sh goes into an infinite loop on non-maven builds

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530740#comment-14530740
 ] 

Hudson commented on HADOOP-11904:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #176 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/176/])
HADOOP-11904. test-patch.sh goes into an infinite loop on non-maven builds (aw) 
(aw: rev 3ff91e9e9302d94b0d18cccebd02d3815c06ce90)
* dev-support/test-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> test-patch.sh goes into an infinite loop on non-maven builds
> 
>
> Key: HADOOP-11904
> URL: https://issues.apache.org/jira/browse/HADOOP-11904
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-11904.patch
>
>
> If post HADOOP-11746 test patch is given a non-maven-based build, it goes 
> into an infinite loop looking for modules pom.xml.  There should be an escape 
> clause after switching branches to see if it is maven based. If it is not 
> maven based, then test-patch should either abort or re-exec using that 
> version's test-patch script.





[jira] [Commented] (HADOOP-11917) test-patch.sh should work with ${BASEDIR}/patchprocess setups

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530745#comment-14530745
 ] 

Hudson commented on HADOOP-11917:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #176 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/176/])
HADOOP-11917. test-patch.sh should work with ${BASEDIR}/patchprocess setups 
(aw) (aw: rev d33419ae01c528073f9f00ef1aadf153fed41222)
* .gitignore
* pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> test-patch.sh should work with ${BASEDIR}/patchprocess setups
> -
>
> Key: HADOOP-11917
> URL: https://issues.apache.org/jira/browse/HADOOP-11917
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: HADOOP-11917.01.patch, HADOOP-11917.patch
>
>
> There are a bunch of problems with this kind of setup: configuration and code 
> changes in test-patch.sh required.





[jira] [Commented] (HADOOP-11911) test-patch should allow configuration of default branch

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530743#comment-14530743
 ] 

Hudson commented on HADOOP-11911:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #176 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/176/])
HADOOP-11911. test-patch should allow configuration of default branch (Sean 
Busbey via aw) (aw: rev 9b01f81eb874cd63e6b9ae2d09d94fc8bf4fcd7d)
* dev-support/test-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> test-patch should allow configuration of default branch
> ---
>
> Key: HADOOP-11911
> URL: https://issues.apache.org/jira/browse/HADOOP-11911
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-11911.1.patch, HADOOP-11911.2.patch, 
> HADOOP-11911.3.patch, HADOOP-11911.4.patch
>
>
> right now test-patch.sh forces a default branch of 'trunk'. would be better 
> to allow it to be configurable.





[jira] [Commented] (HADOOP-11912) Extra configuration key used in TraceUtils should respect prefix

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530741#comment-14530741
 ] 

Hudson commented on HADOOP-11912:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #176 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/176/])
HADOOP-11912. Extra configuration key used in TraceUtils should respect prefix 
(Masatake Iwasaki via Colin P. McCabe) (cmccabe: rev 
90b384564875bb353224630e501772b46d4ca9c5)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/tracing/TestTraceUtils.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/TraceUtils.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Extra configuration key used in TraceUtils should respect prefix
> 
>
> Key: HADOOP-11912
> URL: https://issues.apache.org/jira/browse/HADOOP-11912
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-11912.001.patch
>
>
> HDFS-8213 added prefix handling to configuration used by tracing but extra 
> key value pairs in configuration returned by TraceUtils#wrapHadoopConf does 
> not respect this.





[jira] [Commented] (HADOOP-11926) test-patch.sh mv does wrong math

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530744#comment-14530744
 ] 

Hudson commented on HADOOP-11926:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #176 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/176/])
HADOOP-11926. test-patch.sh mv does wrong math (aw) (aw: rev 
4402e4c633808556d49854df45683688b6a9ce84)
* dev-support/test-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> test-patch.sh mv does wrong math
> 
>
> Key: HADOOP-11926
> URL: https://issues.apache.org/jira/browse/HADOOP-11926
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 2.8.0
>
> Attachments: HADOOP-11928.00.patch
>
>
> cleanup_and_exit uses the wrong result code check and fails to mv the 
> patchdir when it should, and mv's it when it shouldn't.





[jira] [Commented] (HADOOP-11120) hadoop fs -rmr gives wrong advice

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530739#comment-14530739
 ] 

Hudson commented on HADOOP-11120:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #176 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/176/])
HADOOP-11120. hadoop fs -rmr gives wrong advice. Contributed by Juliet 
Houghland. (wang: rev 05adc76ace6bf28e4a3ff874044c2c41e3eba63f)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/resources/testConf.xml


> hadoop fs -rmr gives wrong advice
> -
>
> Key: HADOOP-11120
> URL: https://issues.apache.org/jira/browse/HADOOP-11120
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Allen Wittenauer
>Assignee: Juliet Hougland
> Fix For: 2.8.0
>
> Attachments: HADOOP-11120.patch, Screen Shot 2014-09-24 at 3.02.21 
> PM.png
>
>
> Typing bin/hadoop fs -rmr /a?
> gives the output:
> rmr: DEPRECATED: Please use 'rm -r' instead.
> Typing bin/hadoop fs rm -r /a?
> gives the output:
> rm: Unknown command
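The mismatch above can be illustrated with a toy bash dispatcher (this is not 
Hadoop's actual FsShell code; the command table and messages are a hypothetical 
stand-in). Commands are only recognized with their leading dash, so advice that 
says "rm -r" (no dash) sends users to an unknown command; one plausible fix is 
advice that keeps the dash:

```shell
#!/usr/bin/env bash
# Toy stand-in for the FsShell dispatcher: subcommands need their leading
# dash, so the deprecation advice should say '-rm -r', not 'rm -r'.
fs() {
  case "$1" in
    -rmr) echo "rmr: DEPRECATED: Please use '-rm -r' instead." ;;
    -rm)  echo "Deleted ${@: -1}" ;;
    *)    echo "$1: Unknown command" ;;
  esac
}

fs -rmr /a   # deprecation notice, with advice that keeps the dash
fs rm -r /a  # 'rm' without a dash is not a recognized command
```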





[jira] [Commented] (HADOOP-11120) hadoop fs -rmr gives wrong advice

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530718#comment-14530718
 ] 

Hudson commented on HADOOP-11120:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2117 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2117/])
HADOOP-11120. hadoop fs -rmr gives wrong advice. Contributed by Juliet 
Houghland. (wang: rev 05adc76ace6bf28e4a3ff874044c2c41e3eba63f)
* hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> hadoop fs -rmr gives wrong advice
> -
>
> Key: HADOOP-11120
> URL: https://issues.apache.org/jira/browse/HADOOP-11120
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Allen Wittenauer
>Assignee: Juliet Hougland
> Fix For: 2.8.0
>
> Attachments: HADOOP-11120.patch, Screen Shot 2014-09-24 at 3.02.21 
> PM.png
>
>
> Typing bin/hadoop fs -rmr /a?
> gives the output:
> rmr: DEPRECATED: Please use 'rm -r' instead.
> Typing bin/hadoop fs rm -r /a?
> gives the output:
> rm: Unknown command





[jira] [Commented] (HADOOP-11904) test-patch.sh goes into an infinite loop on non-maven builds

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530719#comment-14530719
 ] 

Hudson commented on HADOOP-11904:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2117 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2117/])
HADOOP-11904. test-patch.sh goes into an infinite loop on non-maven builds (aw) 
(aw: rev 3ff91e9e9302d94b0d18cccebd02d3815c06ce90)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> test-patch.sh goes into an infinite loop on non-maven builds
> 
>
> Key: HADOOP-11904
> URL: https://issues.apache.org/jira/browse/HADOOP-11904
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-11904.patch
>
>
> If post HADOOP-11746 test patch is given a non-maven-based build, it goes 
> into an infinite loop looking for modules pom.xml.  There should be an escape 
> clause after switching branches to see if it is maven based. If it is not 
> maven based, then test-patch should either abort or re-exec using that 
> version's test-patch script.





[jira] [Commented] (HADOOP-11911) test-patch should allow configuration of default branch

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530722#comment-14530722
 ] 

Hudson commented on HADOOP-11911:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2117 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2117/])
HADOOP-11911. test-patch should allow configuration of default branch (Sean 
Busbey via aw) (aw: rev 9b01f81eb874cd63e6b9ae2d09d94fc8bf4fcd7d)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> test-patch should allow configuration of default branch
> ---
>
> Key: HADOOP-11911
> URL: https://issues.apache.org/jira/browse/HADOOP-11911
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-11911.1.patch, HADOOP-11911.2.patch, 
> HADOOP-11911.3.patch, HADOOP-11911.4.patch
>
>
> right now test-patch.sh forces a default branch of 'trunk'. would be better 
> to allow it to be configurable.





[jira] [Commented] (HADOOP-11807) add a lint mode to releasedocmaker

2015-05-06 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530760#comment-14530760
 ] 

Kengo Seki commented on HADOOP-11807:
-

Hi [~ramtinb], your patch seems to work fine. I have some comments:

* checkMissingComponent()
** It can be simplified as below; the same applies to checkMissingAssignee().
{code}
def checkMissingComponent(self):
  return len(self.fields['components']) < 1 
{code}

* checkMissingAssignee()
** [PEP 
8|https://www.python.org/dev/peps/pep-0008/#programming-recommendations] says 
"Comparisons to singletons like None should always be done with is or is not, 
never the equality operators". 
   "... is not None" is more desirable than "... != None" (the original source 
has the same problem).

* checkVersionString()
** re.match() applies the pattern at the start of the string, so '^' at the 
head of the pattern can be removed. 
   The outermost and first inner parentheses, and '.*$', may also be unnecessary.

* main()
** Some typos (variable erorrCount, string "componenet")
** It may be better to format errorCount and warningCount as integers rather 
than as strings.
** I think os._exit() is uncommon; sys.exit() is generally preferred.
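Taken together, the points above might look like the following minimal sketch 
(the class name, fields layout, and version pattern are illustrative 
assumptions, not the actual releasedocmaker code):

```python
import re
import sys


class Jira(object):
    """Minimal stand-in for releasedocmaker's issue wrapper; the real
    JIRA-fetching code is assumed, not shown."""

    def __init__(self, fields):
        self.fields = fields

    def checkMissingComponent(self):
        # Simplified as suggested: an empty component list means "missing".
        return len(self.fields['components']) < 1

    def checkMissingAssignee(self):
        # PEP 8: compare to None with 'is' / 'is not', never '==' / '!='.
        return self.fields['assignee'] is None

    def checkVersionString(self):
        # re.match() already anchors at the start of the string, so the
        # pattern needs no leading '^' (this pattern is illustrative).
        return re.match(r'\d+\.\d+\.\d+', self.fields['version']) is None


def main():
    issues = [Jira({'components': [], 'assignee': None, 'version': 'beta1'})]
    errorCount = sum(i.checkMissingComponent() + i.checkMissingAssignee()
                     for i in issues)
    warningCount = sum(i.checkVersionString() for i in issues)
    # Counts formatted as integers; exit via sys.exit(), not os._exit().
    print('%d error(s), %d warning(s)' % (errorCount, warningCount))
    sys.exit(1 if errorCount > 0 else 0)
```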

> add a lint mode to releasedocmaker
> --
>
> Key: HADOOP-11807
> URL: https://issues.apache.org/jira/browse/HADOOP-11807
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: ramtin
>Priority: Minor
> Attachments: HADOOP-11807.001.patch
>
>
> * check for missing components (error)
> * check for missing assignee (error)
> * check for common version problems (warning)
> * add an error message for missing release notes





[jira] [Commented] (HADOOP-11904) test-patch.sh goes into an infinite loop on non-maven builds

2015-05-06 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530770#comment-14530770
 ] 

Sean Busbey commented on HADOOP-11904:
--

Ugh. This totally broke my reuse efforts over on NIFI-577. I was relying on the 
plugin system to move into the correct sub-directory to get to the poms. Shall I 
file a ticket to add plugin points instead of overriding post-checkout?

> test-patch.sh goes into an infinite loop on non-maven builds
> 
>
> Key: HADOOP-11904
> URL: https://issues.apache.org/jira/browse/HADOOP-11904
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-11904.patch
>
>
> If post HADOOP-11746 test patch is given a non-maven-based build, it goes 
> into an infinite loop looking for modules pom.xml.  There should be an escape 
> clause after switching branches to see if it is maven based. If it is not 
> maven based, then test-patch should either abort or re-exec using that 
> version's test-patch script.





[jira] [Created] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-06 Thread Sean Busbey (JIRA)
Sean Busbey created HADOOP-11929:


 Summary: add test-patch plugin points for customizing build layout
 Key: HADOOP-11929
 URL: https://issues.apache.org/jira/browse/HADOOP-11929
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Minor


Nothing fancy, just something that doesn't have a top-level pom.





[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-06 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530841#comment-14530841
 ] 

Allen Wittenauer commented on HADOOP-11929:
---

All we really need to do is move the if-exists check into a postcheckout 
plugin. Then folks can replace the plugin as needed. 

> add test-patch plugin points for customizing build layout
> -
>
> Key: HADOOP-11929
> URL: https://issues.apache.org/jira/browse/HADOOP-11929
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
>
> Nothing fancy, just something that doesn't have a top-level pom.





[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-06 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530850#comment-14530850
 ] 

Sean Busbey commented on HADOOP-11929:
--

there was some additional hackery I had to do, let me find my WIP patch

> add test-patch plugin points for customizing build layout
> -
>
> Key: HADOOP-11929
> URL: https://issues.apache.org/jira/browse/HADOOP-11929
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
>
> Nothing fancy, just something that doesn't have a top-level pom.





[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-06 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530857#comment-14530857
 ] 

Sean Busbey commented on HADOOP-11929:
--

It was postapply. I had to move the plugin postapply step to before the 
post-apply javac checks (because apply needed to be done at the top level).

Oh wait, I have another outstanding local change that adds arguments for maven. 
I could then, in postcheckout, add {{--file}} to the args, and that would 
probably work?
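As a hedged sketch of that idea (the function and variable names here are 
hypothetical, not from test-patch.sh; {{--file}} itself is Maven's real flag 
for pointing at an alternate POM): a postcheckout plugin could append the flag 
so later maven invocations target a sub-directory pom.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: plugins accumulate extra maven arguments, and a
# postcheckout plugin points maven below the top level via --file.
MAVEN_ARGS=()

nifi_postcheckout() {
  # Assumed layout: the buildable module lives in a sub-directory.
  MAVEN_ARGS+=("--file" "nifi-parent/pom.xml")
}

nifi_postcheckout
# Echoed rather than executed, so the sketch runs without maven installed.
echo "mvn ${MAVEN_ARGS[*]} clean test"
```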

> add test-patch plugin points for customizing build layout
> -
>
> Key: HADOOP-11929
> URL: https://issues.apache.org/jira/browse/HADOOP-11929
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
>
> Nothing fancy, just something that doesn't have a top-level pom.





[jira] [Created] (HADOOP-11930) test-patch in offline mode should tell maven to be in offline mode

2015-05-06 Thread Sean Busbey (JIRA)
Sean Busbey created HADOOP-11930:


 Summary: test-patch in offline mode should tell maven to be in 
offline mode
 Key: HADOOP-11930
 URL: https://issues.apache.org/jira/browse/HADOOP-11930
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sean Busbey
Assignee: Sean Busbey


When we use --offline for test-patch, we should also flag maven to be offline 
so that it doesn't attempt to talk to the internet.
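A minimal sketch of the idea (the variable names are hypothetical, not from 
test-patch.sh; {{--offline}} / {{-o}} is Maven's actual offline switch):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: when test-patch runs with --offline, pass maven's
# own offline switch so mvn never reaches out to remote repositories.
OFFLINE=true
MVN_ARGS=""

if [[ "${OFFLINE}" == "true" ]]; then
  MVN_ARGS="${MVN_ARGS} --offline"
fi

# Echoed rather than executed, so the sketch runs without maven installed.
echo "mvn${MVN_ARGS} install"
```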





[jira] [Commented] (HADOOP-11120) hadoop fs -rmr gives wrong advice

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530866#comment-14530866
 ] 

Hudson commented on HADOOP-11120:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2135 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2135/])
HADOOP-11120. hadoop fs -rmr gives wrong advice. Contributed by Juliet 
Houghland. (wang: rev 05adc76ace6bf28e4a3ff874044c2c41e3eba63f)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java


> hadoop fs -rmr gives wrong advice
> -
>
> Key: HADOOP-11120
> URL: https://issues.apache.org/jira/browse/HADOOP-11120
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Allen Wittenauer
>Assignee: Juliet Hougland
> Fix For: 2.8.0
>
> Attachments: HADOOP-11120.patch, Screen Shot 2014-09-24 at 3.02.21 
> PM.png
>
>
> Typing bin/hadoop fs -rmr /a?
> gives the output:
> rmr: DEPRECATED: Please use 'rm -r' instead.
> Typing bin/hadoop fs rm -r /a?
> gives the output:
> rm: Unknown command





[jira] [Commented] (HADOOP-11917) test-patch.sh should work with ${BASEDIR}/patchprocess setups

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530872#comment-14530872
 ] 

Hudson commented on HADOOP-11917:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2135 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2135/])
HADOOP-11917. test-patch.sh should work with ${BASEDIR}/patchprocess setups 
(aw) (aw: rev d33419ae01c528073f9f00ef1aadf153fed41222)
* dev-support/test-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt
* .gitignore
* pom.xml


> test-patch.sh should work with ${BASEDIR}/patchprocess setups
> -
>
> Key: HADOOP-11917
> URL: https://issues.apache.org/jira/browse/HADOOP-11917
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: HADOOP-11917.01.patch, HADOOP-11917.patch
>
>
> There are a bunch of problems with this kind of setup: configuration and code 
> changes in test-patch.sh required.





[jira] [Commented] (HADOOP-11911) test-patch should allow configuration of default branch

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530870#comment-14530870
 ] 

Hudson commented on HADOOP-11911:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2135 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2135/])
HADOOP-11911. test-patch should allow configuration of default branch (Sean 
Busbey via aw) (aw: rev 9b01f81eb874cd63e6b9ae2d09d94fc8bf4fcd7d)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> test-patch should allow configuration of default branch
> ---
>
> Key: HADOOP-11911
> URL: https://issues.apache.org/jira/browse/HADOOP-11911
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-11911.1.patch, HADOOP-11911.2.patch, 
> HADOOP-11911.3.patch, HADOOP-11911.4.patch
>
>
> right now test-patch.sh forces a default branch of 'trunk'. would be better 
> to allow it to be configurable.





[jira] [Commented] (HADOOP-11912) Extra configuration key used in TraceUtils should respect prefix

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530868#comment-14530868
 ] 

Hudson commented on HADOOP-11912:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2135 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2135/])
HADOOP-11912. Extra configuration key used in TraceUtils should respect prefix 
(Masatake Iwasaki via Colin P. McCabe) (cmccabe: rev 
90b384564875bb353224630e501772b46d4ca9c5)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/tracing/TestTraceUtils.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/TraceUtils.java


> Extra configuration key used in TraceUtils should respect prefix
> 
>
> Key: HADOOP-11912
> URL: https://issues.apache.org/jira/browse/HADOOP-11912
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-11912.001.patch
>
>
> HDFS-8213 added prefix handling to configuration used by tracing but extra 
> key value pairs in configuration returned by TraceUtils#wrapHadoopConf does 
> not respect this.





[jira] [Commented] (HADOOP-11904) test-patch.sh goes into an infinite loop on non-maven builds

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530867#comment-14530867
 ] 

Hudson commented on HADOOP-11904:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2135 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2135/])
HADOOP-11904. test-patch.sh goes into an infinite loop on non-maven builds (aw) 
(aw: rev 3ff91e9e9302d94b0d18cccebd02d3815c06ce90)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> test-patch.sh goes into an infinite loop on non-maven builds
> 
>
> Key: HADOOP-11904
> URL: https://issues.apache.org/jira/browse/HADOOP-11904
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-11904.patch
>
>
> If post HADOOP-11746 test patch is given a non-maven-based build, it goes 
> into an infinite loop looking for modules pom.xml.  There should be an escape 
> clause after switching branches to see if it is maven based. If it is not 
> maven based, then test-patch should either abort or re-exec using that 
> version's test-patch script.





[jira] [Commented] (HADOOP-11926) test-patch.sh mv does wrong math

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530871#comment-14530871
 ] 

Hudson commented on HADOOP-11926:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2135 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2135/])
HADOOP-11926. test-patch.sh mv does wrong math (aw) (aw: rev 
4402e4c633808556d49854df45683688b6a9ce84)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> test-patch.sh mv does wrong math
> 
>
> Key: HADOOP-11926
> URL: https://issues.apache.org/jira/browse/HADOOP-11926
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 2.8.0
>
> Attachments: HADOOP-11928.00.patch
>
>
> cleanup_and_exit uses the wrong result code check and fails to mv the 
> patchdir when it should, and mv's it when it shouldn't.





[jira] [Commented] (HADOOP-11917) test-patch.sh should work with ${BASEDIR}/patchprocess setups

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530899#comment-14530899
 ] 

Hudson commented on HADOOP-11917:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #186 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/186/])
HADOOP-11917. test-patch.sh should work with ${BASEDIR}/patchprocess setups 
(aw) (aw: rev d33419ae01c528073f9f00ef1aadf153fed41222)
* dev-support/test-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt
* pom.xml
* .gitignore


> test-patch.sh should work with ${BASEDIR}/patchprocess setups
> -
>
> Key: HADOOP-11917
> URL: https://issues.apache.org/jira/browse/HADOOP-11917
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: HADOOP-11917.01.patch, HADOOP-11917.patch
>
>
> There are a bunch of problems with this kind of setup: configuration and code 
> changes in test-patch.sh required.





[jira] [Commented] (HADOOP-11912) Extra configuration key used in TraceUtils should respect prefix

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530895#comment-14530895
 ] 

Hudson commented on HADOOP-11912:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #186 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/186/])
HADOOP-11912. Extra configuration key used in TraceUtils should respect prefix 
(Masatake Iwasaki via Colin P. McCabe) (cmccabe: rev 
90b384564875bb353224630e501772b46d4ca9c5)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/tracing/TestTraceUtils.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tracing/TraceUtils.java


> Extra configuration key used in TraceUtils should respect prefix
> 
>
> Key: HADOOP-11912
> URL: https://issues.apache.org/jira/browse/HADOOP-11912
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-11912.001.patch
>
>
> HDFS-8213 added prefix handling to the configuration used by tracing, but the 
> extra key/value pairs in the configuration returned by 
> TraceUtils#wrapHadoopConf do not respect this.





[jira] [Commented] (HADOOP-11911) test-patch should allow configuration of default branch

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530897#comment-14530897
 ] 

Hudson commented on HADOOP-11911:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #186 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/186/])
HADOOP-11911. test-patch should allow configuration of default branch (Sean 
Busbey via aw) (aw: rev 9b01f81eb874cd63e6b9ae2d09d94fc8bf4fcd7d)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> test-patch should allow configuration of default branch
> ---
>
> Key: HADOOP-11911
> URL: https://issues.apache.org/jira/browse/HADOOP-11911
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-11911.1.patch, HADOOP-11911.2.patch, 
> HADOOP-11911.3.patch, HADOOP-11911.4.patch
>
>
> Right now test-patch.sh forces a default branch of 'trunk'. It would be better 
> to make this configurable.





[jira] [Commented] (HADOOP-11926) test-patch.sh mv does wrong math

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530898#comment-14530898
 ] 

Hudson commented on HADOOP-11926:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #186 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/186/])
HADOOP-11926. test-patch.sh mv does wrong math (aw) (aw: rev 
4402e4c633808556d49854df45683688b6a9ce84)
* dev-support/test-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


> test-patch.sh mv does wrong math
> 
>
> Key: HADOOP-11926
> URL: https://issues.apache.org/jira/browse/HADOOP-11926
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 2.8.0
>
> Attachments: HADOOP-11928.00.patch
>
>
> cleanup_and_exit uses the wrong result code check and fails to mv the 
> patchdir when it should, and mv's it when it shouldn't.





[jira] [Commented] (HADOOP-11120) hadoop fs -rmr gives wrong advice

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530893#comment-14530893
 ] 

Hudson commented on HADOOP-11120:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #186 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/186/])
HADOOP-11120. hadoop fs -rmr gives wrong advice. Contributed by Juliet 
Houghland. (wang: rev 05adc76ace6bf28e4a3ff874044c2c41e3eba63f)
* hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> hadoop fs -rmr gives wrong advice
> -
>
> Key: HADOOP-11120
> URL: https://issues.apache.org/jira/browse/HADOOP-11120
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Allen Wittenauer
>Assignee: Juliet Hougland
> Fix For: 2.8.0
>
> Attachments: HADOOP-11120.patch, Screen Shot 2014-09-24 at 3.02.21 
> PM.png
>
>
> Typing bin/hadoop fs -rmr /a?
> gives the output:
> rmr: DEPRECATED: Please use 'rm -r' instead.
> Typing bin/hadoop fs rm -r /a?
> gives the output:
> rm: Unknown command





[jira] [Commented] (HADOOP-11904) test-patch.sh goes into an infinite loop on non-maven builds

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530894#comment-14530894
 ] 

Hudson commented on HADOOP-11904:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #186 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/186/])
HADOOP-11904. test-patch.sh goes into an infinite loop on non-maven builds (aw) 
(aw: rev 3ff91e9e9302d94b0d18cccebd02d3815c06ce90)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/test-patch.sh


> test-patch.sh goes into an infinite loop on non-maven builds
> 
>
> Key: HADOOP-11904
> URL: https://issues.apache.org/jira/browse/HADOOP-11904
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-11904.patch
>
>
> If, post HADOOP-11746, test-patch is given a non-maven-based build, it goes 
> into an infinite loop looking for the modules' pom.xml. There should be an 
> escape clause after switching branches to check whether the build is maven 
> based. If it is not, then test-patch should either abort or re-exec using 
> that version's test-patch script.





[jira] [Updated] (HADOOP-11930) test-patch in offline mode should tell maven to be in offline mode

2015-05-06 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11930:
-
Status: Patch Available  (was: Open)

> test-patch in offline mode should tell maven to be in offline mode
> --
>
> Key: HADOOP-11930
> URL: https://issues.apache.org/jira/browse/HADOOP-11930
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11930.1.patch
>
>
> when we use --offline for test-patch, we should also flag maven to be offline 
> so that it doesn't attempt to talk to the internet.
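As a sketch of the requested behavior (a hypothetical wrapper with made-up variable names, not the actual test-patch.sh code): when test-patch is started with --offline, the maven invocation should gain maven's own offline switch.

```shell
# Hypothetical sketch: build the maven command line, adding --offline (-o)
# when test-patch itself was started in offline mode.
MVN="mvn"
OFFLINE=true   # would be set by test-patch's --offline flag

mvn_cmd() {
  local args=""
  if [ "${OFFLINE}" = "true" ]; then
    args="--offline"   # maven fails fast instead of contacting repositories
  fi
  echo "${MVN} ${args} $*"
}

mvn_cmd clean install -DskipTests
```

With maven in offline mode, missing artifacts cause an immediate build failure rather than a network access attempt, which is the behavior the issue asks for.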





[jira] [Updated] (HADOOP-11930) test-patch in offline mode should tell maven to be in offline mode

2015-05-06 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11930:
-
Attachment: HADOOP-11930.1.patch

> test-patch in offline mode should tell maven to be in offline mode
> --
>
> Key: HADOOP-11930
> URL: https://issues.apache.org/jira/browse/HADOOP-11930
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11930.1.patch
>
>
> when we use --offline for test-patch, we should also flag maven to be offline 
> so that it doesn't attempt to talk to the internet.





[jira] [Commented] (HADOOP-11930) test-patch in offline mode should tell maven to be in offline mode

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530918#comment-14530918
 ] 

Hadoop QA commented on HADOOP-11930:


(!) A patch to test-patch or smart-apply-patch has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6506/console in case of 
problems.

> test-patch in offline mode should tell maven to be in offline mode
> --
>
> Key: HADOOP-11930
> URL: https://issues.apache.org/jira/browse/HADOOP-11930
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11930.1.patch
>
>
> when we use --offline for test-patch, we should also flag maven to be offline 
> so that it doesn't attempt to talk to the internet.





[jira] [Commented] (HADOOP-11930) test-patch in offline mode should tell maven to be in offline mode

2015-05-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530921#comment-14530921
 ] 

Hadoop QA commented on HADOOP-11930:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | reexec |   0m  0s | dev-support patch detected. |
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:blue}0{color} | @author |   0m  0s | Skipping @author checks as 
test-patch has been patched. |
| {color:green}+1{color} | release audit |   0m 15s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:blue}0{color} | shellcheck |   0m 15s | Shellcheck was not available. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   0m 27s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12730877/HADOOP-11930.1.patch |
| Optional Tests | shellcheck |
| git revision | trunk / a583a40 |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6506/console |


This message was automatically generated.

> test-patch in offline mode should tell maven to be in offline mode
> --
>
> Key: HADOOP-11930
> URL: https://issues.apache.org/jira/browse/HADOOP-11930
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-11930.1.patch
>
>
> when we use --offline for test-patch, we should also flag maven to be offline 
> so that it doesn't attempt to talk to the internet.





[jira] [Commented] (HADOOP-11923) test-patch whitespace checker doesn't flag new files

2015-05-06 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530926#comment-14530926
 ] 

Allen Wittenauer commented on HADOOP-11923:
---

I suspect there is a much easier fix here.  

{code}
  done < <("${GIT}" diff --unified=0 --no-color)
{code}

should probably be

{code}
  done < <("${GIT}" diff --unified=0 --no-color  ${PATCH_BRANCH})
{code}
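A hypothetical throwaway-repository demo of why the two invocations differ: once a patch's files have been committed on a working branch, a plain `git diff` compares the clean working tree against HEAD and prints nothing, while diffing against the base branch does surface the new file.

```shell
# Hypothetical demo repo (not Hadoop's build environment).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "qa@example.com"
git config user.name "Hadoop QA"
echo base > base.txt
git add base.txt
git commit -qm base
base=$(git rev-parse --abbrev-ref HEAD)   # base branch name varies by git version
git checkout -qb patch-branch
printf 'trailing space \n' > new.txt      # new file with trailing whitespace
git add new.txt
git commit -qm patch
git diff --unified=0 --no-color            # empty: nothing uncommitted
git diff --unified=0 --no-color "$base"    # shows new.txt's added line
```

So diffing against `${PATCH_BRANCH}` would feed the whitespace loop every line the patch introduced, including lines in brand-new files.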


> test-patch whitespace checker doesn't flag new files
> 
>
> Key: HADOOP-11923
> URL: https://issues.apache.org/jira/browse/HADOOP-11923
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Busbey
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11923.patch
>
>
> The whitespace plugin for test-patch doesn't examine new files. So when a 
> patch comes in with trailing whitespace in new files, it doesn't flag the 
> problem.





[jira] [Commented] (HADOOP-6842) "hadoop fs -text" does not give a useful text representation of MapWritable objects

2015-05-06 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530946#comment-14530946
 ] 

Akira AJISAKA commented on HADOOP-6842:
---

{code}
+  @Override
+  String toString() {
+return instance.toString();
+  }
{code}
This method needs to be public.
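Concretely, using a stand-in class (not MapWritable itself, which wraps a Map of Writables in a field named {{instance}}): Object#toString() is public, so an override with weaker visibility will not compile, and the patch's method must be declared public.

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical stand-in for the patched class; the real change is to
// org.apache.hadoop.io.MapWritable's inner "instance" map.
class MapWritableSketch {
    private final Map<String, String> instance = new TreeMap<>();

    void put(String key, String value) {
        instance.put(key, value);
    }

    // Must be public: Java forbids overriding the public Object#toString()
    // with reduced visibility ("attempting to assign weaker access privileges").
    @Override
    public String toString() {
        return instance.toString();
    }
}
```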

> "hadoop fs -text" does not give a useful text representation of MapWritable 
> objects
> ---
>
> Key: HADOOP-6842
> URL: https://issues.apache.org/jira/browse/HADOOP-6842
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.20.0
>Reporter: Steven Wong
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-6842.patch
>
>
> If a sequence file contains MapWritable objects, running "hadoop fs -text" on 
> the file prints the following for each MapWritable:
> org.apache.hadoop.io.MapWritable@4f8235ed
> To be more useful, it should print out the contents of the map instead. This 
> can be done by adding a toString method to MapWritable, i.e. something like:
> public String toString() {
> return (new TreeMap(instance)).toString();
> }





[jira] [Assigned] (HADOOP-6842) "hadoop fs -text" does not give a useful text representation of MapWritable objects

2015-05-06 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HADOOP-6842:
-

Assignee: Akira AJISAKA

> "hadoop fs -text" does not give a useful text representation of MapWritable 
> objects
> ---
>
> Key: HADOOP-6842
> URL: https://issues.apache.org/jira/browse/HADOOP-6842
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.20.0
>Reporter: Steven Wong
>Assignee: Akira AJISAKA
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-6842.patch
>
>
> If a sequence file contains MapWritable objects, running "hadoop fs -text" on 
> the file prints the following for each MapWritable:
> org.apache.hadoop.io.MapWritable@4f8235ed
> To be more useful, it should print out the contents of the map instead. This 
> can be done by adding a toString method to MapWritable, i.e. something like:
> public String toString() {
> return (new TreeMap(instance)).toString();
> }





[jira] [Commented] (HADOOP-11764) Hadoop should have the option to use directory other than tmp for extracting and loading leveldbjni

2015-05-06 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530966#comment-14530966
 ] 

Allen Wittenauer commented on HADOOP-11764:
---

bq. This can also mean *-env.sh.

I'm not sure how you got that from the statement "until it can proven otherwise 
that this can't be handled via *-site.xml."  I specifically mean: push this 
into the XML files and leave the shell environment alone. As we discussed at 
Summit--and you tentatively agreed--populating the system property for leveldb 
from the *-site.xml files could be done at service start time, given that there 
are generic abstractions for actually starting daemons in Hadoop that large 
chunks of the system already use.

It's worth pointing out that putting the leveldb settings in a generic shell 
env will impact end users, and it would have to be done in daemon specific env 
vars.  So no, one can't modify HADOOP_OPTS here.  You'll break apps that use 
their own leveldb bits.




> Hadoop should have the option to use directory other than tmp for extracting 
> and loading leveldbjni
> ---
>
> Key: HADOOP-11764
> URL: https://issues.apache.org/jira/browse/HADOOP-11764
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
> Attachments: YARN-3331.001.patch, YARN-3331.002.patch
>
>
> /tmp can be required to be noexec in many environments. This causes a problem 
> when the NodeManager tries to load the leveldbjni library, which can get 
> unpacked and executed from /tmp.





[jira] [Comment Edited] (HADOOP-11764) Hadoop should have the option to use directory other than tmp for extracting and loading leveldbjni

2015-05-06 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14530966#comment-14530966
 ] 

Allen Wittenauer edited comment on HADOOP-11764 at 5/6/15 5:19 PM:
---

bq. This can also mean *-env.sh.

I'm not sure how you got that from the statement "until it can proven otherwise 
that this can't be handled via *-site.xml."  I specifically mean: push this 
into the XML files and leave the shell environment alone. As we discussed at 
Summit\-\-and you tentatively agreed\-\-populating the system property for 
leveldb from the *-site.xml files could be done at service start time, given 
that there are generic abstractions for actually starting daemons in Hadoop 
that large chunks of the system already use.

It's worth pointing out that putting the leveldb settings in a generic shell 
env will impact end users, and it would have to be done in daemon specific env 
vars.  So no, one can't modify HADOOP_OPTS here.  You'll break apps that use 
their own leveldb bits.





was (Author: aw):
bq. This can also mean *-env.sh.

I'm not sure how you got that from the statement "until it can proven otherwise 
that this can't be handled via *-site.xml."  I specifically mean: push this 
into the XML files and leave the shell environment alone. As we discussed at 
Summit--and you tentatively agreed--populating the system property for leveldb 
from the *-site.xml files could be done at service start time, given that there 
are generic abstractions for actually starting daemons in Hadoop that large 
chunks of the system already use.

It's worth pointing out that putting the leveldb settings in a generic shell 
env will impact end users, and it would have to be done in daemon specific env 
vars.  So no, one can't modify HADOOP_OPTS here.  You'll break apps that use 
their own leveldb bits.




> Hadoop should have the option to use directory other than tmp for extracting 
> and loading leveldbjni
> ---
>
> Key: HADOOP-11764
> URL: https://issues.apache.org/jira/browse/HADOOP-11764
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
> Attachments: YARN-3331.001.patch, YARN-3331.002.patch
>
>
> /tmp can be required to be noexec in many environments. This causes a problem 
> when the NodeManager tries to load the leveldbjni library, which can get 
> unpacked and executed from /tmp.





[jira] [Updated] (HADOOP-11813) releasedocmaker.py should use today's date instead of unreleased

2015-05-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11813:
--
Labels: newbie  (was: BB2015-05-TBR newbie)

> releasedocmaker.py should use today's date instead of unreleased
> 
>
> Key: HADOOP-11813
> URL: https://issues.apache.org/jira/browse/HADOOP-11813
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Darrell Taylor
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-11813.001.patch, HADOOP-11813.patch
>
>
> After discussing with a few folks, it'd be more convenient if releasedocmaker 
> used the current date rather than unreleased when processing a version that 
> JIRA hasn't declared released.





[jira] [Updated] (HADOOP-11813) releasedocmaker.py should use today's date instead of unreleased

2015-05-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11813:
--
Issue Type: Improvement  (was: Task)

> releasedocmaker.py should use today's date instead of unreleased
> 
>
> Key: HADOOP-11813
> URL: https://issues.apache.org/jira/browse/HADOOP-11813
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Darrell Taylor
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HADOOP-11813.001.patch, HADOOP-11813.patch
>
>
> After discussing with a few folks, it'd be more convenient if releasedocmaker 
> used the current date rather than unreleased when processing a version that 
> JIRA hasn't declared released.





[jira] [Updated] (HADOOP-11813) releasedocmaker.py should use today's date instead of unreleased

2015-05-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11813:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
 Release Note: Use today instead of 'Unreleased' in releasedocmaker.py when 
--usetoday is given as an option.  (was: Use today instead of 'Unreleased' in 
releasedocmaker.py)
   Status: Resolved  (was: Patch Available)

+1 committing to trunk.

Thanks!!

> releasedocmaker.py should use today's date instead of unreleased
> 
>
> Key: HADOOP-11813
> URL: https://issues.apache.org/jira/browse/HADOOP-11813
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Darrell Taylor
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HADOOP-11813.001.patch, HADOOP-11813.patch
>
>
> After discussing with a few folks, it'd be more convenient if releasedocmaker 
> used the current date rather than unreleased when processing a version that 
> JIRA hasn't declared released.





[jira] [Updated] (HADOOP-11931) test-patch javac warning check reporting total number of warnings instead of incremental

2015-05-06 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-11931:
-
Labels: newbie  (was: )

> test-patch javac warning check reporting total number of warnings instead of 
> incremental
> 
>
> Key: HADOOP-11931
> URL: https://issues.apache.org/jira/browse/HADOOP-11931
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Busbey
>  Labels: newbie
>
> The javac result should report the incremental number of warnings, not the 
> post-patch total.
> {code}
> 
> 
>Determining number of patched javac warnings.
> 
> 
> [Wed May  6 17:06:15 UTC 2015 DEBUG]: Start clock
> /home/jenkins/tools/maven/latest/bin/mvn clean test -DskipTests 
> -DhadoopPatchProcess -Pnative -Ptest-patch > 
> /jenkins/workspace/PreCommit-NIFI-Build/patchprocess/patchJavacWarnings.txt 
> 2>&1
> There appear to be 25 javac compiler warnings before the patch and 26 javac 
> compiler warnings after applying the patch.
> [Wed May  6 17:11:39 UTC 2015 DEBUG]: Stop clock
> Elapsed time:   5m 24s
> {code}
> followed later by:
> {code}
> |  -1  |  javac  |  5m 24s| The applied patch generated 26 
> |  | || additional warning messages.
> {code}





[jira] [Created] (HADOOP-11931) test-patch javac warning check reporting total number of warnings instead of incremental

2015-05-06 Thread Sean Busbey (JIRA)
Sean Busbey created HADOOP-11931:


 Summary: test-patch javac warning check reporting total number of 
warnings instead of incremental
 Key: HADOOP-11931
 URL: https://issues.apache.org/jira/browse/HADOOP-11931
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Sean Busbey


The javac result should report the incremental number of warnings, not the 
post-patch total.

{code}


   Determining number of patched javac warnings.




[Wed May  6 17:06:15 UTC 2015 DEBUG]: Start clock
/home/jenkins/tools/maven/latest/bin/mvn clean test -DskipTests 
-DhadoopPatchProcess -Pnative -Ptest-patch > 
/jenkins/workspace/PreCommit-NIFI-Build/patchprocess/patchJavacWarnings.txt 2>&1
There appear to be 25 javac compiler warnings before the patch and 26 javac 
compiler warnings after applying the patch.
[Wed May  6 17:11:39 UTC 2015 DEBUG]: Stop clock

Elapsed time:   5m 24s
{code}

followed later by:

{code}

|  -1  |  javac  |  5m 24s| The applied patch generated 26 
|  | || additional warning messages.
{code}
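The fix the log implies can be sketched in a few lines (hypothetical variable names, not the actual test-patch.sh code): subtract the pre-patch count from the post-patch count and report the difference instead of the total.

```shell
# Counts taken from the log above; in test-patch these would be grepped
# out of the before/after javac output files.
before=25
after=26
delta=$((after - before))
echo "The applied patch generated ${delta} additional warning messages."
```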





[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-06 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531011#comment-14531011
 ] 

Allen Wittenauer commented on HADOOP-11929:
---

Doesn't setting MAVEN_OPTS let you pass --file already?

In any case, I'm surprised the preapply javac/javadoc checks don't blow up.  
But I suppose we really don't have much of a choice here but to have a "maven 
starts here" kind of env var to use for all those places where it's assumed 
that BASEDIR is where maven starts.  I don't think plugins are the way to go 
here since it's pretty intrinsic to all kinds of things.  Command line option 
makes more sense to me since it's pretty core.  

But let me go through the code once more and double check my thinking on this 
one.

> add test-patch plugin points for customizing build layout
> -
>
> Key: HADOOP-11929
> URL: https://issues.apache.org/jira/browse/HADOOP-11929
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
>
> Nothing fancy, just something that doesn't have a top-level pom.





[jira] [Updated] (HADOOP-6842) "hadoop fs -text" does not give a useful text representation of MapWritable objects

2015-05-06 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-6842:
--
Attachment: HADOOP-6842.002.patch

v2 patch
* Make the {{toString()}} method public
* Add a regression test

> "hadoop fs -text" does not give a useful text representation of MapWritable 
> objects
> ---
>
> Key: HADOOP-6842
> URL: https://issues.apache.org/jira/browse/HADOOP-6842
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.20.0
>Reporter: Steven Wong
>Assignee: Akira AJISAKA
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-6842.002.patch, HADOOP-6842.patch
>
>
> If a sequence file contains MapWritable objects, running "hadoop fs -text" on 
> the file prints the following for each MapWritable:
> org.apache.hadoop.io.MapWritable@4f8235ed
> To be more useful, it should print out the contents of the map instead. This 
> can be done by adding a toString method to MapWritable, i.e. something like:
> public String toString() {
> return (new TreeMap(instance)).toString();
> }





[jira] [Commented] (HADOOP-11813) releasedocmaker.py should use today's date instead of unreleased

2015-05-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531016#comment-14531016
 ] 

Hudson commented on HADOOP-11813:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7747 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7747/])
HADOOP-11813. releasedocmaker.py should use today's date instead of unreleased 
(Darrell Taylor via aw) (aw: rev f325522c1423f89dced999a16d49a004b2879743)
* dev-support/releasedocmaker.py
* hadoop-common-project/hadoop-common/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


> releasedocmaker.py should use today's date instead of unreleased
> 
>
> Key: HADOOP-11813
> URL: https://issues.apache.org/jira/browse/HADOOP-11813
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Darrell Taylor
>Priority: Minor
>  Labels: newbie
> Fix For: 3.0.0
>
> Attachments: HADOOP-11813.001.patch, HADOOP-11813.patch
>
>
> After discussing with a few folks, it'd be more convenient if releasedocmaker 
> used the current date rather than unreleased when processing a version that 
> JIRA hasn't declared released.





[jira] [Updated] (HADOOP-6842) "hadoop fs -text" does not give a useful text representation of MapWritable objects

2015-05-06 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-6842:
--
Target Version/s: 2.8.0

> "hadoop fs -text" does not give a useful text representation of MapWritable 
> objects
> ---
>
> Key: HADOOP-6842
> URL: https://issues.apache.org/jira/browse/HADOOP-6842
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 0.20.0
>Reporter: Steven Wong
>Assignee: Akira AJISAKA
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-6842.002.patch, HADOOP-6842.patch
>
>
> If a sequence file contains MapWritable objects, running "hadoop fs -text" on 
> the file prints the following for each MapWritable:
> org.apache.hadoop.io.MapWritable@4f8235ed
> To be more useful, it should print out the contents of the map instead. This 
> can be done by adding a toString method to MapWritable, i.e. something like:
> public String toString() {
> return (new TreeMap(instance)).toString();
> }





[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-06 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531025#comment-14531025
 ] 

Sean Busbey commented on HADOOP-11929:
--

maybe? MAVEN_OPTS didn't allow setting --offline, so I wouldn't be surprised if 
--file didn't work either.

I had to do it as a plugin and not a cli arg because for NIFI it varies based 
on what the patch touches. (if you're curious, it's [in a feature branch for 
NIFI in my forked 
repo|https://github.com/busbey/incubator-nifi/blob/NIFI-577/dev-support/test-patch.d/nifi-setup.sh#L23])

> add test-patch plugin points for customizing build layout
> -
>
> Key: HADOOP-11929
> URL: https://issues.apache.org/jira/browse/HADOOP-11929
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
>
> Nothing fancy, just something that doesn't have a top-level pom.





[jira] [Commented] (HADOOP-11923) test-patch whitespace checker doesn't flag new files

2015-05-06 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531044#comment-14531044
 ] 

Allen Wittenauer commented on HADOOP-11923:
---

Oh, and to answer this question:

bq. does CHANGED_FILES get used in some way that we can't just alter how it 
finds the list?

No.  It really should be a list of all the files that are touched.  The tricky 
part is that it is attempting to interpret the patch file and there are all 
sorts of ways that could be built.  It all depends upon which diff command 
lines were used. :(

> test-patch whitespace checker doesn't flag new files
> 
>
> Key: HADOOP-11923
> URL: https://issues.apache.org/jira/browse/HADOOP-11923
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Sean Busbey
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11923.patch
>
>
> The whitespace plugin for test-patch doesn't examine new files. So when a 
> patch comes in with trailing whitespace in new files, it doesn't flag the 
> problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-06 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531053#comment-14531053
 ] 

Allen Wittenauer commented on HADOOP-11929:
---

This is really starting to feel like we're trying to work around not having a 
multi-module Maven setup when multi-module Maven is probably the thing to do.

> add test-patch plugin points for customizing build layout
> -
>
> Key: HADOOP-11929
> URL: https://issues.apache.org/jira/browse/HADOOP-11929
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
>
> Nothing fancy, just something that doesn't have a top-level pom.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-06 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531059#comment-14531059
 ] 

Sean Busbey commented on HADOOP-11929:
--

it's really that they should probably have multiple git repositories.

> add test-patch plugin points for customizing build layout
> -
>
> Key: HADOOP-11929
> URL: https://issues.apache.org/jira/browse/HADOOP-11929
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
>
> Nothing fancy, just something that doesn't have a top-level pom.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-06 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531065#comment-14531065
 ] 

Sean Busbey commented on HADOOP-11929:
--

like, how would we deal with the "Maven Parent Poms" project? [their svn repo 
is similarly laid out|http://svn.apache.org/repos/asf/maven/pom/trunk/]. Maybe 
I should just make a different build job for each of the components that shares 
a repo?

> add test-patch plugin points for customizing build layout
> -
>
> Key: HADOOP-11929
> URL: https://issues.apache.org/jira/browse/HADOOP-11929
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
>
> Nothing fancy, just something that doesn't have a top-level pom.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11929) add test-patch plugin points for customizing build layout

2015-05-06 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14531069#comment-14531069
 ] 

Allen Wittenauer commented on HADOOP-11929:
---

Actually, we need to move this logic:

https://github.com/apache/hadoop/blob/trunk/dev-support/test-patch.sh#L1946

to be a plugin.  (It's always been a code smell to me, but I knew it wouldn't 
impact anyone else...)

I have a suspicion that's very similar to what you're trying to accomplish in 
the NiFi plugin.  

> add test-patch plugin points for customizing build layout
> -
>
> Key: HADOOP-11929
> URL: https://issues.apache.org/jira/browse/HADOOP-11929
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Minor
>
> Nothing fancy, just something that doesn't have a top-level pom.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

