[jira] [Commented] (HADOOP-8896) Javadoc points to Wrong Reader and Writer classes in SequenceFile

2014-08-19 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14101955#comment-14101955
 ] 

Akira AJISAKA commented on HADOOP-8896:
---

Thanks [~rchiang] for the patch.
{code}
 * The {@link Reader} acts as the bridge and can read any of the above 
{code}
Would you fix the above link as well?
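
For reference, a minimal sketch of how the fixed line might read once the link is
qualified (assuming the surrounding wording stays the same):
{code}
 * The {@link SequenceFile.Reader} acts as the bridge and can read any of the above 
{code}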

> Javadoc points to Wrong Reader and Writer classes in SequenceFile
> -
>
> Key: HADOOP-8896
> URL: https://issues.apache.org/jira/browse/HADOOP-8896
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, io
>Affects Versions: 2.0.1-alpha
>Reporter: Timothy Mann
>Priority: Trivial
>  Labels: sequence-file
> Attachments: HADOOP8896-01.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> Line 56 of org.apache.hadoop.io.SequenceFile refers to {@link Writer} and 
> {@link Reader} in the javadoc comment describing the class SequenceFile. When 
> the javadoc is built, Reader and Writer link to java.io.Reader and 
> java.io.Writer, respectively. However, they should instead refer to 
> {@link SequenceFile.Reader} and {@link SequenceFile.Writer}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9913) Document time unit to RpcMetrics.java

2014-08-19 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14101964#comment-14101964
 ] 

Akira AJISAKA commented on HADOOP-9913:
---

bq. Wouldn't this be better as documentation?
Now we have the 
[documentation|http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/Metrics.html]
 which describes the time unit of the metrics, so the priority is very low.
bq. this is an incompatible change?
I don't think this is an incompatible change. [Compatibility 
doc|http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Compatibility.html#MetricsJMX]
 says 
{quote}
Modifying (eg changing the unit or measurement) or removing existing metrics 
breaks compatibility. Similarly, changes to JMX MBean object names also break 
compatibility.
{quote}
The patch does not modify the unit, the measurement, or the object names. It 
only modifies the description.
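
For illustration, a sketch of how the unit could be carried in the description 
alone (the "milliseconds" unit here is my assumption for the example, not taken 
from the patch):
{code}
  @Metric("Queue time in milliseconds") MutableRate rpcQueueTime;
  @Metric("Processing time in milliseconds") MutableRate rpcProcessingTime;
{code}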

> Document time unit to RpcMetrics.java
> -
>
> Key: HADOOP-9913
> URL: https://issues.apache.org/jira/browse/HADOOP-9913
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, metrics
>Affects Versions: 3.0.0, 2.1.0-beta
> Environment: trunk
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-9913.2.patch, HADOOP-9913.3.patch, 
> HADOOP-9913.patch
>
>
> In o.a.h.ipc.metrics.RpcMetrics.java, metrics are declared as follows:
> {code}
>@Metric("Queue time") MutableRate rpcQueueTime;
>@Metric("Processsing time") MutableRate rpcProcessingTime;
> {code}
> Since some users may be confused about which unit (sec or msec) is used, the 
> units should be documented.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9913) Document time unit to RpcMetrics.java

2014-08-19 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14101970#comment-14101970
 ] 

Akira AJISAKA commented on HADOOP-9913:
---

bq. Now we have the documentation which describes the time unit of the metrics, 
so the priority is very low.
I'll close this issue as duplicate of HADOOP-6350. Thanks.

> Document time unit to RpcMetrics.java
> -
>
> Key: HADOOP-9913
> URL: https://issues.apache.org/jira/browse/HADOOP-9913
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, metrics
>Affects Versions: 3.0.0, 2.1.0-beta
> Environment: trunk
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-9913.2.patch, HADOOP-9913.3.patch, 
> HADOOP-9913.patch
>
>
> In o.a.h.ipc.metrics.RpcMetrics.java, metrics are declared as follows:
> {code}
>@Metric("Queue time") MutableRate rpcQueueTime;
>@Metric("Processsing time") MutableRate rpcProcessingTime;
> {code}
> Since some users may be confused about which unit (sec or msec) is used, the 
> units should be documented.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9913) Document time unit to RpcMetrics.java

2014-08-19 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9913:
--

Resolution: Duplicate
  Assignee: (was: Akira AJISAKA)
Status: Resolved  (was: Patch Available)

> Document time unit to RpcMetrics.java
> -
>
> Key: HADOOP-9913
> URL: https://issues.apache.org/jira/browse/HADOOP-9913
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, metrics
>Affects Versions: 3.0.0, 2.1.0-beta
> Environment: trunk
>Reporter: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-9913.2.patch, HADOOP-9913.3.patch, 
> HADOOP-9913.patch
>
>
> In o.a.h.ipc.metrics.RpcMetrics.java, metrics are declared as follows:
> {code}
>@Metric("Queue time") MutableRate rpcQueueTime;
>@Metric("Processsing time") MutableRate rpcProcessingTime;
> {code}
> Since some users may be confused about which unit (sec or msec) is used, the 
> units should be documented.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10119) Document hadoop archive -p option

2014-08-19 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14101984#comment-14101984
 ] 

Akira AJISAKA commented on HADOOP-10119:


The issue was fixed as a part of MAPREDUCE-5943.
Thanks [~aw] for the comment!

> Document hadoop archive -p option
> -
>
> Key: HADOOP-10119
> URL: https://issues.apache.org/jira/browse/HADOOP-10119
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.2.0
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-10119.patch
>
>
> The hadoop archive -p (relative parent path) option is now required, but the 
> option is not documented.
> See 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#archive
>  .



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10119) Document hadoop archive -p option

2014-08-19 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10119:
---

  Resolution: Duplicate
Assignee: (was: Akira AJISAKA)
Target Version/s:   (was: 2.6.0)
  Status: Resolved  (was: Patch Available)

> Document hadoop archive -p option
> -
>
> Key: HADOOP-10119
> URL: https://issues.apache.org/jira/browse/HADOOP-10119
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.2.0
>Reporter: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-10119.patch
>
>
> The hadoop archive -p (relative parent path) option is now required, but the 
> option is not documented.
> See 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/CommandsManual.html#archive
>  .



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8642) io.native.lib.available only controls zlib

2014-08-19 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-8642:
--

Target Version/s:   (was: 2.0.2-alpha)
  Status: Open  (was: Patch Available)

> io.native.lib.available only controls zlib
> --
>
> Key: HADOOP-8642
> URL: https://issues.apache.org/jira/browse/HADOOP-8642
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
> Attachments: HADOOP-8642.2.patch, HADOOP-8642.patch
>
>
> Per core-default.xml, {{io.native.lib.available}} indicates "Should native 
> hadoop libraries, if present, be used"; however, it looks like it only affects 
> zlib. Since we always load the native library, this means we may use native 
> libraries even if io.native.lib.available is set to false.
> Let's make the flag work as advertised - rather than always loading the 
> native hadoop library, we only attempt to load the library (and report that 
> native is available) if this flag is set. Since io.native.lib.available 
> defaults to true, the default behavior should remain unchanged (except that 
> now we won't actually try to load the library if this flag is disabled).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10365) BufferedOutputStream in FileUtil#unpackEntries() should be closed in finally block

2014-08-19 Thread Sanghyun Yun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanghyun Yun updated HADOOP-10365:
--

Attachment: HADOOP-10365.patch

I added a try-catch-finally block and moved outputStream.close() into the finally clause.
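
A minimal sketch of the try/finally shape (a sketch only, not the attached 
patch; the variable names follow the snippet quoted below):
{code}
BufferedOutputStream outputStream = null;
try {
  outputStream = new BufferedOutputStream(new FileOutputStream(outputFile));
  // ... write the unpacked entry bytes ...
  outputStream.flush();
} finally {
  if (outputStream != null) {
    outputStream.close();
  }
}
{code}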

> BufferedOutputStream in FileUtil#unpackEntries() should be closed in finally 
> block
> --
>
> Key: HADOOP-10365
> URL: https://issues.apache.org/jira/browse/HADOOP-10365
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Ted Yu
>Priority: Minor
> Attachments: HADOOP-10365.patch
>
>
> {code}
> BufferedOutputStream outputStream = new BufferedOutputStream(
> new FileOutputStream(outputFile));
> ...
> outputStream.flush();
> outputStream.close();
> {code}
> outputStream should be closed in finally block.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10365) BufferedOutputStream in FileUtil#unpackEntries() should be closed in finally block

2014-08-19 Thread Sanghyun Yun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanghyun Yun updated HADOOP-10365:
--

Status: Patch Available  (was: Open)

> BufferedOutputStream in FileUtil#unpackEntries() should be closed in finally 
> block
> --
>
> Key: HADOOP-10365
> URL: https://issues.apache.org/jira/browse/HADOOP-10365
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Ted Yu
>Priority: Minor
> Attachments: HADOOP-10365.patch
>
>
> {code}
> BufferedOutputStream outputStream = new BufferedOutputStream(
> new FileOutputStream(outputFile));
> ...
> outputStream.flush();
> outputStream.close();
> {code}
> outputStream should be closed in finally block.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10364) JsonGenerator in Configuration#dumpConfiguration() is not closed

2014-08-19 Thread Sanghyun Yun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanghyun Yun updated HADOOP-10364:
--

Status: Patch Available  (was: Open)

> JsonGenerator in Configuration#dumpConfiguration() is not closed
> 
>
> Key: HADOOP-10364
> URL: https://issues.apache.org/jira/browse/HADOOP-10364
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Ted Yu
> Attachments: HADOOP-10364.patch
>
>
> {code}
> JsonGenerator dumpGenerator = dumpFactory.createJsonGenerator(out);
> {code}
> dumpGenerator is not closed in Configuration#dumpConfiguration()
> Looking at the source code of 
> org.codehaus.jackson.impl.WriterBasedGenerator#close(), there is more than 
> flushing the buffer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10364) JsonGenerator in Configuration#dumpConfiguration() is not closed

2014-08-19 Thread Sanghyun Yun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanghyun Yun updated HADOOP-10364:
--

Attachment: HADOOP-10364.patch

I modify "flush()" at the end to "close()".
close function will force flushing of output and close underlying output stream.
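
Roughly, the change described above amounts to this (a sketch, not the exact 
diff from the attached patch):
{code}
// previously the method ended with: dumpGenerator.flush();
// close() flushes any buffered output and also closes the underlying writer
dumpGenerator.close();
{code}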

> JsonGenerator in Configuration#dumpConfiguration() is not closed
> 
>
> Key: HADOOP-10364
> URL: https://issues.apache.org/jira/browse/HADOOP-10364
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Reporter: Ted Yu
> Attachments: HADOOP-10364.patch
>
>
> {code}
> JsonGenerator dumpGenerator = dumpFactory.createJsonGenerator(out);
> {code}
> dumpGenerator is not closed in Configuration#dumpConfiguration()
> Looking at the source code of 
> org.codehaus.jackson.impl.WriterBasedGenerator#close(), there is more than 
> flushing the buffer.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10363) Closing of SequenceFile.Reader / SequenceFile.Writer in DistCh should check against null

2014-08-19 Thread Sanghyun Yun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanghyun Yun updated HADOOP-10363:
--

Attachment: HADOOP-10363.patch

I added a null check before calling close().
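
A minimal sketch of the null-guarded close for the writer case (a sketch only, 
mirroring the snippet quoted below rather than the attached patch):
{code}
SequenceFile.Writer opWriter = null;
try {
  opWriter = SequenceFile.createWriter(fs, jobconf, opList, Text.class,
      FileOperation.class, SequenceFile.CompressionType.NONE);
  // ... write the file operations ...
} finally {
  if (opWriter != null) {
    opWriter.close();
  }
}
{code}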

> Closing of SequenceFile.Reader / SequenceFile.Writer in DistCh should check 
> against null
> 
>
> Key: HADOOP-10363
> URL: https://issues.apache.org/jira/browse/HADOOP-10363
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
> Attachments: HADOOP-10363.patch
>
>
> Here is related code:
> {code}
>   try {
> for(in = new SequenceFile.Reader(fs, srcs, job); in.next(key, value); 
> ) {
> ...
>   finally {
> in.close();
>   }
> {code}
> {code}
> SequenceFile.Writer opWriter = null;
> try {
>   opWriter = SequenceFile.createWriter(fs, jobconf, opList, Text.class,
>   FileOperation.class, SequenceFile.CompressionType.NONE);
> ...
> } finally {
>   opWriter.close();
> }
> {code}
> If ctor of Reader / Writer throws exception, the close() would be called on 
> null object.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10363) Closing of SequenceFile.Reader / SequenceFile.Writer in DistCh should check against null

2014-08-19 Thread Sanghyun Yun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanghyun Yun updated HADOOP-10363:
--

Status: Patch Available  (was: Open)

> Closing of SequenceFile.Reader / SequenceFile.Writer in DistCh should check 
> against null
> 
>
> Key: HADOOP-10363
> URL: https://issues.apache.org/jira/browse/HADOOP-10363
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
> Attachments: HADOOP-10363.patch
>
>
> Here is related code:
> {code}
>   try {
> for(in = new SequenceFile.Reader(fs, srcs, job); in.next(key, value); 
> ) {
> ...
>   finally {
> in.close();
>   }
> {code}
> {code}
> SequenceFile.Writer opWriter = null;
> try {
>   opWriter = SequenceFile.createWriter(fs, jobconf, opList, Text.class,
>   FileOperation.class, SequenceFile.CompressionType.NONE);
> ...
> } finally {
>   opWriter.close();
> }
> {code}
> If ctor of Reader / Writer throws exception, the close() would be called on 
> null object.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10362) Closing of Reader in HadoopArchives#HArchiveInputFormat#getSplits() should check against null

2014-08-19 Thread Sanghyun Yun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanghyun Yun updated HADOOP-10362:
--

Status: Patch Available  (was: Open)

> Closing of Reader in HadoopArchives#HArchiveInputFormat#getSplits() should 
> check against null
> -
>
> Key: HADOOP-10362
> URL: https://issues.apache.org/jira/browse/HADOOP-10362
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
> Attachments: HADOOP-10362.patch
>
>
> {code}
>   try {
> reader = new SequenceFile.Reader(fs, src, jconf);
> ...
>   finally {
> reader.close();
>   }
> {code}
> If Reader ctor throws exception, the close() method would be called on null 
> object.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10362) Closing of Reader in HadoopArchives#HArchiveInputFormat#getSplits() should check against null

2014-08-19 Thread Sanghyun Yun (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sanghyun Yun updated HADOOP-10362:
--

Attachment: HADOOP-10362.patch

I added a null check before calling close().
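
A minimal sketch of the same idea for the reader (a sketch only, based on the 
snippet quoted below rather than the attached patch):
{code}
SequenceFile.Reader reader = null;
try {
  reader = new SequenceFile.Reader(fs, src, jconf);
  // ... read the index entries and build the splits ...
} finally {
  if (reader != null) {
    reader.close();
  }
}
{code}
Alternatively, org.apache.hadoop.io.IOUtils.closeStream(reader) performs the 
null check (and swallows any IOException from close()), if that behavior is 
acceptable here.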

> Closing of Reader in HadoopArchives#HArchiveInputFormat#getSplits() should 
> check against null
> -
>
> Key: HADOOP-10362
> URL: https://issues.apache.org/jira/browse/HADOOP-10362
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
> Attachments: HADOOP-10362.patch
>
>
> {code}
>   try {
> reader = new SequenceFile.Reader(fs, src, jconf);
> ...
>   finally {
> reader.close();
>   }
> {code}
> If Reader ctor throws exception, the close() method would be called on null 
> object.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8642) io.native.lib.available only controls zlib

2014-08-19 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-8642:
--

Attachment: HADOOP-8642.3.patch

Update the patch to
# discard the change in {{getLoadNativeLibraries(Configuration)}}
# clean up the code

> io.native.lib.available only controls zlib
> --
>
> Key: HADOOP-8642
> URL: https://issues.apache.org/jira/browse/HADOOP-8642
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
> Attachments: HADOOP-8642.2.patch, HADOOP-8642.3.patch, 
> HADOOP-8642.patch
>
>
> Per core-default.xml, {{io.native.lib.available}} indicates "Should native 
> hadoop libraries, if present, be used"; however, it looks like it only affects 
> zlib. Since we always load the native library, this means we may use native 
> libraries even if io.native.lib.available is set to false.
> Let's make the flag work as advertised - rather than always loading the 
> native hadoop library, we only attempt to load the library (and report that 
> native is available) if this flag is set. Since io.native.lib.available 
> defaults to true, the default behavior should remain unchanged (except that 
> now we won't actually try to load the library if this flag is disabled).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8642) io.native.lib.available only controls zlib

2014-08-19 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-8642:
--

Target Version/s: 2.6.0
  Status: Patch Available  (was: Open)

{{TestNativeCodeLoader}} will probably be skipped in Jenkins. The test can be 
run manually with {{mvn test -Pnative -Dtest=TestNativeCodeLoader 
-Drequire.test.libhadoop=true}}.

> io.native.lib.available only controls zlib
> --
>
> Key: HADOOP-8642
> URL: https://issues.apache.org/jira/browse/HADOOP-8642
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
> Attachments: HADOOP-8642.2.patch, HADOOP-8642.3.patch, 
> HADOOP-8642.patch
>
>
> Per core-default.xml, {{io.native.lib.available}} indicates "Should native 
> hadoop libraries, if present, be used"; however, it looks like it only affects 
> zlib. Since we always load the native library, this means we may use native 
> libraries even if io.native.lib.available is set to false.
> Let's make the flag work as advertised - rather than always loading the 
> native hadoop library, we only attempt to load the library (and report that 
> native is available) if this flag is set. Since io.native.lib.available 
> defaults to true, the default behavior should remain unchanged (except that 
> now we won't actually try to load the library if this flag is disabled).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-19 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102154#comment-14102154
 ] 

Allen Wittenauer commented on HADOOP-9902:
--

Jenkins appears to be pretty horked.  The patch clearly applies, there are no 
tests associated with the shell code, and previous versions applied with no 
issues, so I'm just going to commit -16.

Thanks all!

> Shell script rewrite
> 
>
> Key: HADOOP-9902
> URL: https://issues.apache.org/jira/browse/HADOOP-9902
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: releasenotes
> Fix For: 3.0.0
>
> Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
> HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
> HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-16.patch, 
> HADOOP-9902-2.patch, HADOOP-9902-3.patch, HADOOP-9902-4.patch, 
> HADOOP-9902-5.patch, HADOOP-9902-6.patch, HADOOP-9902-7.patch, 
> HADOOP-9902-8.patch, HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, 
> hadoop-9902-1.patch, more-info.txt
>
>
> Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9902) Shell script rewrite

2014-08-19 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9902:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed to trunk, svn rev 1618847.  Closing.

> Shell script rewrite
> 
>
> Key: HADOOP-9902
> URL: https://issues.apache.org/jira/browse/HADOOP-9902
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: releasenotes
> Fix For: 3.0.0
>
> Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
> HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
> HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-16.patch, 
> HADOOP-9902-2.patch, HADOOP-9902-3.patch, HADOOP-9902-4.patch, 
> HADOOP-9902-5.patch, HADOOP-9902-6.patch, HADOOP-9902-7.patch, 
> HADOOP-9902-8.patch, HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, 
> hadoop-9902-1.patch, more-info.txt
>
>
> Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10978) HADOOP_IDENT_STRING is overriden in hadoop-env.sh

2014-08-19 Thread Mathias Herberts (JIRA)
Mathias Herberts created HADOOP-10978:
-

 Summary: HADOOP_IDENT_STRING is overriden in hadoop-env.sh
 Key: HADOOP-10978
 URL: https://issues.apache.org/jira/browse/HADOOP-10978
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mathias Herberts
Priority: Minor


hadoop-env.sh forces HADOOP_IDENT_STRING to $USER possibly overriding a 
previously set value.

Instead hadoop-env.sh should set HADOOP_IDENT_STRING to:

export HADOOP_IDENT_STRING=${HADOOP_IDENT_STRING:-$USER}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10615) FileInputStream in JenkinsHash#main() is never closed

2014-08-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102172#comment-14102172
 ] 

Hadoop QA commented on HADOOP-10615:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12662559/HADOOP-10615-2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestActiveStandbyElector

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4503//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4503//console

This message is automatically generated.

> FileInputStream in JenkinsHash#main() is never closed
> -
>
> Key: HADOOP-10615
> URL: https://issues.apache.org/jira/browse/HADOOP-10615
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Chen He
>Priority: Minor
> Attachments: HADOOP-10615-2.patch, HADOOP-10615.patch
>
>
> {code}
> FileInputStream in = new FileInputStream(args[0]);
> {code}
> The above FileInputStream is not closed upon exit of main.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10970) Cleanup KMS configuration keys

2014-08-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102176#comment-14102176
 ] 

Hadoop QA commented on HADOOP-10970:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12662627/hadoop-10970.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-kms.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4504//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4504//console

This message is automatically generated.

> Cleanup KMS configuration keys
> --
>
> Key: HADOOP-10970
> URL: https://issues.apache.org/jira/browse/HADOOP-10970
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-10970.001.patch, hadoop-10970.002.patch, 
> hadoop-10970.003.patch
>
>
> It'd be nice to add descriptions to the config keys in kms-site.xml.
> Also, it'd be good to rename key.provider.path to key.provider.uri for 
> clarity, or just drop ".path".



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10530) Make hadoop trunk build on Java7+ only

2014-08-19 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102185#comment-14102185
 ] 

Allen Wittenauer commented on HADOOP-10530:
---

At this point, shouldn't trunk really be jdk 1.8?

> Make hadoop trunk build on Java7+ only
> --
>
> Key: HADOOP-10530
> URL: https://issues.apache.org/jira/browse/HADOOP-10530
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0, 2.4.0
> Environment: Java 1.7+
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-10530-001.patch, HADOOP-10530-002.patch
>
>
> As discussed on hadoop-common, hadoop 3 is envisaged to be Java7+ *only* 
> -this JIRA covers switching the build for this
> # maven enforcer plugin to set Java version = {{[1.7)}}
> # compiler to set language to java 1.7



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10978) HADOOP_IDENT_STRING is overriden in hadoop-env.sh

2014-08-19 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102196#comment-14102196
 ] 

Allen Wittenauer commented on HADOOP-10978:
---

This problem has already been fixed in trunk as part of HADOOP-9902.

> HADOOP_IDENT_STRING is overriden in hadoop-env.sh
> -
>
> Key: HADOOP-10978
> URL: https://issues.apache.org/jira/browse/HADOOP-10978
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mathias Herberts
>Priority: Minor
>
> hadoop-env.sh forces HADOOP_IDENT_STRING to $USER possibly overriding a 
> previously set value.
> Instead hadoop-env.sh should set HADOOP_IDENT_STRING to:
> export HADOOP_IDENT_STRING=${HADOOP_IDENT_STRING:-$USER}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10788) [post-HADOOP-9902] Rewrite httpfs, kms, sls, and other stragglers

2014-08-19 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10788:
--

Summary: [post-HADOOP-9902] Rewrite httpfs, kms, sls, and other stragglers  
(was: Rewrite httpfs, kms, sls, and other stragglers from HADOOP-9902)

> [post-HADOOP-9902] Rewrite httpfs, kms, sls, and other stragglers
> -
>
> Key: HADOOP-10788
> URL: https://issues.apache.org/jira/browse/HADOOP-10788
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>
> There are some stragglers not targeted by HADOOP-9902.  These should also get 
> rewritten to use the new hadoop-functions.sh framework. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10908) [post-HADOOP-9902] Cluster Node Setup needs updating

2014-08-19 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10908:
--

Summary: [post-HADOOP-9902] Cluster Node Setup needs updating  (was: 
Cluster Node Setup needs updating post-HADOOP-9902)

> [post-HADOOP-9902] Cluster Node Setup needs updating
> 
>
> Key: HADOOP-10908
> URL: https://issues.apache.org/jira/browse/HADOOP-10908
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> A lot of the instructions in the cluster node setup are not good practices 
> post-9902.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10787) [post-HADOOP-9902] Rename/remove DEFAULT_LIBEXEC_DIR from the shell scripts

2014-08-19 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10787:
--

Summary: [post-HADOOP-9902] Rename/remove DEFAULT_LIBEXEC_DIR from the 
shell scripts  (was: Rename DEFAULT_LIBEXEC_DIR from the shell scripts)

> [post-HADOOP-9902] Rename/remove DEFAULT_LIBEXEC_DIR from the shell scripts
> ---
>
> Key: HADOOP-10787
> URL: https://issues.apache.org/jira/browse/HADOOP-10787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>
> DEFAULT_LIBEXEC_DIR pollutes the shell name space.  It should be renamed to 
> HADOOP_DEFAULT_LIBEXEC_DIR.  Unfortunately, this touches every single shell 
> script.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10879) [post-HADOOP-9902] Rename *-env.sh in the tree to *-env.sh.example

2014-08-19 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10879:
--

Summary: [post-HADOOP-9902] Rename *-env.sh in the tree to *-env.sh.example 
 (was: Rename *-env.sh in the tree to *-env.sh.example)

> [post-HADOOP-9902] Rename *-env.sh in the tree to *-env.sh.example
> --
>
> Key: HADOOP-10879
> URL: https://issues.apache.org/jira/browse/HADOOP-10879
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Allen Wittenauer
>
> With HADOOP-9902 in place, we no longer have to ship the *-env.sh files as 
> such and can provide only examples.  This goes a long way toward being able 
> to upgrade the binaries in place, since we would no longer overwrite those 
> files upon extraction.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10854) [post-HADOOP-9902] unit tests for the shell scripts

2014-08-19 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10854:
--

Summary: [post-HADOOP-9902] unit tests for the shell scripts  (was: unit 
tests for the shell scripts)

> [post-HADOOP-9902] unit tests for the shell scripts
> ---
>
> Key: HADOOP-10854
> URL: https://issues.apache.org/jira/browse/HADOOP-10854
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Allen Wittenauer
>
> With HADOOP-9902 moving a lot of functionality to functions, we should build 
> some unit tests for them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10854) [post-HADOOP-9902] unit tests for the shell scripts

2014-08-19 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10854?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10854:
--

Description: With HADOOP-9902 moving a lot of the core functionality to 
functions, we should build some unit tests for them.  (was: With HADOOP-9902 
moving a lot of functionality to functions, we should build some unit tests for 
them.)

> [post-HADOOP-9902] unit tests for the shell scripts
> ---
>
> Key: HADOOP-10854
> URL: https://issues.apache.org/jira/browse/HADOOP-10854
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Allen Wittenauer
>
> With HADOOP-9902 moving a lot of the core functionality to functions, we 
> should build some unit tests for them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10979) [post-HADOOP-9902] Auto-entries in hadoop_usage

2014-08-19 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-10979:
-

 Summary: [post-HADOOP-9902] Auto-entries in hadoop_usage
 Key: HADOOP-10979
 URL: https://issues.apache.org/jira/browse/HADOOP-10979
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Allen Wittenauer
Priority: Minor


It would make adding common options to hadoop_usage output easier if some 
entries were auto-populated.  This is similar to what happens in FsShell and 
other parts of the Java code.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10979) [post-HADOOP-9902] Auto-entries in hadoop_usage

2014-08-19 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102209#comment-14102209
 ] 

Allen Wittenauer commented on HADOOP-10979:
---

In particular:

* --config

*  --daemon options are standardized across all of the subsystems

* help

* version?



> [post-HADOOP-9902] Auto-entries in hadoop_usage
> ---
>
> Key: HADOOP-10979
> URL: https://issues.apache.org/jira/browse/HADOOP-10979
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Allen Wittenauer
>Priority: Minor
>
> It would make adding common options to hadoop_usage output easier if some 
> entries were auto-populated.  This is similar to what happens in FsShell and 
> other parts of the Java code.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10901) [post-HADOOP-9902] provide un-camelCased versions of shell commands

2014-08-19 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10901:
--

Summary: [post-HADOOP-9902] provide un-camelCased versions of shell 
commands  (was: provide un-camelCased versions of shell commands)

> [post-HADOOP-9902] provide un-camelCased versions of shell commands
> ---
>
> Key: HADOOP-10901
> URL: https://issues.apache.org/jira/browse/HADOOP-10901
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Allen Wittenauer
>
> There is a heavy disposition toward camelCase subcommands because they 
> reflect what is in the Java code.  However, this runs counter to shell 
> conventions.  We should update the case statements to accept both the 
> camelCase and the fully lowercase options.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10530) Make hadoop trunk build on Java7+ only

2014-08-19 Thread Tsuyoshi OZAWA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102224#comment-14102224
 ] 

Tsuyoshi OZAWA commented on HADOOP-10530:
-

Good catch, Allen. JDK 8u11 is available at this time. 
http://www.oracle.com/technetwork/java/javase/8u-relnotes-2225394.html

> Make hadoop trunk build on Java7+ only
> --
>
> Key: HADOOP-10530
> URL: https://issues.apache.org/jira/browse/HADOOP-10530
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.0.0, 2.4.0
> Environment: Java 1.7+
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-10530-001.patch, HADOOP-10530-002.patch
>
>
> As discussed on hadoop-common, hadoop 3 is envisaged to be Java7+ *only* 
> -this JIRA covers switching the build for this
> # maven enforcer plugin to set Java version = {{[1.7)}}
> # compiler to set language to java 1.7



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10893) isolated classloader on the client side

2014-08-19 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102227#comment-14102227
 ] 

Kihwal Lee commented on HADOOP-10893:
-

+1 the patch looks good. [~jlowe], do you have any further comments?

> isolated classloader on the client side
> ---
>
> Key: HADOOP-10893
> URL: https://issues.apache.org/jira/browse/HADOOP-10893
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 2.4.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-10893.patch, HADOOP-10893.patch, 
> HADOOP-10893.patch, HADOOP-10893.patch, HADOOP-10893.patch, 
> classloader-test.tar.gz
>
>
> We have the job classloader on the mapreduce tasks that run on the cluster. 
> It has the benefit of being able to isolate the class space for user code and 
> avoid version clashes.
> Although it occurs less often, version clashes do occur on the client JVM. It 
> would be good to introduce an isolated classloader on the client side as well 
> to address this. A natural point to introduce this may be through RunJar, as 
> that's how most hadoop jobs are run.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10978) HADOOP_IDENT_STRING is overriden in hadoop-env.sh

2014-08-19 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10978:
--

Affects Version/s: 2.6.0

> HADOOP_IDENT_STRING is overriden in hadoop-env.sh
> -
>
> Key: HADOOP-10978
> URL: https://issues.apache.org/jira/browse/HADOOP-10978
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Mathias Herberts
>Priority: Minor
>
> hadoop-env.sh forces HADOOP_IDENT_STRING to $USER possibly overriding a 
> previously set value.
> Instead hadoop-env.sh should set HADOOP_IDENT_STRING to:
> export HADOOP_IDENT_STRING=${HADOOP_IDENT_STRING:-$USER}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102268#comment-14102268
 ] 

Hadoop QA commented on HADOOP-9902:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12662613/HADOOP-9902-16.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-assemblies hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.ha.TestActiveStandbyElector
  org.apache.hadoop.ha.TestZKFailoverController
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4502//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4502//console

This message is automatically generated.

> Shell script rewrite
> 
>
> Key: HADOOP-9902
> URL: https://issues.apache.org/jira/browse/HADOOP-9902
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: releasenotes
> Fix For: 3.0.0
>
> Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
> HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
> HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-16.patch, 
> HADOOP-9902-2.patch, HADOOP-9902-3.patch, HADOOP-9902-4.patch, 
> HADOOP-9902-5.patch, HADOOP-9902-6.patch, HADOOP-9902-7.patch, 
> HADOOP-9902-8.patch, HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, 
> hadoop-9902-1.patch, more-info.txt
>
>
> Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10968) hadoop common fails to detect java_libarch on ppc64le

2014-08-19 Thread Dinar Valeev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102285#comment-14102285
 ] 

Dinar Valeev commented on HADOOP-10968:
---

ppc64 (BE) should still be fine, since libarch matches CMAKE_SYSTEM_PROCESSOR.

> hadoop common fails to detect java_libarch on ppc64le
> -
>
> Key: HADOOP-10968
> URL: https://issues.apache.org/jira/browse/HADOOP-10968
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.3.0
>Reporter: Dinar Valeev
> Fix For: 0.23.2
>
> Attachments: 0001-Set-java_libarch-for-ppc64le-2.patch
>
>
> [INFO] 
> [INFO] --- maven-antrun-plugin:1.7:run (make) @ hadoop-common ---
> [INFO] Executing tasks
> main:
>  [exec] -- The C compiler identification is GNU 4.8.3
>  [exec] -- The CXX compiler identification is GNU 4.8.3
>  [exec] -- Check for working C compiler: /usr/bin/cc
>  [exec] -- Check for working C compiler: /usr/bin/cc -- works
>  [exec] -- Detecting C compiler ABI info
>  [exec] -- Detecting C compiler ABI info - done
>  [exec] -- Check for working CXX compiler: /usr/bin/c++
>  [exec] -- Check for working CXX compiler: /usr/bin/c++ -- works
>  [exec] JAVA_HOME=, JAVA_JVM_LIBRARY=JAVA_JVM_LIBRARY-NOTFOUND
>  [exec] 
> JAVA_INCLUDE_PATH=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include, 
> JAVA_INCLUDE_PATH2=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include/linux
>  [exec] CMake Error at JNIFlags.cmake:114 (MESSAGE):
>  [exec]   Failed to find a viable JVM installation under JAVA_HOME.
>  [exec] Call Stack (most recent call first):
>  [exec]   CMakeLists.txt:24 (include)
>  [exec] 
>  [exec] 
>  [exec] -- Detecting CXX compiler ABI info
>  [exec] -- Detecting CXX compiler ABI info - done
>  [exec] -- Configuring incomplete, errors occurred!
>  [exec] See also 
> "/root/bigtop/build/hadoop/rpm/BUILD/hadoop-2.3.0-src/hadoop-common-project/hadoop-common/target/native/CMakeFiles/CMakeOutput.log".
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] Apache Hadoop Main . SUCCESS [ 10.680 s]
> [INFO] Apache Hadoop Project POM .. SUCCESS [  0.716 s]
> [INFO] Apache Hadoop Annotations .. SUCCESS [  3.270 s]
> [INFO] Apache Hadoop Assemblies ... SUCCESS [  0.274 s]
> [INFO] Apache Hadoop Project Dist POM . SUCCESS [  1.819 s]
> [INFO] Apache Hadoop Maven Plugins  SUCCESS [  3.284 s]
> [INFO] Apache Hadoop MiniKDC .. SUCCESS [  2.863 s]
> [INFO] Apache Hadoop Auth . SUCCESS [  4.032 s]
> [INFO] Apache Hadoop Auth Examples  SUCCESS [  2.475 s]
> [INFO] Apache Hadoop Common ... FAILURE [ 10.458 s]
> [INFO] Apache Hadoop NFS .. SKIPPED
> [INFO] Apache Hadoop Common Project ... SKIPPED
> [INFO] Apache Hadoop HDFS . SKIPPED
> [INFO] Apache Hadoop HttpFS ... SKIPPED
> [INFO] Apache Hadoop HDFS BookKeeper Journal .. SKIPPED
> [INFO] Apache Hadoop HDFS-NFS . SKIPPED
> [INFO] Apache Hadoop HDFS Project . SKIPPED
> [INFO] hadoop-yarn  SKIPPED
> [INFO] hadoop-yarn-api  SKIPPED
> [INFO] hadoop-yarn-common . SKIPPED
> [INFO] hadoop-yarn-server . SKIPPED
> [INFO] hadoop-yarn-server-common .. SKIPPED
> [INFO] hadoop-yarn-server-nodemanager . SKIPPED
> [INFO] hadoop-yarn-server-web-proxy ... SKIPPED
> [INFO] hadoop-yarn-server-resourcemanager . SKIPPED
> [INFO] hadoop-yarn-server-tests ... SKIPPED
> [INFO] hadoop-yarn-client . SKIPPED
> [INFO] hadoop-yarn-applications ... SKIPPED
> [INFO] hadoop-yarn-applications-distributedshell .. SKIPPED
> [INFO] hadoop-yarn-applications-unmanaged-am-launcher . SKIPPED
> [INFO] hadoop-yarn-site ... SKIPPED
> [INFO] hadoop-yarn-project  SKIPPED
> [INFO] hadoop-mapreduce-client  SKIPPED
> [INFO] hadoop-mapreduce-client-core ... SKIPPED
> [INFO] hadoop-mapreduce-client-common . SKIPPED
> [INFO] hadoop-mapreduce-client-shuffle 

[jira] [Created] (HADOOP-10980) TestActiveStandbyElector fails occasionally in trunk

2014-08-19 Thread Ted Yu (JIRA)
Ted Yu created HADOOP-10980:
---

 Summary: TestActiveStandbyElector fails occasionally in trunk
 Key: HADOOP-10980
 URL: https://issues.apache.org/jira/browse/HADOOP-10980
 Project: Hadoop Common
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor


From https://builds.apache.org/job/Hadoop-Common-trunk/1211/consoleFull :
{code}
Running org.apache.hadoop.ha.TestActiveStandbyElector
Tests run: 23, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.7 sec <<< 
FAILURE! - in org.apache.hadoop.ha.TestActiveStandbyElector
testWithoutZKServer(org.apache.hadoop.ha.TestActiveStandbyElector)  Time 
elapsed: 0.051 sec  <<< FAILURE!
java.lang.AssertionError: Did not throw zookeeper connection loss exceptions!
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.ha.TestActiveStandbyElector.testWithoutZKServer(TestActiveStandbyElector.java:722)
{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10668) TestZKFailoverControllerStress#testExpireBackAndForth occasionally fails

2014-08-19 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102358#comment-14102358
 ] 

Ted Yu commented on HADOOP-10668:
-

Failed again in 
https://builds.apache.org/job/Hadoop-Common-trunk/1211/consoleFull

> TestZKFailoverControllerStress#testExpireBackAndForth occasionally fails
> 
>
> Key: HADOOP-10668
> URL: https://issues.apache.org/jira/browse/HADOOP-10668
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Minor
>  Labels: test
>
> From 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/4018//testReport/org.apache.hadoop.ha/TestZKFailoverControllerStress/testExpireBackAndForth/
>  :
> {code}
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode
>   at org.apache.zookeeper.server.DataTree.getData(DataTree.java:648)
>   at org.apache.zookeeper.server.ZKDatabase.getData(ZKDatabase.java:371)
>   at 
> org.apache.hadoop.ha.MiniZKFCCluster.expireActiveLockHolder(MiniZKFCCluster.java:199)
>   at 
> org.apache.hadoop.ha.MiniZKFCCluster.expireAndVerifyFailover(MiniZKFCCluster.java:234)
>   at 
> org.apache.hadoop.ha.TestZKFailoverControllerStress.testExpireBackAndForth(TestZKFailoverControllerStress.java:84)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10970) Cleanup KMS configuration keys

2014-08-19 Thread Owen O'Malley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102382#comment-14102382
 ] 

Owen O'Malley commented on HADOOP-10970:


The comment on the hadoop.security.key.provider.path value says 'URI' instead 
of 'URI path'. Please fix it.

> Cleanup KMS configuration keys
> --
>
> Key: HADOOP-10970
> URL: https://issues.apache.org/jira/browse/HADOOP-10970
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-10970.001.patch, hadoop-10970.002.patch, 
> hadoop-10970.003.patch
>
>
> It'd be nice to add descriptions to the config keys in kms-site.xml.
> Also, it'd be good to rename key.provider.path to key.provider.uri for 
> clarity, or just drop ".path".



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10059) RPC authentication and authorization metrics overflow to negative values on busy clusters

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102391#comment-14102391
 ] 

Hudson commented on HADOOP-10059:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #651 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/651/])
HADOOP-10059. RPC authentication and authorization metrics overflow to negative 
values on busy clusters. Contributed by Tsuyoshi OZAWA and Akira AJISAKA 
(jlowe: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618659)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java


> RPC authentication and authorization metrics overflow to negative values on 
> busy clusters
> -
>
> Key: HADOOP-10059
> URL: https://issues.apache.org/jira/browse/HADOOP-10059
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.23.9, 2.2.0
>Reporter: Jason Lowe
>Assignee: Tsuyoshi OZAWA
>Priority: Minor
> Fix For: 3.0.0, 2.6.0
>
> Attachments: HADOOP-10059.1.patch, HADOOP-10059.2.patch
>
>
> The RPC metrics for authorization and authentication successes can easily 
> overflow to negative values on a busy cluster that has been up for a long 
> time.  We should consider providing 64-bit values for these counters.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10873) Fix dead links in the API doc

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102396#comment-14102396
 ] 

Hudson commented on HADOOP-10873:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #651 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/651/])
HADOOP-10873. Fix dead link in Configuration javadoc (Akira AJISAKA via aw) 
(aw: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618721)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java


> Fix dead links in the API doc
> -
>
> Key: HADOOP-10873
> URL: https://issues.apache.org/jira/browse/HADOOP-10873
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira AJISAKA
>
> There are a lot of dead links in [Hadoop API 
> doc|http://hadoop.apache.org/docs/r2.4.1/api/]. We should fix them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10973) Native Libraries Guide contains format error

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102393#comment-14102393
 ] 

Hudson commented on HADOOP-10973:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #651 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/651/])
HADOOP-10973. Native Libraries Guide contains format error. (Contributed by 
Peter Klavins) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618682)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm


> Native Libraries Guide contains format error
> 
>
> Key: HADOOP-10973
> URL: https://issues.apache.org/jira/browse/HADOOP-10973
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Peter Klavins
>Assignee: Peter Klavins
>Priority: Minor
>  Labels: apt, documentation, xdocs
> Fix For: 3.0.0, 2.6.0
>
> Attachments: HADOOP-10973.patch
>
>
> The move from xdocs to APT introduced a formatting bug so that the sub-list 
> under Usage point 4 was merged into the text itself and no longer appeared as 
> a sub-list. Compare xdocs version 
> http://hadoop.apache.org/docs/r1.2.1/native_libraries.html#Native+Hadoop+Library
>  to APT version 
> http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/NativeLibraries.html#Native_Hadoop_Library.
> The patch is to trunk, but is also valid for released versions 0.23.11, 
> 2.2.0, 2.3.0, 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if 
> deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10972) Native Libraries Guide contains mis-spelt build line

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102390#comment-14102390
 ] 

Hudson commented on HADOOP-10972:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #651 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/651/])
HADOOP-10972. Native Libraries Guide contains mis-spelt build line (Peter 
Klavins via aw) (aw: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618719)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm


> Native Libraries Guide contains mis-spelt build line
> 
>
> Key: HADOOP-10972
> URL: https://issues.apache.org/jira/browse/HADOOP-10972
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Peter Klavins
>Assignee: Peter Klavins
>  Labels: documentation, newbie
> Fix For: 3.0.0, 2.6.0
>
> Attachments: HADOOP-10972.patch
>
>
> The Native Libraries Guide mis-spells the define 'skipTests' with a lowercase 
> 't' in the build line. The correct build line is:
> {code:none}
> $ mvn package -Pdist,native -DskipTests -Dtar
> {code}
> Patch is to trunk, but is also valid for released versions 2.2.0, 2.3.0, 
> 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102402#comment-14102402
 ] 

Hudson commented on HADOOP-9902:


FAILURE: Integrated in Hadoop-Yarn-trunk #651 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/651/])
HADOOP-9902. Shell script rewrite (aw) (aw: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618847)
* 
/hadoop/common/trunk/hadoop-assemblies/src/main/resources/assemblies/hadoop-dist.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemons.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-layout.sh.example
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/rcc
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/slaves.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/start-all.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/stop-all.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/distribute-exclude.sh
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs-config.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/refresh-namenodes.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-balancer.sh
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-secure-dns.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/stop-balancer.sh
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/stop-dfs.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/stop-secure-dns.sh
* /hadoop/common/trunk/hadoop-mapreduce-project/bin/mapred
* /hadoop/common/trunk/hadoop-mapreduce-project/bin/mapred-config.sh
* /hadoop/common/trunk/hadoop-mapreduce-project/bin/mr-jobhistory-daemon.sh
* /hadoop/common/trunk/hadoop-mapreduce-project/conf/mapred-env.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/slaves.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/start-yarn.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/stop-yarn.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn-config.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn-daemon.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn-daemons.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/conf/yarn-env.sh


> Shell script rewrite
> 
>
> Key: HADOOP-9902
> URL: https://issues.apache.org/jira/browse/HADOOP-9902
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: releasenotes
> Fix For: 3.0.0
>
> Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
> HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
> HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-16.patch, 
> HADOOP-9902-2.patch, HADOOP-9902-3.patch, HADOOP-9902-4.patch, 
> HADOOP-9902-5.patch, HADOOP-9902-6.patch, HADOOP-9902-7.patch, 
> HADOOP-9902-8.patch, HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, 
> hadoop-9902-1.patch, more-info.txt
>
>
> Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10975) org.apache.hadoop.util.DataChecksum should support native checksum calculation

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102394#comment-14102394
 ] 

Hudson commented on HADOOP-10975:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #651 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/651/])
Update CHANGES.txt for HDFS-6561 which was renamed to HADOOP-10975. (todd: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618686)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> org.apache.hadoop.util.DataChecksum should support native checksum calculation
> --
>
> Key: HADOOP-10975
> URL: https://issues.apache.org/jira/browse/HADOOP-10975
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Reporter: James Thomas
>Assignee: James Thomas
> Fix For: 2.6.0
>
> Attachments: HDFS-6561.2.patch, HDFS-6561.3.patch, HDFS-6561.4.patch, 
> HDFS-6561.5.patch, HDFS-6561.patch, hdfs-6561-just-hadoop-changes.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10975) org.apache.hadoop.util.DataChecksum should support native checksum calculation

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102395#comment-14102395
 ] 

Hudson commented on HADOOP-10975:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #651 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/651/])
Update CHANGES.txt for HDFS-6561 which was renamed to HADOOP-10975. (todd: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618686)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> org.apache.hadoop.util.DataChecksum should support native checksum calculation
> --
>
> Key: HADOOP-10975
> URL: https://issues.apache.org/jira/browse/HADOOP-10975
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Reporter: James Thomas
>Assignee: James Thomas
> Fix For: 2.6.0
>
> Attachments: HDFS-6561.2.patch, HDFS-6561.3.patch, HDFS-6561.4.patch, 
> HDFS-6561.5.patch, HDFS-6561.patch, hdfs-6561-just-hadoop-changes.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10641) Introduce Coordination Engine interface

2014-08-19 Thread Michael Parkin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Parkin updated HADOOP-10641:


Attachment: ce-tla.zip

Please find attached a first attempt at a CE specification in TLA+. With TLC 
and the attached configuration file, model checking is successful.

The specification is for a CE that accepts proposals (containing values 
submitted by proposers) and produces a sequence of agreements. The mechanism 
through which proposals are agreed will depend on the coordination algorithm 
used by the CE implementation; for now, the specification only states that a 
submitted proposal ends up in the agreement sequence.

> Introduce Coordination Engine interface
> ---
>
> Key: HADOOP-10641
> URL: https://issues.apache.org/jira/browse/HADOOP-10641
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Attachments: HADOOP-10641.patch, HADOOP-10641.patch, 
> HADOOP-10641.patch, HADOOP-10641.patch, ce-tla.zip, hadoop-coordination.patch
>
>
> Coordination Engine (CE) is a system, which allows to agree on a sequence of 
> events in a distributed system. In order to be reliable CE should be 
> distributed by itself.
> Coordination Engine can be based on different algorithms (paxos, raft, 2PC, 
> zab) and have different implementations, depending on use cases, reliability, 
> availability, and performance requirements.
> CE should have a common API, so that it could serve as a pluggable component 
> in different projects. The immediate beneficiaries are HDFS (HDFS-6469) and 
> HBase (HBASE-10909).
> First implementation is proposed to be based on ZooKeeper.
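
As a rough illustration of the kind of pluggable API the description calls for 
(this is not the interface defined in the attached patches; all names and 
signatures below are assumptions), a CE could expose something like:

{code}
// Hypothetical sketch only -- see the attached HADOOP-10641 patches for the
// real interface. Implementations (ZooKeeper, Paxos, Raft, 2PC, ZAB) accept
// proposals and deliver them back as a totally ordered sequence of agreements.

/** A value to be agreed upon, submitted by a proposer. */
interface Proposal {
  byte[] serialize();
}

/** Callback invoked once a proposal has been agreed in the global sequence. */
interface AgreementListener {
  void onAgreement(long sequenceNumber, Proposal proposal);
}

/** A pluggable Coordination Engine. */
interface CoordinationEngine {
  void start();
  void stop();
  void submitProposal(Proposal proposal);
  void registerListener(AgreementListener listener);
}
{code}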



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8896) Javadoc points to Wrong Reader and Writer classes in SequenceFile

2014-08-19 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-8896:
---

Attachment: HADOOP-8896-03.patch

Clean up one more link, as requested.

> Javadoc points to Wrong Reader and Writer classes in SequenceFile
> -
>
> Key: HADOOP-8896
> URL: https://issues.apache.org/jira/browse/HADOOP-8896
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, io
>Affects Versions: 2.0.1-alpha
>Reporter: Timothy Mann
>Priority: Trivial
>  Labels: sequence-file
> Attachments: HADOOP-8896-03.patch, HADOOP8896-01.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> Line 56 of org.apache.hadoop.io.SequenceFile refers to {@link Writer}, {@link 
> Reader} in the javadoc comment describing the class SequenceFile. When the 
> javadoc is built Reader and Writer link to java.io.Reader and java.io.Writer, 
> respectively. However, they should instead refer to {@link 
> SequenceFile.Reader} and {@link SequenceFile.Writer}.
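
A minimal sketch of the change involved (illustrative comment text only, not 
the exact SequenceFile javadoc):

{code}
// Unqualified: resolves against the java.io imports when javadoc is generated.
/** The {@link Writer} writes and the {@link Reader} reads SequenceFiles. */

// Qualified: resolves to the nested classes of SequenceFile itself.
/** The {@link SequenceFile.Writer} writes and the {@link SequenceFile.Reader}
 *  reads SequenceFiles. */
{code}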



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8896) Javadoc points to Wrong Reader and Writer classes in SequenceFile

2014-08-19 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-8896:
---

Attachment: (was: HADOOP-8896-03.patch)

> Javadoc points to Wrong Reader and Writer classes in SequenceFile
> -
>
> Key: HADOOP-8896
> URL: https://issues.apache.org/jira/browse/HADOOP-8896
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, io
>Affects Versions: 2.0.1-alpha
>Reporter: Timothy Mann
>Priority: Trivial
>  Labels: sequence-file
> Attachments: HADOOP-8896-02.patch, HADOOP8896-01.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> Line 56 of org.apache.hadoop.io.SequenceFile refers to {@link Writer}, {@link 
> Reader} in the javadoc comment describing the class SequenceFile. When the 
> javadoc is built Reader and Writer link to java.io.Reader and java.io.Writer, 
> respectively. However, they should instead refer to {@link 
> SequenceFile.Reader} and {@link SequenceFile.Writer}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-8896) Javadoc points to Wrong Reader and Writer classes in SequenceFile

2014-08-19 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-8896:
---

Attachment: HADOOP-8896-02.patch

Fix one more link as requested.

> Javadoc points to Wrong Reader and Writer classes in SequenceFile
> -
>
> Key: HADOOP-8896
> URL: https://issues.apache.org/jira/browse/HADOOP-8896
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, io
>Affects Versions: 2.0.1-alpha
>Reporter: Timothy Mann
>Priority: Trivial
>  Labels: sequence-file
> Attachments: HADOOP-8896-02.patch, HADOOP8896-01.patch
>
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> Line 56 of org.apache.hadoop.io.SequenceFile refers to {@link Writer}, {@link 
> Reader} in the javadoc comment describing the class SequenceFile. When the 
> javadoc is built Reader and Writer link to java.io.Reader and java.io.Writer, 
> respectively. However, they should instead refer to {@link 
> SequenceFile.Reader} and {@link SequenceFile.Writer}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10641) Introduce Coordination Engine interface

2014-08-19 Thread Alex Newman (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10641?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102447#comment-14102447
 ] 

Alex Newman commented on HADOOP-10641:
--

[~mparkin] this looks really neat. 

[~ste...@apache.org] I am curious if that helps at all? 

> Introduce Coordination Engine interface
> ---
>
> Key: HADOOP-10641
> URL: https://issues.apache.org/jira/browse/HADOOP-10641
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Konstantin Shvachko
>Assignee: Plamen Jeliazkov
> Attachments: HADOOP-10641.patch, HADOOP-10641.patch, 
> HADOOP-10641.patch, HADOOP-10641.patch, ce-tla.zip, hadoop-coordination.patch
>
>
> Coordination Engine (CE) is a system, which allows to agree on a sequence of 
> events in a distributed system. In order to be reliable CE should be 
> distributed by itself.
> Coordination Engine can be based on different algorithms (paxos, raft, 2PC, 
> zab) and have different implementations, depending on use cases, reliability, 
> availability, and performance requirements.
> CE should have a common API, so that it could serve as a pluggable component 
> in different projects. The immediate beneficiaries are HDFS (HDFS-6469) and 
> HBase (HBASE-10909).
> First implementation is proposed to be based on ZooKeeper.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102448#comment-14102448
 ] 

Hudson commented on HADOOP-9902:


FAILURE: Integrated in Hadoop-trunk-Commit #6087 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6087/])
HADOOP-9902. Shell script rewrite (aw) (aw: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618847)
* 
/hadoop/common/trunk/hadoop-assemblies/src/main/resources/assemblies/hadoop-dist.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemons.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-layout.sh.example
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/rcc
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/slaves.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/start-all.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/stop-all.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/distribute-exclude.sh
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs-config.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/refresh-namenodes.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-balancer.sh
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-secure-dns.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/stop-balancer.sh
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/stop-dfs.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/stop-secure-dns.sh
* /hadoop/common/trunk/hadoop-mapreduce-project/bin/mapred
* /hadoop/common/trunk/hadoop-mapreduce-project/bin/mapred-config.sh
* /hadoop/common/trunk/hadoop-mapreduce-project/bin/mr-jobhistory-daemon.sh
* /hadoop/common/trunk/hadoop-mapreduce-project/conf/mapred-env.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/slaves.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/start-yarn.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/stop-yarn.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn-config.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn-daemon.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn-daemons.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/conf/yarn-env.sh


> Shell script rewrite
> 
>
> Key: HADOOP-9902
> URL: https://issues.apache.org/jira/browse/HADOOP-9902
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: releasenotes
> Fix For: 3.0.0
>
> Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
> HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
> HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-16.patch, 
> HADOOP-9902-2.patch, HADOOP-9902-3.patch, HADOOP-9902-4.patch, 
> HADOOP-9902-5.patch, HADOOP-9902-6.patch, HADOOP-9902-7.patch, 
> HADOOP-9902-8.patch, HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, 
> hadoop-9902-1.patch, more-info.txt
>
>
> Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10977) Periodically dump RPC metrics to logs

2014-08-19 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102494#comment-14102494
 ] 

Esteban Gutierrez commented on HADOOP-10977:


[~arpitagarwal] Ops usually use curl or run an agent, such as tcollector, that 
collects those metrics. Are you thinking of something else?

> Periodically dump RPC metrics to logs
> -
>
> Key: HADOOP-10977
> URL: https://issues.apache.org/jira/browse/HADOOP-10977
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.5.0
>Reporter: Arpit Agarwal
>
> It would be useful to periodically dump RPC/other metrics to a log file. We 
> could use a separate async log stream to avoid contending with logging on hot 
> paths.
> Placeholder Jira, this needs more thought.
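
Since this is a placeholder, purely as a sketch of the idea (the class name, 
logger name, and interval below are assumptions, and the snapshot source is 
left abstract), a scheduled task could write periodic snapshots to a dedicated 
logger that is configured with an asynchronous appender:

{code}
// Sketch only -- no design is agreed on HADOOP-10977; names are illustrative.
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MetricsLogDumper {
  // Dedicated logger; in practice it would be bound to an async appender so
  // that dumping never blocks RPC hot paths.
  private static final Logger METRICS_LOG = LoggerFactory.getLogger("MetricsDump");

  private final ScheduledExecutorService scheduler =
      Executors.newSingleThreadScheduledExecutor();

  /** Periodically snapshots metrics (name -> value) and writes them to the log. */
  public void start(Supplier<Map<String, Long>> snapshot, long periodSeconds) {
    scheduler.scheduleAtFixedRate(
        () -> METRICS_LOG.info("RPC metrics snapshot: {}", snapshot.get()),
        periodSeconds, periodSeconds, TimeUnit.SECONDS);
  }

  public void stop() {
    scheduler.shutdownNow();
  }
}
{code}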



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10362) Closing of Reader in HadoopArchives#HArchiveInputFormat#getSplits() should check against null

2014-08-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102510#comment-14102510
 ] 

Hadoop QA commented on HADOOP-10362:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12662729/HADOOP-10362.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-archives.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4505//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4505//console

This message is automatically generated.

> Closing of Reader in HadoopArchives#HArchiveInputFormat#getSplits() should 
> check against null
> -
>
> Key: HADOOP-10362
> URL: https://issues.apache.org/jira/browse/HADOOP-10362
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
> Attachments: HADOOP-10362.patch
>
>
> {code}
>   try {
> reader = new SequenceFile.Reader(fs, src, jconf);
> ...
>   finally {
> reader.close();
>   }
> {code}
> If Reader ctor throws exception, the close() method would be called on null 
> object.
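
For illustration (variable names are taken from the snippet above; this is not 
the actual HadoopArchives code), the null check would look like:

{code}
SequenceFile.Reader reader = null;
try {
  reader = new SequenceFile.Reader(fs, src, jconf);
  // ... build the splits from the archive index ...
} finally {
  if (reader != null) {   // the constructor may have thrown before assignment
    reader.close();
  }
}
{code}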



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10968) hadoop common fails to detect java_libarch on ppc64le

2014-08-19 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102570#comment-14102570
 ] 

Colin Patrick McCabe commented on HADOOP-10968:
---

P.S. The test failures are unrelated to this change to the native code.

> hadoop common fails to detect java_libarch on ppc64le
> -
>
> Key: HADOOP-10968
> URL: https://issues.apache.org/jira/browse/HADOOP-10968
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.3.0
>Reporter: Dinar Valeev
> Fix For: 0.23.2
>
> Attachments: 0001-Set-java_libarch-for-ppc64le-2.patch
>
>
> [INFO] 
> [INFO] --- maven-antrun-plugin:1.7:run (make) @ hadoop-common ---
> [INFO] Executing tasks
> main:
>  [exec] -- The C compiler identification is GNU 4.8.3
>  [exec] -- The CXX compiler identification is GNU 4.8.3
>  [exec] -- Check for working C compiler: /usr/bin/cc
>  [exec] -- Check for working C compiler: /usr/bin/cc -- works
>  [exec] -- Detecting C compiler ABI info
>  [exec] -- Detecting C compiler ABI info - done
>  [exec] -- Check for working CXX compiler: /usr/bin/c++
>  [exec] -- Check for working CXX compiler: /usr/bin/c++ -- works
>  [exec] JAVA_HOME=, JAVA_JVM_LIBRARY=JAVA_JVM_LIBRARY-NOTFOUND
>  [exec] 
> JAVA_INCLUDE_PATH=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include, 
> JAVA_INCLUDE_PATH2=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include/linux
>  [exec] CMake Error at JNIFlags.cmake:114 (MESSAGE):
>  [exec]   Failed to find a viable JVM installation under JAVA_HOME.
>  [exec] Call Stack (most recent call first):
>  [exec]   CMakeLists.txt:24 (include)
>  [exec] 
>  [exec] 
>  [exec] -- Detecting CXX compiler ABI info
>  [exec] -- Detecting CXX compiler ABI info - done
>  [exec] -- Configuring incomplete, errors occurred!
>  [exec] See also 
> "/root/bigtop/build/hadoop/rpm/BUILD/hadoop-2.3.0-src/hadoop-common-project/hadoop-common/target/native/CMakeFiles/CMakeOutput.log".
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] Apache Hadoop Main . SUCCESS [ 10.680 
> s]
> [INFO] Apache Hadoop Project POM .. SUCCESS [  0.716 
> s]
> [INFO] Apache Hadoop Annotations .. SUCCESS [  3.270 
> s]
> [INFO] Apache Hadoop Assemblies ... SUCCESS [  0.274 
> s]
> [INFO] Apache Hadoop Project Dist POM . SUCCESS [  1.819 
> s]
> [INFO] Apache Hadoop Maven Plugins  SUCCESS [  3.284 
> s]
> [INFO] Apache Hadoop MiniKDC .. SUCCESS [  2.863 
> s]
> [INFO] Apache Hadoop Auth . SUCCESS [  4.032 
> s]
> [INFO] Apache Hadoop Auth Examples  SUCCESS [  2.475 
> s]
> [INFO] Apache Hadoop Common ... FAILURE [ 10.458 
> s]
> [INFO] Apache Hadoop NFS .. SKIPPED
> [INFO] Apache Hadoop Common Project ... SKIPPED
> [INFO] Apache Hadoop HDFS . SKIPPED
> [INFO] Apache Hadoop HttpFS ... SKIPPED
> [INFO] Apache Hadoop HDFS BookKeeper Journal .. SKIPPED
> [INFO] Apache Hadoop HDFS-NFS . SKIPPED
> [INFO] Apache Hadoop HDFS Project . SKIPPED
> [INFO] hadoop-yarn  SKIPPED
> [INFO] hadoop-yarn-api  SKIPPED
> [INFO] hadoop-yarn-common . SKIPPED
> [INFO] hadoop-yarn-server . SKIPPED
> [INFO] hadoop-yarn-server-common .. SKIPPED
> [INFO] hadoop-yarn-server-nodemanager . SKIPPED
> [INFO] hadoop-yarn-server-web-proxy ... SKIPPED
> [INFO] hadoop-yarn-server-resourcemanager . SKIPPED
> [INFO] hadoop-yarn-server-tests ... SKIPPED
> [INFO] hadoop-yarn-client . SKIPPED
> [INFO] hadoop-yarn-applications ... SKIPPED
> [INFO] hadoop-yarn-applications-distributedshell .. SKIPPED
> [INFO] hadoop-yarn-applications-unmanaged-am-launcher . SKIPPED
> [INFO] hadoop-yarn-site ... SKIPPED
> [INFO] hadoop-yarn-project  SKIPPED
> [INFO] hadoop-mapreduce-client  SKIPPED
> [INFO] hadoop-mapreduce-client-core ... SKIPPED
> [INFO] hadoop-mapreduce-client-common . SKIPPED
> [INFO] hadoop-mapreduce-client-shuffle .

[jira] [Commented] (HADOOP-10968) hadoop common fails to detect java_libarch on ppc64le

2014-08-19 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102567#comment-14102567
 ] 

Colin Patrick McCabe commented on HADOOP-10968:
---

Thanks, Dinar.  +1.  Will commit to 2.6

> hadoop common fails to detect java_libarch on ppc64le
> -
>
> Key: HADOOP-10968
> URL: https://issues.apache.org/jira/browse/HADOOP-10968
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.3.0
>Reporter: Dinar Valeev
> Fix For: 0.23.2
>
> Attachments: 0001-Set-java_libarch-for-ppc64le-2.patch
>
>
> [INFO] 
> [INFO] --- maven-antrun-plugin:1.7:run (make) @ hadoop-common ---
> [INFO] Executing tasks
> main:
>  [exec] -- The C compiler identification is GNU 4.8.3
>  [exec] -- The CXX compiler identification is GNU 4.8.3
>  [exec] -- Check for working C compiler: /usr/bin/cc
>  [exec] -- Check for working C compiler: /usr/bin/cc -- works
>  [exec] -- Detecting C compiler ABI info
>  [exec] -- Detecting C compiler ABI info - done
>  [exec] -- Check for working CXX compiler: /usr/bin/c++
>  [exec] -- Check for working CXX compiler: /usr/bin/c++ -- works
>  [exec] JAVA_HOME=, JAVA_JVM_LIBRARY=JAVA_JVM_LIBRARY-NOTFOUND
>  [exec] 
> JAVA_INCLUDE_PATH=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include, 
> JAVA_INCLUDE_PATH2=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include/linux
>  [exec] CMake Error at JNIFlags.cmake:114 (MESSAGE):
>  [exec]   Failed to find a viable JVM installation under JAVA_HOME.
>  [exec] Call Stack (most recent call first):
>  [exec]   CMakeLists.txt:24 (include)
>  [exec] 
>  [exec] 
>  [exec] -- Detecting CXX compiler ABI info
>  [exec] -- Detecting CXX compiler ABI info - done
>  [exec] -- Configuring incomplete, errors occurred!
>  [exec] See also 
> "/root/bigtop/build/hadoop/rpm/BUILD/hadoop-2.3.0-src/hadoop-common-project/hadoop-common/target/native/CMakeFiles/CMakeOutput.log".
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] Apache Hadoop Main . SUCCESS [ 10.680 
> s]
> [INFO] Apache Hadoop Project POM .. SUCCESS [  0.716 
> s]
> [INFO] Apache Hadoop Annotations .. SUCCESS [  3.270 
> s]
> [INFO] Apache Hadoop Assemblies ... SUCCESS [  0.274 
> s]
> [INFO] Apache Hadoop Project Dist POM . SUCCESS [  1.819 
> s]
> [INFO] Apache Hadoop Maven Plugins  SUCCESS [  3.284 
> s]
> [INFO] Apache Hadoop MiniKDC .. SUCCESS [  2.863 
> s]
> [INFO] Apache Hadoop Auth . SUCCESS [  4.032 
> s]
> [INFO] Apache Hadoop Auth Examples  SUCCESS [  2.475 
> s]
> [INFO] Apache Hadoop Common ... FAILURE [ 10.458 
> s]
> [INFO] Apache Hadoop NFS .. SKIPPED
> [INFO] Apache Hadoop Common Project ... SKIPPED
> [INFO] Apache Hadoop HDFS . SKIPPED
> [INFO] Apache Hadoop HttpFS ... SKIPPED
> [INFO] Apache Hadoop HDFS BookKeeper Journal .. SKIPPED
> [INFO] Apache Hadoop HDFS-NFS . SKIPPED
> [INFO] Apache Hadoop HDFS Project . SKIPPED
> [INFO] hadoop-yarn  SKIPPED
> [INFO] hadoop-yarn-api  SKIPPED
> [INFO] hadoop-yarn-common . SKIPPED
> [INFO] hadoop-yarn-server . SKIPPED
> [INFO] hadoop-yarn-server-common .. SKIPPED
> [INFO] hadoop-yarn-server-nodemanager . SKIPPED
> [INFO] hadoop-yarn-server-web-proxy ... SKIPPED
> [INFO] hadoop-yarn-server-resourcemanager . SKIPPED
> [INFO] hadoop-yarn-server-tests ... SKIPPED
> [INFO] hadoop-yarn-client . SKIPPED
> [INFO] hadoop-yarn-applications ... SKIPPED
> [INFO] hadoop-yarn-applications-distributedshell .. SKIPPED
> [INFO] hadoop-yarn-applications-unmanaged-am-launcher . SKIPPED
> [INFO] hadoop-yarn-site ... SKIPPED
> [INFO] hadoop-yarn-project  SKIPPED
> [INFO] hadoop-mapreduce-client  SKIPPED
> [INFO] hadoop-mapreduce-client-core ... SKIPPED
> [INFO] hadoop-mapreduce-client-common . SKIPPED
> [INFO] hadoop-mapreduce-client-shuffle  SKIPPED
> [INFO

[jira] [Updated] (HADOOP-10968) hadoop native build fails to detect java_libarch on ppc64le

2014-08-19 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-10968:
--

Summary: hadoop native build fails to detect java_libarch on ppc64le  (was: 
hadoop common native build fails to detect java_libarch on ppc64le)

> hadoop native build fails to detect java_libarch on ppc64le
> ---
>
> Key: HADOOP-10968
> URL: https://issues.apache.org/jira/browse/HADOOP-10968
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.3.0
>Reporter: Dinar Valeev
> Fix For: 0.23.2
>
> Attachments: 0001-Set-java_libarch-for-ppc64le-2.patch
>
>
> [INFO] 
> [INFO] --- maven-antrun-plugin:1.7:run (make) @ hadoop-common ---
> [INFO] Executing tasks
> main:
>  [exec] -- The C compiler identification is GNU 4.8.3
>  [exec] -- The CXX compiler identification is GNU 4.8.3
>  [exec] -- Check for working C compiler: /usr/bin/cc
>  [exec] -- Check for working C compiler: /usr/bin/cc -- works
>  [exec] -- Detecting C compiler ABI info
>  [exec] -- Detecting C compiler ABI info - done
>  [exec] -- Check for working CXX compiler: /usr/bin/c++
>  [exec] -- Check for working CXX compiler: /usr/bin/c++ -- works
>  [exec] JAVA_HOME=, JAVA_JVM_LIBRARY=JAVA_JVM_LIBRARY-NOTFOUND
>  [exec] 
> JAVA_INCLUDE_PATH=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include, 
> JAVA_INCLUDE_PATH2=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include/linux
>  [exec] CMake Error at JNIFlags.cmake:114 (MESSAGE):
>  [exec]   Failed to find a viable JVM installation under JAVA_HOME.
>  [exec] Call Stack (most recent call first):
>  [exec]   CMakeLists.txt:24 (include)
>  [exec] 
>  [exec] 
>  [exec] -- Detecting CXX compiler ABI info
>  [exec] -- Detecting CXX compiler ABI info - done
>  [exec] -- Configuring incomplete, errors occurred!
>  [exec] See also 
> "/root/bigtop/build/hadoop/rpm/BUILD/hadoop-2.3.0-src/hadoop-common-project/hadoop-common/target/native/CMakeFiles/CMakeOutput.log".
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] Apache Hadoop Main . SUCCESS [ 10.680 
> s]
> [INFO] Apache Hadoop Project POM .. SUCCESS [  0.716 
> s]
> [INFO] Apache Hadoop Annotations .. SUCCESS [  3.270 
> s]
> [INFO] Apache Hadoop Assemblies ... SUCCESS [  0.274 
> s]
> [INFO] Apache Hadoop Project Dist POM . SUCCESS [  1.819 
> s]
> [INFO] Apache Hadoop Maven Plugins  SUCCESS [  3.284 
> s]
> [INFO] Apache Hadoop MiniKDC .. SUCCESS [  2.863 
> s]
> [INFO] Apache Hadoop Auth . SUCCESS [  4.032 
> s]
> [INFO] Apache Hadoop Auth Examples  SUCCESS [  2.475 
> s]
> [INFO] Apache Hadoop Common ... FAILURE [ 10.458 
> s]
> [INFO] Apache Hadoop NFS .. SKIPPED
> [INFO] Apache Hadoop Common Project ... SKIPPED
> [INFO] Apache Hadoop HDFS . SKIPPED
> [INFO] Apache Hadoop HttpFS ... SKIPPED
> [INFO] Apache Hadoop HDFS BookKeeper Journal .. SKIPPED
> [INFO] Apache Hadoop HDFS-NFS . SKIPPED
> [INFO] Apache Hadoop HDFS Project . SKIPPED
> [INFO] hadoop-yarn  SKIPPED
> [INFO] hadoop-yarn-api  SKIPPED
> [INFO] hadoop-yarn-common . SKIPPED
> [INFO] hadoop-yarn-server . SKIPPED
> [INFO] hadoop-yarn-server-common .. SKIPPED
> [INFO] hadoop-yarn-server-nodemanager . SKIPPED
> [INFO] hadoop-yarn-server-web-proxy ... SKIPPED
> [INFO] hadoop-yarn-server-resourcemanager . SKIPPED
> [INFO] hadoop-yarn-server-tests ... SKIPPED
> [INFO] hadoop-yarn-client . SKIPPED
> [INFO] hadoop-yarn-applications ... SKIPPED
> [INFO] hadoop-yarn-applications-distributedshell .. SKIPPED
> [INFO] hadoop-yarn-applications-unmanaged-am-launcher . SKIPPED
> [INFO] hadoop-yarn-site ... SKIPPED
> [INFO] hadoop-yarn-project  SKIPPED
> [INFO] hadoop-mapreduce-client  SKIPPED
> [INFO] hadoop-mapreduce-client-core ... SKIPPED
> [INFO] hadoop-mapreduce-client-common . SKIPPED
> [INFO] hado

[jira] [Updated] (HADOOP-10968) hadoop common native build fails to detect java_libarch on ppc64le

2014-08-19 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-10968:
--

Summary: hadoop common native build fails to detect java_libarch on ppc64le 
 (was: hadoop common fails to detect java_libarch on ppc64le)

> hadoop common native build fails to detect java_libarch on ppc64le
> --
>
> Key: HADOOP-10968
> URL: https://issues.apache.org/jira/browse/HADOOP-10968
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.3.0
>Reporter: Dinar Valeev
> Fix For: 0.23.2
>
> Attachments: 0001-Set-java_libarch-for-ppc64le-2.patch
>
>
> [INFO] 
> [INFO] --- maven-antrun-plugin:1.7:run (make) @ hadoop-common ---
> [INFO] Executing tasks
> main:
>  [exec] -- The C compiler identification is GNU 4.8.3
>  [exec] -- The CXX compiler identification is GNU 4.8.3
>  [exec] -- Check for working C compiler: /usr/bin/cc
>  [exec] -- Check for working C compiler: /usr/bin/cc -- works
>  [exec] -- Detecting C compiler ABI info
>  [exec] -- Detecting C compiler ABI info - done
>  [exec] -- Check for working CXX compiler: /usr/bin/c++
>  [exec] -- Check for working CXX compiler: /usr/bin/c++ -- works
>  [exec] JAVA_HOME=, JAVA_JVM_LIBRARY=JAVA_JVM_LIBRARY-NOTFOUND
>  [exec] 
> JAVA_INCLUDE_PATH=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include, 
> JAVA_INCLUDE_PATH2=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include/linux
>  [exec] CMake Error at JNIFlags.cmake:114 (MESSAGE):
>  [exec]   Failed to find a viable JVM installation under JAVA_HOME.
>  [exec] Call Stack (most recent call first):
>  [exec]   CMakeLists.txt:24 (include)
>  [exec] 
>  [exec] 
>  [exec] -- Detecting CXX compiler ABI info
>  [exec] -- Detecting CXX compiler ABI info - done
>  [exec] -- Configuring incomplete, errors occurred!
>  [exec] See also 
> "/root/bigtop/build/hadoop/rpm/BUILD/hadoop-2.3.0-src/hadoop-common-project/hadoop-common/target/native/CMakeFiles/CMakeOutput.log".
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] Apache Hadoop Main . SUCCESS [ 10.680 
> s]
> [INFO] Apache Hadoop Project POM .. SUCCESS [  0.716 
> s]
> [INFO] Apache Hadoop Annotations .. SUCCESS [  3.270 
> s]
> [INFO] Apache Hadoop Assemblies ... SUCCESS [  0.274 
> s]
> [INFO] Apache Hadoop Project Dist POM . SUCCESS [  1.819 
> s]
> [INFO] Apache Hadoop Maven Plugins  SUCCESS [  3.284 
> s]
> [INFO] Apache Hadoop MiniKDC .. SUCCESS [  2.863 
> s]
> [INFO] Apache Hadoop Auth . SUCCESS [  4.032 
> s]
> [INFO] Apache Hadoop Auth Examples  SUCCESS [  2.475 
> s]
> [INFO] Apache Hadoop Common ... FAILURE [ 10.458 
> s]
> [INFO] Apache Hadoop NFS .. SKIPPED
> [INFO] Apache Hadoop Common Project ... SKIPPED
> [INFO] Apache Hadoop HDFS . SKIPPED
> [INFO] Apache Hadoop HttpFS ... SKIPPED
> [INFO] Apache Hadoop HDFS BookKeeper Journal .. SKIPPED
> [INFO] Apache Hadoop HDFS-NFS . SKIPPED
> [INFO] Apache Hadoop HDFS Project . SKIPPED
> [INFO] hadoop-yarn  SKIPPED
> [INFO] hadoop-yarn-api  SKIPPED
> [INFO] hadoop-yarn-common . SKIPPED
> [INFO] hadoop-yarn-server . SKIPPED
> [INFO] hadoop-yarn-server-common .. SKIPPED
> [INFO] hadoop-yarn-server-nodemanager . SKIPPED
> [INFO] hadoop-yarn-server-web-proxy ... SKIPPED
> [INFO] hadoop-yarn-server-resourcemanager . SKIPPED
> [INFO] hadoop-yarn-server-tests ... SKIPPED
> [INFO] hadoop-yarn-client . SKIPPED
> [INFO] hadoop-yarn-applications ... SKIPPED
> [INFO] hadoop-yarn-applications-distributedshell .. SKIPPED
> [INFO] hadoop-yarn-applications-unmanaged-am-launcher . SKIPPED
> [INFO] hadoop-yarn-site ... SKIPPED
> [INFO] hadoop-yarn-project  SKIPPED
> [INFO] hadoop-mapreduce-client  SKIPPED
> [INFO] hadoop-mapreduce-client-core ... SKIPPED
> [INFO] hadoop-mapreduce-client-common . SKIPPED
> [IN

[jira] [Commented] (HADOOP-10363) Closing of SequenceFile.Reader / SequenceFile.Writer in DistCh should check against null

2014-08-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102572#comment-14102572
 ] 

Hadoop QA commented on HADOOP-10363:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12662725/HADOOP-10363.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-extras.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4506//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4506//console

This message is automatically generated.

> Closing of SequenceFile.Reader / SequenceFile.Writer in DistCh should check 
> against null
> 
>
> Key: HADOOP-10363
> URL: https://issues.apache.org/jira/browse/HADOOP-10363
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ted Yu
>Priority: Minor
> Attachments: HADOOP-10363.patch
>
>
> Here is related code:
> {code}
>   try {
> for(in = new SequenceFile.Reader(fs, srcs, job); in.next(key, value); 
> ) {
> ...
>   finally {
> in.close();
>   }
> {code}
> {code}
> SequenceFile.Writer opWriter = null;
> try {
>   opWriter = SequenceFile.createWriter(fs, jobconf, opList, Text.class,
>   FileOperation.class, SequenceFile.CompressionType.NONE);
> ...
> } finally {
>   opWriter.close();
> }
> {code}
> If ctor of Reader / Writer throws exception, the close() would be called on 
> null object.
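
One possible shape of the fix, sketched here with illustrative names (not the 
actual DistCh patch): org.apache.hadoop.io.IOUtils.cleanup closes each non-null 
Closeable and logs rather than rethrows, so a Reader or Writer whose 
constructor failed is simply skipped.

{code}
// Sketch only, assuming IOUtils.cleanup(Log, Closeable...) from hadoop-common.
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class NullSafeCloseSketch {
  private static final Log LOG = LogFactory.getLog(NullSafeCloseSketch.class);

  static void copy(FileSystem fs, Path src, Path dst, Configuration conf)
      throws Exception {
    SequenceFile.Reader in = null;
    SequenceFile.Writer out = null;
    try {
      in = new SequenceFile.Reader(fs, src, conf);
      out = SequenceFile.createWriter(fs, conf, dst, Text.class, Text.class,
          SequenceFile.CompressionType.NONE);
      // ... iterate over the input records and write them to the output ...
    } finally {
      IOUtils.cleanup(LOG, in, out);  // null-safe close of both streams
    }
  }
}
{code}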



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10968) hadoop native build fails to detect java_libarch on ppc64le

2014-08-19 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-10968:
--

    Resolution: Fixed
 Fix Version/s: (was: 0.23.2)
                2.6.0
        Status: Resolved  (was: Patch Available)

> hadoop native build fails to detect java_libarch on ppc64le
> ---
>
> Key: HADOOP-10968
> URL: https://issues.apache.org/jira/browse/HADOOP-10968
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.3.0
>Reporter: Dinar Valeev
> Fix For: 2.6.0
>
> Attachments: 0001-Set-java_libarch-for-ppc64le-2.patch
>
>
> [INFO] 
> [INFO] --- maven-antrun-plugin:1.7:run (make) @ hadoop-common ---
> [INFO] Executing tasks
> main:
>  [exec] -- The C compiler identification is GNU 4.8.3
>  [exec] -- The CXX compiler identification is GNU 4.8.3
>  [exec] -- Check for working C compiler: /usr/bin/cc
>  [exec] -- Check for working C compiler: /usr/bin/cc -- works
>  [exec] -- Detecting C compiler ABI info
>  [exec] -- Detecting C compiler ABI info - done
>  [exec] -- Check for working CXX compiler: /usr/bin/c++
>  [exec] -- Check for working CXX compiler: /usr/bin/c++ -- works
>  [exec] JAVA_HOME=, JAVA_JVM_LIBRARY=JAVA_JVM_LIBRARY-NOTFOUND
>  [exec] 
> JAVA_INCLUDE_PATH=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include, 
> JAVA_INCLUDE_PATH2=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include/linux
>  [exec] CMake Error at JNIFlags.cmake:114 (MESSAGE):
>  [exec]   Failed to find a viable JVM installation under JAVA_HOME.
>  [exec] Call Stack (most recent call first):
>  [exec]   CMakeLists.txt:24 (include)
>  [exec] 
>  [exec] 
>  [exec] -- Detecting CXX compiler ABI info
>  [exec] -- Detecting CXX compiler ABI info - done
>  [exec] -- Configuring incomplete, errors occurred!
>  [exec] See also 
> "/root/bigtop/build/hadoop/rpm/BUILD/hadoop-2.3.0-src/hadoop-common-project/hadoop-common/target/native/CMakeFiles/CMakeOutput.log".
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] Apache Hadoop Main . SUCCESS [ 10.680 
> s]
> [INFO] Apache Hadoop Project POM .. SUCCESS [  0.716 
> s]
> [INFO] Apache Hadoop Annotations .. SUCCESS [  3.270 
> s]
> [INFO] Apache Hadoop Assemblies ... SUCCESS [  0.274 
> s]
> [INFO] Apache Hadoop Project Dist POM . SUCCESS [  1.819 
> s]
> [INFO] Apache Hadoop Maven Plugins  SUCCESS [  3.284 
> s]
> [INFO] Apache Hadoop MiniKDC .. SUCCESS [  2.863 
> s]
> [INFO] Apache Hadoop Auth . SUCCESS [  4.032 
> s]
> [INFO] Apache Hadoop Auth Examples  SUCCESS [  2.475 
> s]
> [INFO] Apache Hadoop Common ... FAILURE [ 10.458 
> s]
> [INFO] Apache Hadoop NFS .. SKIPPED
> [INFO] Apache Hadoop Common Project ... SKIPPED
> [INFO] Apache Hadoop HDFS . SKIPPED
> [INFO] Apache Hadoop HttpFS ... SKIPPED
> [INFO] Apache Hadoop HDFS BookKeeper Journal .. SKIPPED
> [INFO] Apache Hadoop HDFS-NFS . SKIPPED
> [INFO] Apache Hadoop HDFS Project . SKIPPED
> [INFO] hadoop-yarn  SKIPPED
> [INFO] hadoop-yarn-api  SKIPPED
> [INFO] hadoop-yarn-common . SKIPPED
> [INFO] hadoop-yarn-server . SKIPPED
> [INFO] hadoop-yarn-server-common .. SKIPPED
> [INFO] hadoop-yarn-server-nodemanager . SKIPPED
> [INFO] hadoop-yarn-server-web-proxy ... SKIPPED
> [INFO] hadoop-yarn-server-resourcemanager . SKIPPED
> [INFO] hadoop-yarn-server-tests ... SKIPPED
> [INFO] hadoop-yarn-client . SKIPPED
> [INFO] hadoop-yarn-applications ... SKIPPED
> [INFO] hadoop-yarn-applications-distributedshell .. SKIPPED
> [INFO] hadoop-yarn-applications-unmanaged-am-launcher . SKIPPED
> [INFO] hadoop-yarn-site ... SKIPPED
> [INFO] hadoop-yarn-project  SKIPPED
> [INFO] hadoop-mapreduce-client  SKIPPED
> [INFO] hadoop-mapreduce-client-core ... SKIPPED
> [INFO] hadoop-mapreduce-client-common . SKIPPED
> [INFO] hadoop-mapreduc

[jira] [Commented] (HADOOP-10977) Periodically dump RPC metrics to logs

2014-08-19 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102636#comment-14102636
 ] 

Arpit Agarwal commented on HADOOP-10977:


Yes, ideally ops will have that information; however, we have seen instances 
where they don't, or where they don't keep sufficient metrics history.

It would be useful to have metrics history available along with the logs.

> Periodically dump RPC metrics to logs
> -
>
> Key: HADOOP-10977
> URL: https://issues.apache.org/jira/browse/HADOOP-10977
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.5.0
>Reporter: Arpit Agarwal
>
> It would be useful to periodically dump RPC/other metrics to a log file. We 
> could use a separate async log stream to avoid contending with logging on hot 
> paths.
> Placeholder Jira, this needs more thought.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10059) RPC authentication and authorization metrics overflow to negative values on busy clusters

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102640#comment-14102640
 ] 

Hudson commented on HADOOP-10059:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1842 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1842/])
HADOOP-10059. RPC authentication and authorization metrics overflow to negative 
values on busy clusters. Contributed by Tsuyoshi OZAWA and Akira AJISAKA 
(jlowe: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618659)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java


> RPC authentication and authorization metrics overflow to negative values on 
> busy clusters
> -
>
> Key: HADOOP-10059
> URL: https://issues.apache.org/jira/browse/HADOOP-10059
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.23.9, 2.2.0
>Reporter: Jason Lowe
>Assignee: Tsuyoshi OZAWA
>Priority: Minor
> Fix For: 3.0.0, 2.6.0
>
> Attachments: HADOOP-10059.1.patch, HADOOP-10059.2.patch
>
>
> The RPC metrics for authorization and authentication successes can easily 
> overflow to negative values on a busy cluster that has been up for a long 
> time.  We should consider providing 64-bit values for these counters.
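
A standalone illustration of the failure mode (no Hadoop dependencies; this is 
not the committed change itself): an int counter wraps to a negative value 
after Integer.MAX_VALUE increments, whereas a 64-bit counter does not for any 
realistic uptime.

{code}
public class CounterOverflowDemo {
  public static void main(String[] args) {
    int intCounter = Integer.MAX_VALUE;
    long longCounter = Integer.MAX_VALUE;
    intCounter++;   // wraps around
    longCounter++;  // still positive
    System.out.println("int counter after one more increment:  " + intCounter);  // -2147483648
    System.out.println("long counter after one more increment: " + longCounter); // 2147483648
  }
}
{code}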



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10973) Native Libraries Guide contains format error

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102642#comment-14102642
 ] 

Hudson commented on HADOOP-10973:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1842 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1842/])
HADOOP-10973. Native Libraries Guide contains format error. (Contributed by 
Peter Klavins) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618682)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm


> Native Libraries Guide contains format error
> 
>
> Key: HADOOP-10973
> URL: https://issues.apache.org/jira/browse/HADOOP-10973
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Peter Klavins
>Assignee: Peter Klavins
>Priority: Minor
>  Labels: apt, documentation, xdocs
> Fix For: 3.0.0, 2.6.0
>
> Attachments: HADOOP-10973.patch
>
>
> The move from xdocs to APT introduced a formatting bug so that the sub-list 
> under Usage point 4 was merged into the text itself and no longer appeared as 
> a sub-list. Compare xdocs version 
> http://hadoop.apache.org/docs/r1.2.1/native_libraries.html#Native+Hadoop+Library
>  to APT version 
> http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/NativeLibraries.html#Native_Hadoop_Library.
> The patch is to trunk, but is also valid for released versions 0.23.11, 
> 2.2.0, 2.3.0, 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if 
> deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10975) org.apache.hadoop.util.DataChecksum should support native checksum calculation

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102643#comment-14102643
 ] 

Hudson commented on HADOOP-10975:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1842 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1842/])
Update CHANGES.txt for HDFS-6561 which was renamed to HADOOP-10975. (todd: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618686)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> org.apache.hadoop.util.DataChecksum should support native checksum calculation
> --
>
> Key: HADOOP-10975
> URL: https://issues.apache.org/jira/browse/HADOOP-10975
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Reporter: James Thomas
>Assignee: James Thomas
> Fix For: 2.6.0
>
> Attachments: HDFS-6561.2.patch, HDFS-6561.3.patch, HDFS-6561.4.patch, 
> HDFS-6561.5.patch, HDFS-6561.patch, hdfs-6561-just-hadoop-changes.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102651#comment-14102651
 ] 

Hudson commented on HADOOP-9902:


FAILURE: Integrated in Hadoop-Hdfs-trunk #1842 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1842/])
HADOOP-9902. Shell script rewrite (aw) (aw: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618847)
* 
/hadoop/common/trunk/hadoop-assemblies/src/main/resources/assemblies/hadoop-dist.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemons.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-layout.sh.example
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/rcc
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/slaves.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/start-all.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/stop-all.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/distribute-exclude.sh
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs-config.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/refresh-namenodes.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-balancer.sh
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-secure-dns.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/stop-balancer.sh
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/stop-dfs.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/stop-secure-dns.sh
* /hadoop/common/trunk/hadoop-mapreduce-project/bin/mapred
* /hadoop/common/trunk/hadoop-mapreduce-project/bin/mapred-config.sh
* /hadoop/common/trunk/hadoop-mapreduce-project/bin/mr-jobhistory-daemon.sh
* /hadoop/common/trunk/hadoop-mapreduce-project/conf/mapred-env.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/slaves.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/start-yarn.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/stop-yarn.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn-config.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn-daemon.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn-daemons.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/conf/yarn-env.sh


> Shell script rewrite
> 
>
> Key: HADOOP-9902
> URL: https://issues.apache.org/jira/browse/HADOOP-9902
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: releasenotes
> Fix For: 3.0.0
>
> Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
> HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
> HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-16.patch, 
> HADOOP-9902-2.patch, HADOOP-9902-3.patch, HADOOP-9902-4.patch, 
> HADOOP-9902-5.patch, HADOOP-9902-6.patch, HADOOP-9902-7.patch, 
> HADOOP-9902-8.patch, HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, 
> hadoop-9902-1.patch, more-info.txt
>
>
> Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10873) Fix dead links in the API doc

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102645#comment-14102645
 ] 

Hudson commented on HADOOP-10873:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1842 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1842/])
HADOOP-10873. Fix dead link in Configuration javadoc (Akira AJISAKA via aw) 
(aw: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618721)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java


> Fix dead links in the API doc
> -
>
> Key: HADOOP-10873
> URL: https://issues.apache.org/jira/browse/HADOOP-10873
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira AJISAKA
>
> There are a lot of dead links in [Hadoop API 
> doc|http://hadoop.apache.org/docs/r2.4.1/api/]. We should fix them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10975) org.apache.hadoop.util.DataChecksum should support native checksum calculation

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102644#comment-14102644
 ] 

Hudson commented on HADOOP-10975:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1842 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1842/])
Update CHANGES.txt for HDFS-6561 which was renamed to HADOOP-10975. (todd: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618686)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> org.apache.hadoop.util.DataChecksum should support native checksum calculation
> --
>
> Key: HADOOP-10975
> URL: https://issues.apache.org/jira/browse/HADOOP-10975
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Reporter: James Thomas
>Assignee: James Thomas
> Fix For: 2.6.0
>
> Attachments: HDFS-6561.2.patch, HDFS-6561.3.patch, HDFS-6561.4.patch, 
> HDFS-6561.5.patch, HDFS-6561.patch, hdfs-6561-just-hadoop-changes.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10972) Native Libraries Guide contains mis-spelt build line

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102639#comment-14102639
 ] 

Hudson commented on HADOOP-10972:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1842 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1842/])
HADOOP-10972. Native Libraries Guide contains mis-spelt build line (Peter 
Klavins via aw) (aw: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618719)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm


> Native Libraries Guide contains mis-spelt build line
> 
>
> Key: HADOOP-10972
> URL: https://issues.apache.org/jira/browse/HADOOP-10972
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Peter Klavins
>Assignee: Peter Klavins
>  Labels: documentation, newbie
> Fix For: 3.0.0, 2.6.0
>
> Attachments: HADOOP-10972.patch
>
>
> The Native Libraries Guide mis-spells the define 'skipTests' with a lowercase 
> 't' in the build line. The correct build line is:
> {code:none}
> $ mvn package -Pdist,native -DskipTests -Dtar
> {code}
> Patch is to trunk, but is also valid for released versions 2.2.0, 2.3.0, 
> 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10954) Adding site documents of hadoop-tools

2014-08-19 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-10954:
--

Attachment: HADOOP-10954-0.patch

I migrated the GridMix and Rumen documents from Forrest format to Markdown. 
The migrated documents are kept as-is for now; I would like to update any 
outdated parts in follow-up issues.

> Adding site documents of hadoop-tools
> -
>
> Key: HADOOP-10954
> URL: https://issues.apache.org/jira/browse/HADOOP-10954
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.5.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-10954-0.patch
>
>
> There are no pages for hadoop-tools in the site documents of branch-2 or 
> later.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10954) Adding site documents of hadoop-tools

2014-08-19 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-10954:
--

Affects Version/s: (was: 2.4.1)
   2.5.0
   Status: Patch Available  (was: Open)

> Adding site documents of hadoop-tools
> -
>
> Key: HADOOP-10954
> URL: https://issues.apache.org/jira/browse/HADOOP-10954
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.5.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-10954-0.patch
>
>
> There are no pages for hadoop-tools in the site documents of branch-2 or 
> later.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10759) Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh

2014-08-19 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102761#comment-14102761
 ] 

Chris Douglas commented on HADOOP-10759:


A veto is valid even if the code was recently committed. [~eyang], could you 
please revert the change in branch-2 while this is discussed?

> Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh
> --
>
> Key: HADOOP-10759
> URL: https://issues.apache.org/jira/browse/HADOOP-10759
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 2.4.0
> Environment: Linux64
>Reporter: sam liu
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HADOOP-10759.patch, HADOOP-10759.patch
>
>
> In hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh, there 
> is a hard code for Java parameter: 'JAVA_HEAP_MAX=-Xmx1000m'. It should be 
> removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10759) Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh

2014-08-19 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102767#comment-14102767
 ] 

Arpit Gupta commented on HADOOP-10759:
--

[~eyang]

At least with JDK 1.6 we saw ZooKeeper taking up around 4 GB of heap on a 16 GB 
machine, which is why we filed ZOOKEEPER-1670.

> Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh
> --
>
> Key: HADOOP-10759
> URL: https://issues.apache.org/jira/browse/HADOOP-10759
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: bin
>Affects Versions: 2.4.0
> Environment: Linux64
>Reporter: sam liu
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: HADOOP-10759.patch, HADOOP-10759.patch
>
>
> In hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh, there 
> is a hard code for Java parameter: 'JAVA_HEAP_MAX=-Xmx1000m'. It should be 
> removed.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10893) isolated classloader on the client side

2014-08-19 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102842#comment-14102842
 ] 

Jason Lowe commented on HADOOP-10893:
-

The patch no longer applies after HADOOP-9902, and since that only went into 
trunk we'll need a separate patch for branch-2.

I tried to kick the tires on the latest patch but the client classloader never 
activated even though I set HADOOP_USE_CLIENT_CLASSLOADER=true.  That's because 
the following code will always return false:
{code}
  boolean useClientClassLoader() {
return Boolean.getBoolean(System.getenv(HADOOP_USE_CLIENT_CLASSLOADER));
  }
{code}
getBoolean looks up the value of the specified system property, whereas 
parseBoolean tries to parse the given string as a boolean.
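
For illustration, a minimal sketch of the kind of change this points to (not 
the committed fix; it assumes the constant still holds the environment variable 
name, as in the snippet above):
{code}
  // Hypothetical sketch: parse the environment variable's value directly,
  // instead of treating that value as a system property name (which is what
  // Boolean.getBoolean does).
  boolean useClientClassLoader() {
    return Boolean.parseBoolean(System.getenv(HADOOP_USE_CLIENT_CLASSLOADER));
  }
{code}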

Other comments are minor or nits:

In hadoop-config.sh "The system classes are A comma-separated list" s/b "The 
system classes are a comma-separated list".

It would be nice if TestMain, TestSecond, TestThird were a bit less generically 
named since they are for a very specific test, e.g.: ClassLoaderCheckAppMain, 
ClassLoaderCheckAppSecond, ClassLoaderCheckAppThird, etc.  Not a must-fix, 
though; I'm just thinking people may wonder what the names mean when they run 
across them in the source, since TestMain sounds pretty generic.

> isolated classloader on the client side
> ---
>
> Key: HADOOP-10893
> URL: https://issues.apache.org/jira/browse/HADOOP-10893
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 2.4.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-10893.patch, HADOOP-10893.patch, 
> HADOOP-10893.patch, HADOOP-10893.patch, HADOOP-10893.patch, 
> classloader-test.tar.gz
>
>
> We have the job classloader on the mapreduce tasks that run on the cluster. 
> It has a benefit of being able to isolate class space for user code and avoid 
> version clashes.
> Although it occurs less often, version clashes do occur on the client JVM. It 
> would be good to introduce an isolated classloader on the client side as well 
> to address this. A natural point to introduce this may be through RunJar, as 
> that's how most of hadoop jobs are run.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10950) rework heap management vars

2014-08-19 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10950:
--

Hadoop Flags: Incompatible change

> rework  heap management  vars
> -
>
> Key: HADOOP-10950
> URL: https://issues.apache.org/jira/browse/HADOOP-10950
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Allen Wittenauer
>
> Post-HADOOP-9902, we need to rework how heap is configured for small 
> footprint machines, deprecate some options, introduce new ones for greater 
> flexibility.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10893) isolated classloader on the client side

2014-08-19 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102898#comment-14102898
 ] 

Sangjin Lee commented on HADOOP-10893:
--

Thanks Jason. I'll address those issues, and upload a new patch (and add a 
separate patch for branch-2). It was an oversight on my part.

> isolated classloader on the client side
> ---
>
> Key: HADOOP-10893
> URL: https://issues.apache.org/jira/browse/HADOOP-10893
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 2.4.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-10893.patch, HADOOP-10893.patch, 
> HADOOP-10893.patch, HADOOP-10893.patch, HADOOP-10893.patch, 
> classloader-test.tar.gz
>
>
> We have the job classloader on the mapreduce tasks that run on the cluster. 
> It has a benefit of being able to isolate class space for user code and avoid 
> version clashes.
> Although it occurs less often, version clashes do occur on the client JVM. It 
> would be good to introduce an isolated classloader on the client side as well 
> to address this. A natural point to introduce this may be through RunJar, as 
> that's how most of hadoop jobs are run.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10951) Make org.apache.hadoop.security.Groups public

2014-08-19 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah resolved HADOOP-10951.
--

Resolution: Not a Problem

> Make org.apache.hadoop.security.Groups public
> -
>
> Key: HADOOP-10951
> URL: https://issues.apache.org/jira/browse/HADOOP-10951
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Hitesh Shah
>
> This class seems like useful functionality for most developers building yarn 
> applications with respect to application acls.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10365) BufferedOutputStream in FileUtil#unpackEntries() should be closed in finally block

2014-08-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14102949#comment-14102949
 ] 

Hadoop QA commented on HADOOP-10365:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12662721/HADOOP-10365.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl
  org.apache.hadoop.crypto.key.TestValueQueue
  org.apache.hadoop.ipc.TestDecayRpcScheduler
  org.apache.hadoop.ipc.TestCallQueueManager

  The following test timeouts occurred in 
hadoop-common-project/hadoop-common:

org.apache.hadoop.http.TestHttpServerLifecycle

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4508//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4508//console

This message is automatically generated.

> BufferedOutputStream in FileUtil#unpackEntries() should be closed in finally 
> block
> --
>
> Key: HADOOP-10365
> URL: https://issues.apache.org/jira/browse/HADOOP-10365
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Ted Yu
>Priority: Minor
> Attachments: HADOOP-10365.patch
>
>
> {code}
> BufferedOutputStream outputStream = new BufferedOutputStream(
> new FileOutputStream(outputFile));
> ...
> outputStream.flush();
> outputStream.close();
> {code}
> outputStream should be closed in finally block.
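
For reference, a minimal sketch of the pattern the summary asks for (not the 
attached patch):
{code}
BufferedOutputStream outputStream = new BufferedOutputStream(
    new FileOutputStream(outputFile));
try {
  // ... write the unpacked entry bytes ...
  outputStream.flush();
} finally {
  outputStream.close();  // runs even if the write or flush throws
}
{code}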



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-8642) io.native.lib.available only controls zlib

2014-08-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14103003#comment-14103003
 ] 

Hadoop QA commented on HADOOP-8642:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12662733/HADOOP-8642.3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestActiveStandbyElector
  org.apache.hadoop.ha.TestZKFailoverController

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4509//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4509//console

This message is automatically generated.

> io.native.lib.available only controls zlib
> --
>
> Key: HADOOP-8642
> URL: https://issues.apache.org/jira/browse/HADOOP-8642
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: native
>Affects Versions: 2.0.0-alpha
>Reporter: Eli Collins
>Assignee: Akira AJISAKA
> Attachments: HADOOP-8642.2.patch, HADOOP-8642.3.patch, 
> HADOOP-8642.patch
>
>
> Per core-default.xml {{io.native.lib.available}} indicates "Should native 
> hadoop libraries, if present, be used" however it looks like it only affects 
> zlib. Since we always load the native library this means we may use native 
> libraries even if io.native.lib.available is set to false.
> Let's make the flag work as advertised - rather than always loading the 
> native hadoop library, we only attempt to load the library (and report that 
> native is available) if this flag is set. Since io.native.lib.available 
> defaults to true, the default behavior should remain unchanged (except that 
> now we won't actually try to load the library if this flag is disabled).
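
A rough sketch of the requested behaviour (the surrounding loader class and 
field names are hypothetical; only the configuration key comes from 
core-default.xml):
{code}
// Sketch only: gate the native-library load on the flag instead of always loading it.
boolean nativeRequested = conf.getBoolean("io.native.lib.available", true);
boolean nativeLoaded = false;
if (nativeRequested) {
  try {
    System.loadLibrary("hadoop");   // libhadoop.so / hadoop.dll
    nativeLoaded = true;
  } catch (Throwable t) {
    nativeLoaded = false;           // fall back to the pure-Java implementations
  }
}
{code}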



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-08-19 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HADOOP-10911:


Attachment: HADOOP-10911v2.patch

bq. On Max-Age & Expired, I don't think we want to break old browsers. It seems 
to me an HttpClient bug that uses the presence of Expire to go back to the old 
cookie format; the presence of Version=1 should trump. Can you dig on the 
HttpClient side?

This is a bit complicated -- see the discussion here: 
http://mail-archives.apache.org/mod_mbox/hc-httpclient-users/201408.mbox/%3C1406895602.17749.8.camel%40ubuntu%3E
In short, it's not a valid Version=1 cookie, but httpclient would like to be 
able to handle it anyway, see HTTPCLIENT-1546.

I added a patch that does the following:
1) Runs the TestKerberosAuthenticator test cases against Tomcat as well as 
Jetty; this exposes the bug in HADOOP-10379, which didn't get a test added in 
HADOOP-10710.
2) Adds an httpclient test case to TestKerberosAuthenticator.  This does 2 
things:
- Checks that the cookie is actually being processed.  Note that it's possible 
for the existing tests to pass by doing the SPNego negotiation on each request, 
rather than relying on the cookie.  But the entity type we use in the test 
doesn't support repeating, so an exception is raised if the SPNego process 
repeats
- Verifies that httpclient works with our cookie format (probably not strictly 
necessary, but nice to have given httpclient's popularity)

So, I think the test cases are pretty useful for catching regressions.

As for the format itself, I just chose a simple format that passes all the 
tests.  That seems like a reasonable improvement over what we have now, but I'm 
not married to the particular format.

> hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
> ---
>
> Key: HADOOP-10911
> URL: https://issues.apache.org/jira/browse/HADOOP-10911
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Gregory Chanan
> Attachments: HADOOP-10911-tests.patch, HADOOP-10911.patch, 
> HADOOP-10911v2.patch
>
>
> I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
> unable to authenticate with servers running the authentication filter), even 
> with HADOOP-10710 applied.
> From my reading of the spec, the problem is as follows:
> Expires is not a valid directive according to the RFC, though it is mentioned 
> for backwards compatibility with netscape draft spec.  When httpclient sees 
> "Expires", it parses according to the netscape draft spec, but note from 
> RFC2109:
> {code}
> Note that the Expires date format contains embedded spaces, and that "old" 
> cookies did not have quotes around values. 
> {code}
> and note that AuthenticationFilter puts quotes around the value:
> https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
> So httpclient's parsing appears to be kosher.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10968) hadoop native build fails to detect java_libarch on ppc64le

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14103067#comment-14103067
 ] 

Hudson commented on HADOOP-10968:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6088 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6088/])
HADOOP-10968. hadoop native build fails to detect java_libarch on ppc64le 
(Dinar Valeev via Colin Patrick McCabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618919)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/JNIFlags.cmake


> hadoop native build fails to detect java_libarch on ppc64le
> ---
>
> Key: HADOOP-10968
> URL: https://issues.apache.org/jira/browse/HADOOP-10968
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.3.0
>Reporter: Dinar Valeev
> Fix For: 2.6.0
>
> Attachments: 0001-Set-java_libarch-for-ppc64le-2.patch
>
>
> [INFO] 
> [INFO] --- maven-antrun-plugin:1.7:run (make) @ hadoop-common ---
> [INFO] Executing tasks
> main:
>  [exec] -- The C compiler identification is GNU 4.8.3
>  [exec] -- The CXX compiler identification is GNU 4.8.3
>  [exec] -- Check for working C compiler: /usr/bin/cc
>  [exec] -- Check for working C compiler: /usr/bin/cc -- works
>  [exec] -- Detecting C compiler ABI info
>  [exec] -- Detecting C compiler ABI info - done
>  [exec] -- Check for working CXX compiler: /usr/bin/c++
>  [exec] -- Check for working CXX compiler: /usr/bin/c++ -- works
>  [exec] JAVA_HOME=, JAVA_JVM_LIBRARY=JAVA_JVM_LIBRARY-NOTFOUND
>  [exec] 
> JAVA_INCLUDE_PATH=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include, 
> JAVA_INCLUDE_PATH2=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include/linux
>  [exec] CMake Error at JNIFlags.cmake:114 (MESSAGE):
>  [exec]   Failed to find a viable JVM installation under JAVA_HOME.
>  [exec] Call Stack (most recent call first):
>  [exec]   CMakeLists.txt:24 (include)
>  [exec] 
>  [exec] 
>  [exec] -- Detecting CXX compiler ABI info
>  [exec] -- Detecting CXX compiler ABI info - done
>  [exec] -- Configuring incomplete, errors occurred!
>  [exec] See also 
> "/root/bigtop/build/hadoop/rpm/BUILD/hadoop-2.3.0-src/hadoop-common-project/hadoop-common/target/native/CMakeFiles/CMakeOutput.log".
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] Apache Hadoop Main . SUCCESS [ 10.680 
> s]
> [INFO] Apache Hadoop Project POM .. SUCCESS [  0.716 
> s]
> [INFO] Apache Hadoop Annotations .. SUCCESS [  3.270 
> s]
> [INFO] Apache Hadoop Assemblies ... SUCCESS [  0.274 
> s]
> [INFO] Apache Hadoop Project Dist POM . SUCCESS [  1.819 
> s]
> [INFO] Apache Hadoop Maven Plugins  SUCCESS [  3.284 
> s]
> [INFO] Apache Hadoop MiniKDC .. SUCCESS [  2.863 
> s]
> [INFO] Apache Hadoop Auth . SUCCESS [  4.032 
> s]
> [INFO] Apache Hadoop Auth Examples  SUCCESS [  2.475 
> s]
> [INFO] Apache Hadoop Common ... FAILURE [ 10.458 
> s]
> [INFO] Apache Hadoop NFS .. SKIPPED
> [INFO] Apache Hadoop Common Project ... SKIPPED
> [INFO] Apache Hadoop HDFS . SKIPPED
> [INFO] Apache Hadoop HttpFS ... SKIPPED
> [INFO] Apache Hadoop HDFS BookKeeper Journal .. SKIPPED
> [INFO] Apache Hadoop HDFS-NFS . SKIPPED
> [INFO] Apache Hadoop HDFS Project . SKIPPED
> [INFO] hadoop-yarn  SKIPPED
> [INFO] hadoop-yarn-api  SKIPPED
> [INFO] hadoop-yarn-common . SKIPPED
> [INFO] hadoop-yarn-server . SKIPPED
> [INFO] hadoop-yarn-server-common .. SKIPPED
> [INFO] hadoop-yarn-server-nodemanager . SKIPPED
> [INFO] hadoop-yarn-server-web-proxy ... SKIPPED
> [INFO] hadoop-yarn-server-resourcemanager . SKIPPED
> [INFO] hadoop-yarn-server-tests ... SKIPPED
> [INFO] hadoop-yarn-client . SKIPPED
> [INFO] hadoop-yarn-applications ... SKIPPED
> [INFO] hadoop-yarn-applications-distributedshell .. SKIPPED
> [INFO] hadoop-yarn-applications-unmanaged-am-launcher . SKIPPED
> [INFO] hadoop-yarn-

[jira] [Commented] (HADOOP-10972) Native Libraries Guide contains mis-spelt build line

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14103078#comment-14103078
 ] 

Hudson commented on HADOOP-10972:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1868 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1868/])
HADOOP-10972. Native Libraries Guide contains mis-spelt build line (Peter 
Klavins via aw) (aw: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618719)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm


> Native Libraries Guide contains mis-spelt build line
> 
>
> Key: HADOOP-10972
> URL: https://issues.apache.org/jira/browse/HADOOP-10972
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0, 2.3.0
>Reporter: Peter Klavins
>Assignee: Peter Klavins
>  Labels: documentation, newbie
> Fix For: 3.0.0, 2.6.0
>
> Attachments: HADOOP-10972.patch
>
>
> The Native Libraries Guide mis-spells the define 'skipTests' with a lowercase 
> 't' in the build line. The correct build line is:
> {code:none}
> $ mvn package -Pdist,native -DskipTests -Dtar
> {code}
> Patch is to trunk, but is also valid for released versions 2.2.0, 2.3.0, 
> 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-9902) Shell script rewrite

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14103094#comment-14103094
 ] 

Hudson commented on HADOOP-9902:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1868 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1868/])
HADOOP-9902. Shell script rewrite (aw) (aw: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618847)
* 
/hadoop/common/trunk/hadoop-assemblies/src/main/resources/assemblies/hadoop-dist.xml
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemon.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-daemons.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/hadoop-layout.sh.example
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/rcc
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/slaves.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/start-all.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/bin/stop-all.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-env.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/distribute-exclude.sh
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs-config.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/refresh-namenodes.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-balancer.sh
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-dfs.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/start-secure-dns.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/stop-balancer.sh
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/stop-dfs.sh
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/stop-secure-dns.sh
* /hadoop/common/trunk/hadoop-mapreduce-project/bin/mapred
* /hadoop/common/trunk/hadoop-mapreduce-project/bin/mapred-config.sh
* /hadoop/common/trunk/hadoop-mapreduce-project/bin/mr-jobhistory-daemon.sh
* /hadoop/common/trunk/hadoop-mapreduce-project/conf/mapred-env.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/slaves.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/start-yarn.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/stop-yarn.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn-config.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn-daemon.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/bin/yarn-daemons.sh
* /hadoop/common/trunk/hadoop-yarn-project/hadoop-yarn/conf/yarn-env.sh


> Shell script rewrite
> 
>
> Key: HADOOP-9902
> URL: https://issues.apache.org/jira/browse/HADOOP-9902
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: releasenotes
> Fix For: 3.0.0
>
> Attachments: HADOOP-9902-10.patch, HADOOP-9902-11.patch, 
> HADOOP-9902-12.patch, HADOOP-9902-13-branch-2.patch, HADOOP-9902-13.patch, 
> HADOOP-9902-14.patch, HADOOP-9902-15.patch, HADOOP-9902-16.patch, 
> HADOOP-9902-2.patch, HADOOP-9902-3.patch, HADOOP-9902-4.patch, 
> HADOOP-9902-5.patch, HADOOP-9902-6.patch, HADOOP-9902-7.patch, 
> HADOOP-9902-8.patch, HADOOP-9902-9.patch, HADOOP-9902.patch, HADOOP-9902.txt, 
> hadoop-9902-1.patch, more-info.txt
>
>
> Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10975) org.apache.hadoop.util.DataChecksum should support native checksum calculation

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14103084#comment-14103084
 ] 

Hudson commented on HADOOP-10975:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1868 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1868/])
Update CHANGES.txt for HDFS-6561 which was renamed to HADOOP-10975. (todd: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618686)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> org.apache.hadoop.util.DataChecksum should support native checksum calculation
> --
>
> Key: HADOOP-10975
> URL: https://issues.apache.org/jira/browse/HADOOP-10975
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Reporter: James Thomas
>Assignee: James Thomas
> Fix For: 2.6.0
>
> Attachments: HDFS-6561.2.patch, HDFS-6561.3.patch, HDFS-6561.4.patch, 
> HDFS-6561.5.patch, HDFS-6561.patch, hdfs-6561-just-hadoop-changes.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10973) Native Libraries Guide contains format error

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14103080#comment-14103080
 ] 

Hudson commented on HADOOP-10973:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1868 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1868/])
HADOOP-10973. Native Libraries Guide contains format error. (Contributed by 
Peter Klavins) (arp: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618682)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/site/apt/NativeLibraries.apt.vm


> Native Libraries Guide contains format error
> 
>
> Key: HADOOP-10973
> URL: https://issues.apache.org/jira/browse/HADOOP-10973
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Peter Klavins
>Assignee: Peter Klavins
>Priority: Minor
>  Labels: apt, documentation, xdocs
> Fix For: 3.0.0, 2.6.0
>
> Attachments: HADOOP-10973.patch
>
>
> The move from xdocs to APT introduced a formatting bug so that the sub-list 
> under Usage point 4 was merged into the text itself and no longer appeared as 
> a sub-list. Compare xdocs version 
> http://hadoop.apache.org/docs/r1.2.1/native_libraries.html#Native+Hadoop+Library
>  to APT version 
> http://hadoop.apache.org/docs/r2.5.0/hadoop-project-dist/hadoop-common/NativeLibraries.html#Native_Hadoop_Library.
> The patch is to trunk, but is also valid for released versions 0.23.11, 
> 2.2.0, 2.3.0, 2.4.0, 2.4.1, and 2.5.0, and could be back-ported to them if 
> deemed necessary.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10873) Fix dead links in the API doc

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14103087#comment-14103087
 ] 

Hudson commented on HADOOP-10873:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1868 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1868/])
HADOOP-10873. Fix dead link in Configuration javadoc (Akira AJISAKA via aw) 
(aw: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618721)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java


> Fix dead links in the API doc
> -
>
> Key: HADOOP-10873
> URL: https://issues.apache.org/jira/browse/HADOOP-10873
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira AJISAKA
>
> There are a lot of dead links in [Hadoop API 
> doc|http://hadoop.apache.org/docs/r2.4.1/api/]. We should fix them.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10975) org.apache.hadoop.util.DataChecksum should support native checksum calculation

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14103085#comment-14103085
 ] 

Hudson commented on HADOOP-10975:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1868 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1868/])
Update CHANGES.txt for HDFS-6561 which was renamed to HADOOP-10975. (todd: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618686)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


> org.apache.hadoop.util.DataChecksum should support native checksum calculation
> --
>
> Key: HADOOP-10975
> URL: https://issues.apache.org/jira/browse/HADOOP-10975
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance
>Reporter: James Thomas
>Assignee: James Thomas
> Fix For: 2.6.0
>
> Attachments: HDFS-6561.2.patch, HDFS-6561.3.patch, HDFS-6561.4.patch, 
> HDFS-6561.5.patch, HDFS-6561.patch, hdfs-6561-just-hadoop-changes.txt
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10059) RPC authentication and authorization metrics overflow to negative values on busy clusters

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14103079#comment-14103079
 ] 

Hudson commented on HADOOP-10059:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1868 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1868/])
HADOOP-10059. RPC authentication and authorization metrics overflow to negative 
values on busy clusters. Contributed by Tsuyoshi OZAWA and Akira AJISAKA 
(jlowe: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618659)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java


> RPC authentication and authorization metrics overflow to negative values on 
> busy clusters
> -
>
> Key: HADOOP-10059
> URL: https://issues.apache.org/jira/browse/HADOOP-10059
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 0.23.9, 2.2.0
>Reporter: Jason Lowe
>Assignee: Tsuyoshi OZAWA
>Priority: Minor
> Fix For: 3.0.0, 2.6.0
>
> Attachments: HADOOP-10059.1.patch, HADOOP-10059.2.patch
>
>
> The RPC metrics for authorization and authentication successes can easily 
> overflow to negative values on a busy cluster that has been up for a long 
> time.  We should consider providing 64-bit values for these counters.
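
As a quick illustration of the overflow (plain Java, not the metrics code 
itself):
{code}
// Illustration only: a 32-bit counter wraps to a negative value once it passes
// Integer.MAX_VALUE; a 64-bit counter does not for any realistic uptime.
int narrow = Integer.MAX_VALUE;
narrow++;                       // -2147483648
long wide = Integer.MAX_VALUE;
wide++;                         // 2147483648
System.out.println(narrow + " vs " + wide);
{code}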



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10968) hadoop native build fails to detect java_libarch on ppc64le

2014-08-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10968?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14103083#comment-14103083
 ] 

Hudson commented on HADOOP-10968:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1868 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1868/])
HADOOP-10968. hadoop native build fails to detect java_libarch on ppc64le 
(Dinar Valeev via Colin Patrick McCabe) (cmccabe: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1618919)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/src/JNIFlags.cmake


> hadoop native build fails to detect java_libarch on ppc64le
> ---
>
> Key: HADOOP-10968
> URL: https://issues.apache.org/jira/browse/HADOOP-10968
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.3.0
>Reporter: Dinar Valeev
> Fix For: 2.6.0
>
> Attachments: 0001-Set-java_libarch-for-ppc64le-2.patch
>
>
> [INFO] 
> [INFO] --- maven-antrun-plugin:1.7:run (make) @ hadoop-common ---
> [INFO] Executing tasks
> main:
>  [exec] -- The C compiler identification is GNU 4.8.3
>  [exec] -- The CXX compiler identification is GNU 4.8.3
>  [exec] -- Check for working C compiler: /usr/bin/cc
>  [exec] -- Check for working C compiler: /usr/bin/cc -- works
>  [exec] -- Detecting C compiler ABI info
>  [exec] -- Detecting C compiler ABI info - done
>  [exec] -- Check for working CXX compiler: /usr/bin/c++
>  [exec] -- Check for working CXX compiler: /usr/bin/c++ -- works
>  [exec] JAVA_HOME=, JAVA_JVM_LIBRARY=JAVA_JVM_LIBRARY-NOTFOUND
>  [exec] 
> JAVA_INCLUDE_PATH=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include, 
> JAVA_INCLUDE_PATH2=/usr/lib64/jvm/java-1.7.0-openjdk-1.7.0/include/linux
>  [exec] CMake Error at JNIFlags.cmake:114 (MESSAGE):
>  [exec]   Failed to find a viable JVM installation under JAVA_HOME.
>  [exec] Call Stack (most recent call first):
>  [exec]   CMakeLists.txt:24 (include)
>  [exec] 
>  [exec] 
>  [exec] -- Detecting CXX compiler ABI info
>  [exec] -- Detecting CXX compiler ABI info - done
>  [exec] -- Configuring incomplete, errors occurred!
>  [exec] See also 
> "/root/bigtop/build/hadoop/rpm/BUILD/hadoop-2.3.0-src/hadoop-common-project/hadoop-common/target/native/CMakeFiles/CMakeOutput.log".
> [INFO] 
> 
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] Apache Hadoop Main . SUCCESS [ 10.680 
> s]
> [INFO] Apache Hadoop Project POM .. SUCCESS [  0.716 
> s]
> [INFO] Apache Hadoop Annotations .. SUCCESS [  3.270 
> s]
> [INFO] Apache Hadoop Assemblies ... SUCCESS [  0.274 
> s]
> [INFO] Apache Hadoop Project Dist POM . SUCCESS [  1.819 
> s]
> [INFO] Apache Hadoop Maven Plugins  SUCCESS [  3.284 
> s]
> [INFO] Apache Hadoop MiniKDC .. SUCCESS [  2.863 
> s]
> [INFO] Apache Hadoop Auth . SUCCESS [  4.032 
> s]
> [INFO] Apache Hadoop Auth Examples  SUCCESS [  2.475 
> s]
> [INFO] Apache Hadoop Common ... FAILURE [ 10.458 
> s]
> [INFO] Apache Hadoop NFS .. SKIPPED
> [INFO] Apache Hadoop Common Project ... SKIPPED
> [INFO] Apache Hadoop HDFS . SKIPPED
> [INFO] Apache Hadoop HttpFS ... SKIPPED
> [INFO] Apache Hadoop HDFS BookKeeper Journal .. SKIPPED
> [INFO] Apache Hadoop HDFS-NFS . SKIPPED
> [INFO] Apache Hadoop HDFS Project . SKIPPED
> [INFO] hadoop-yarn  SKIPPED
> [INFO] hadoop-yarn-api  SKIPPED
> [INFO] hadoop-yarn-common . SKIPPED
> [INFO] hadoop-yarn-server . SKIPPED
> [INFO] hadoop-yarn-server-common .. SKIPPED
> [INFO] hadoop-yarn-server-nodemanager . SKIPPED
> [INFO] hadoop-yarn-server-web-proxy ... SKIPPED
> [INFO] hadoop-yarn-server-resourcemanager . SKIPPED
> [INFO] hadoop-yarn-server-tests ... SKIPPED
> [INFO] hadoop-yarn-client . SKIPPED
> [INFO] hadoop-yarn-applications ... SKIPPED
> [INFO] hadoop-yarn-applications-distributedshell .. SKIPPED
> [INFO] hadoop-yarn-applications-unmanaged-am-launcher . SKIPPED
> [INFO] hadoop

[jira] [Commented] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-08-19 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14103124#comment-14103124
 ] 

Haohui Mai commented on HADOOP-10911:
-

Once Hadoop itself moves over to Java 7 in 2.7, is it possible to use 
{{java.net.HttpCookie}} directly and delegate this issue to the JDK?
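
If that route is taken, a rough sketch of what parsing a Set-Cookie header with 
the JDK class could look like (the header value below is purely illustrative):
{code}
import java.net.HttpCookie;
import java.util.List;

// Illustrative only: HttpCookie.parse handles both Netscape-style and
// RFC 2109/2965-style Set-Cookie headers, so it could replace hand-rolled parsing.
List<HttpCookie> cookies = HttpCookie.parse(
    "Set-Cookie: hadoop.auth=\"u=alice&t=kerberos\"; Version=1; Path=/");
for (HttpCookie c : cookies) {
  System.out.println(c.getName() + "=" + c.getValue() + " (version " + c.getVersion() + ")");
}
{code}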

> hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
> ---
>
> Key: HADOOP-10911
> URL: https://issues.apache.org/jira/browse/HADOOP-10911
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0
>Reporter: Gregory Chanan
> Attachments: HADOOP-10911-tests.patch, HADOOP-10911.patch, 
> HADOOP-10911v2.patch
>
>
> I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
> unable to authenticate with servers running the authentication filter), even 
> with HADOOP-10710 applied.
> From my reading of the spec, the problem is as follows:
> Expires is not a valid directive according to the RFC, though it is mentioned 
> for backwards compatibility with netscape draft spec.  When httpclient sees 
> "Expires", it parses according to the netscape draft spec, but note from 
> RFC2109:
> {code}
> Note that the Expires date format contains embedded spaces, and that "old" 
> cookies did not have quotes around values. 
> {code}
> and note that AuthenticationFilter puts quotes around the value:
> https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
> So httpclient's parsing appears to be kosher.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10852) NetgroupCache is not thread-safe

2014-08-19 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-10852:
--

Attachment: HADOOP-10852.patch

Updating the patch to fix the test failures.

> NetgroupCache is not thread-safe
> 
>
> Key: HADOOP-10852
> URL: https://issues.apache.org/jira/browse/HADOOP-10852
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.5.0, 2.4.1
>Reporter: Benoy Antony
>Assignee: Benoy Antony
> Attachments: HADOOP-10852.patch, HADOOP-10852.patch
>
>
> _NetgroupCache_ internally uses two _ConcurrentHashMap_s and a boolean 
> variable to signal updates on one of the _ConcurrentHashMap_s 
> None of the functions are synchronized, and hence it is possible to have 
> unexpected results due to race conditions between different threads.
> As an example, consider the following sequence:
> Thread1 :
> {{add}} a group
> {{netgroupToUsersMap}} is updated.
> {{netgroupToUsersMapUpdated}} is set to true.
> Thread 2:
> calls {{getNetgroups}} for a user
> Due to re-ordering, {{netgroupToUsersMapUpdated=true}} is visible, but 
> updates in {{netgroupToUsersMap}} is not visible.
> Does a wrong update with older {{netgroupToUsersMap}} values. 
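
For illustration, the shape of the hazard and one common remedy (simplified; 
not the NetgroupCache code or the attached patch):
{code}
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Simplified illustration: with a plain (non-volatile) flag there is no
// happens-before edge, so a reader may observe updated == true while still
// missing the map change that preceded it in program order. Making the flag
// volatile (or synchronizing both paths) restores the ordering.
class CacheLikePattern {
  private final Map<String, Set<String>> netgroupToUsers =
      new ConcurrentHashMap<String, Set<String>>();
  private volatile boolean updated;            // volatile write/read pair

  void add(String group, Set<String> users) {
    netgroupToUsers.put(group, users);         // happens-before the volatile write
    updated = true;
  }

  boolean sawUpdate() {
    return updated;
  }
}
{code}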



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10893) isolated classloader on the client side

2014-08-19 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-10893:
-

Attachment: HADOOP-10893.patch

Updated the patch with the following changes:
- fixed RunJar.useClientClassLoader()
- renamed the test classes
- updated the patch to apply after HADOOP-9902

I'll post a separate patch for branch-2 later.

> isolated classloader on the client side
> ---
>
> Key: HADOOP-10893
> URL: https://issues.apache.org/jira/browse/HADOOP-10893
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 2.4.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-10893.patch, HADOOP-10893.patch, 
> HADOOP-10893.patch, HADOOP-10893.patch, HADOOP-10893.patch, 
> HADOOP-10893.patch, classloader-test.tar.gz
>
>
> We have the job classloader on the mapreduce tasks that run on the cluster. 
> It has a benefit of being able to isolate class space for user code and avoid 
> version clashes.
> Although it occurs less often, version clashes do occur on the client JVM. It 
> would be good to introduce an isolated classloader on the client side as well 
> to address this. A natural point to introduce this may be through RunJar, as 
> that's how most of hadoop jobs are run.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10954) Adding site documents of hadoop-tools

2014-08-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10954?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14103182#comment-14103182
 ] 

Hadoop QA commented on HADOOP-10954:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12662818/HADOOP-10954-0.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-gridmix hadoop-tools/hadoop-rumen.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4510//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4510//console

This message is automatically generated.

> Adding site documents of hadoop-tools
> -
>
> Key: HADOOP-10954
> URL: https://issues.apache.org/jira/browse/HADOOP-10954
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.5.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-10954-0.patch
>
>
> There are no pages for hadoop-tools in the site documents of branch-2 or 
> later.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10893) isolated classloader on the client side

2014-08-19 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14103194#comment-14103194
 ] 

Allen Wittenauer commented on HADOOP-10893:
---

{code}
+# If HADOOP_USE_CLIENT_CLASSLOADER is set, user classes and their dependencies
+# as defined by HADOOP_CLASSPATH and the jar as the hadoop jar argument are
+# loaded by a separate classloader. It should not be mixed with
+# HADOOP_USER_CLASSPATH_FIRST. If it is set, HADOOP_USER_CLASSPATH_FIRST is
+# ignored. Can be defined by doing
+# export HADOOP_USE_CLIENT_CLASSLOADER=true
+
+# HADOOP_CLIENT_CLASSLOADER_SYSTEM_CLASSES overrides the default definition of
+# system classes for the client classloader. The system classes are a
+# comma-separated list of classes that should be loaded from the system
+# classpath, not the user-supplied JARs, when HADOOP_USE_CLIENT_CLASSLOADER is
+# enabled. Names ending in '.' (period) are treated as package names, and names
+# starting with a '-' are treated as negative matches.
+
{code}

I'm not a fan of this wall of text sitting in hadoop-env.sh.  Ideally, this 
should really be in documentation with a very light description here; that 
second paragraph seems too much.  Additionally, burying the variable in the 
middle of the description is confusing.  It should be the last thing in the 
section so that it is clear that's what one needs to change. In other words, 
follow the pattern established elsewhere.

The change to hadoop_add_to_classpath_userpath looks fine, based upon my 
understanding of what this patch is doing.

> isolated classloader on the client side
> ---
>
> Key: HADOOP-10893
> URL: https://issues.apache.org/jira/browse/HADOOP-10893
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 2.4.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-10893.patch, HADOOP-10893.patch, 
> HADOOP-10893.patch, HADOOP-10893.patch, HADOOP-10893.patch, 
> HADOOP-10893.patch, classloader-test.tar.gz
>
>
> We have the job classloader on the mapreduce tasks that run on the cluster. 
> It has a benefit of being able to isolate class space for user code and avoid 
> version clashes.
> Although it occurs less often, version clashes do occur on the client JVM. It 
> would be good to introduce an isolated classloader on the client side as well 
> to address this. A natural point to introduce this may be through RunJar, as 
> that's how most of hadoop jobs are run.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10893) isolated classloader on the client side

2014-08-19 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14103196#comment-14103196
 ] 

Allen Wittenauer commented on HADOOP-10893:
---

OK, I see the mistake I made.  There is no example export line for 
HADOOP_CLIENT_CLASSLOADER_SYSTEM_CLASSES so I thought it was still describing 
the first one. So yeah, add that instead. ;)

> isolated classloader on the client side
> ---
>
> Key: HADOOP-10893
> URL: https://issues.apache.org/jira/browse/HADOOP-10893
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 2.4.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-10893.patch, HADOOP-10893.patch, 
> HADOOP-10893.patch, HADOOP-10893.patch, HADOOP-10893.patch, 
> HADOOP-10893.patch, classloader-test.tar.gz
>
>
> We have the job classloader on the mapreduce tasks that run on the cluster. 
> It has a benefit of being able to isolate class space for user code and avoid 
> version clashes.
> Although it occurs less often, version clashes do occur on the client JVM. It 
> would be good to introduce an isolated classloader on the client side as well 
> to address this. A natural point to introduce this may be through RunJar, as 
> that's how most of hadoop jobs are run.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10893) isolated classloader on the client side

2014-08-19 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-10893:
-

Attachment: HADOOP-10893-branch-2.patch

Posted a patch for branch-2.

> isolated classloader on the client side
> ---
>
> Key: HADOOP-10893
> URL: https://issues.apache.org/jira/browse/HADOOP-10893
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 2.4.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-10893-branch-2.patch, HADOOP-10893.patch, 
> HADOOP-10893.patch, HADOOP-10893.patch, HADOOP-10893.patch, 
> HADOOP-10893.patch, HADOOP-10893.patch, classloader-test.tar.gz
>
>
> We have the job classloader on the mapreduce tasks that run on the cluster. 
> It has a benefit of being able to isolate class space for user code and avoid 
> version clashes.
> Although it occurs less often, version clashes do occur on the client JVM. It 
> would be good to introduce an isolated classloader on the client side as well 
> to address this. A natural point to introduce this may be through RunJar, as 
> that's how most of hadoop jobs are run.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10893) isolated classloader on the client side

2014-08-19 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14103212#comment-14103212
 ] 

Sangjin Lee commented on HADOOP-10893:
--

Thanks for the review, Allen. I didn't find a suitable home for the description, 
and was following the convention prior to the change. Now that the pattern has 
changed, let me see if I can be more concise here. I'll also add an example 
export line for the latter variable.

> isolated classloader on the client side
> ---
>
> Key: HADOOP-10893
> URL: https://issues.apache.org/jira/browse/HADOOP-10893
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Affects Versions: 2.4.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-10893-branch-2.patch, HADOOP-10893.patch, 
> HADOOP-10893.patch, HADOOP-10893.patch, HADOOP-10893.patch, 
> HADOOP-10893.patch, HADOOP-10893.patch, classloader-test.tar.gz
>
>
> We have the job classloader on the mapreduce tasks that run on the cluster. 
> It has a benefit of being able to isolate class space for user code and avoid 
> version clashes.
> Although it occurs less often, version clashes do occur on the client JVM. It 
> would be good to introduce an isolated classloader on the client side as well 
> to address this. A natural point to introduce this may be through RunJar, as 
> that's how most of hadoop jobs are run.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Moved] (HADOOP-10982) Multiple Kerberos principals for KMS

2014-08-19 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang moved HDFS-6883 to HADOOP-10982:


 Target Version/s: 3.0.0  (was: 3.0.0)
Affects Version/s: (was: 3.0.0)
   3.0.0
  Key: HADOOP-10982  (was: HDFS-6883)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Multiple Kerberos principals for KMS
> 
>
> Key: HADOOP-10982
> URL: https://issues.apache.org/jira/browse/HADOOP-10982
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Alejandro Abdelnur
>
> The Key Management Server should support multiple Kerberos principals.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Moved] (HADOOP-10983) Ability to fetch the KMS ACLs for a given key

2014-08-19 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang moved HDFS-6882 to HADOOP-10983:


 Target Version/s: 3.0.0  (was: 3.0.0)
Affects Version/s: (was: 3.0.0)
   3.0.0
  Key: HADOOP-10983  (was: HDFS-6882)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Ability to fetch the KMS ACLs for a given key
> -
>
> Key: HADOOP-10983
> URL: https://issues.apache.org/jira/browse/HADOOP-10983
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Alejandro Abdelnur
>
> On HDFS-6134, [~sureshms] asked for APIs to be able to compare filesystem 
> permissions and KeyProvider permissions to diagnose where they might differ.
> We already have APIs in HDFS-6134 to query the EZ of a path and the key for 
> each EZ, so the only missing link is a KMS API that allows us to query the 
> ACLs for the key.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

