[jira] [Commented] (HADOOP-12589) Fix intermittent test failure of TestCopyPreserveFlag

2015-11-23 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15022718#comment-15022718
 ] 

Tsuyoshi Ozawa commented on HADOOP-12589:
-

[~cnauroth] thank you for taking a look. Currently, does a single Jenkins 
server run multiple jobs on the same machine, or run tests in parallel? If so, 
we can either switch off parallel test execution or fix the test to handle 
mkdirs failures.
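The second option — making the test setup tolerate a racing mkdir — could look roughly like the following. This is a hedged sketch in plain Java; the helper name and paths are illustrative, not the actual TestCopyPreserveFlag code.

```java
import java.io.File;
import java.io.IOException;

// Hypothetical helper for a test setup that may race on directory creation.
// File.mkdirs() returns false both when creation fails and when the directory
// already exists (e.g. created by a parallel run), so we re-check
// isDirectory() before treating a false return as an error.
public class MkdirSetupSketch {
    static File ensureTestDir(File base, String name) throws IOException {
        File dir = new File(base, name);
        if (!dir.mkdirs() && !dir.isDirectory()) {
            throw new IOException("Mkdirs failed to create " + dir
                + " (exists=" + dir.exists() + ")");
        }
        return dir;
    }
}
```

With this shape, a second concurrent creator of the same directory no longer turns into an IOException.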

> Fix intermittent test failure of TestCopyPreserveFlag 
> --
>
> Key: HADOOP-12589
> URL: https://issues.apache.org/jira/browse/HADOOP-12589
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
> Environment: jenkins
>Reporter: Tsuyoshi Ozawa
>
> Found this issue on HADOOP-11149.
> {quote}
> Tests run: 8, Failures: 0, Errors: 8, Skipped: 0, Time elapsed: 0.949 sec <<< 
> FAILURE! - in org.apache.hadoop.fs.shell.TestCopyPreserveFlag
> testDirectoryCpWithP(org.apache.hadoop.fs.shell.TestCopyPreserveFlag)  Time 
> elapsed: 0.616 sec  <<< ERROR!
> java.io.IOException: Mkdirs failed to create d0 (exists=false, 
> cwd=/testptch/hadoop/hadoop-common-project/hadoop-common/target/test/data/2/testStat)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:449)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:435)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:856)
>   at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1150)
>   at 
> org.apache.hadoop.fs.shell.TestCopyPreserveFlag.initialize(TestCopyPreserveFlag.java:72)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9822) create constant MAX_CAPACITY in RetryCache rather than hard-coding 16 in RetryCache constructor

2015-11-23 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-9822:
---
Attachment: HADOOP-9822.4.patch

[~wheat9] thank you for taking a look. Updated the patch to make the constant 
{{private static final}}.
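The change amounts to extracting the magic number into a named constant. A minimal self-contained sketch follows; the class and constant names mirror the JIRA discussion but are assumptions, not the actual Hadoop source.

```java
// Hedged sketch of the constant extraction described in the patch.
public class RetryCacheSketch {
    // Named constant replaces the magic number 16, which otherwise reads
    // the same as the unrelated ClientId.BYTE_LENGTH (also 16).
    private static final int MAX_CAPACITY = 16;

    private final int capacity;

    public RetryCacheSketch() {
        this.capacity = MAX_CAPACITY;  // was: this.capacity = 16;
    }

    public int getCapacity() {
        return capacity;
    }
}
```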

> create constant MAX_CAPACITY in RetryCache rather than hard-coding 16 in 
> RetryCache constructor
> ---
>
> Key: HADOOP-9822
> URL: https://issues.apache.org/jira/browse/HADOOP-9822
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9822.1.patch, HADOOP-9822.2.patch, 
> HADOOP-9822.3.patch, HADOOP-9822.4.patch
>
>
> The magic number "16" is also used in ClientId.BYTE_LENGTH, so hard-coding 
> magic number "16" is a bit confusing.





[jira] [Comment Edited] (HADOOP-10406) TestIPC.testIpcWithReaderQueuing may fail

2015-11-22 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15021408#comment-15021408
 ] 

Tsuyoshi Ozawa edited comment on HADOOP-10406 at 11/23/15 2:33 AM:
---

Reopening this issue since it still seems to occur after the fix:

{quote}
Tests run: 34, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 115.75 sec <<< 
FAILURE! - in org.apache.hadoop.ipc.TestIPC
testIpcWithReaderQueuing(org.apache.hadoop.ipc.TestIPC)  Time elapsed: 0.074 
sec  <<< FAILURE!
java.lang.AssertionError: expected:<5> but was:<10>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at org.apache.hadoop.ipc.TestIPC.checkBlocking(TestIPC.java:804)
at 
org.apache.hadoop.ipc.TestIPC.testIpcWithReaderQueuing(TestIPC.java:695)

testSerial(org.apache.hadoop.ipc.TestIPC)  Time elapsed: 60.037 sec  <<< ERROR!
java.lang.Exception: test timed out after 60000 milliseconds
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
at 
java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1468)
at 
org.apache.hadoop.ipc.Client$ClientExecutorServiceFactory.unrefAndCleanup(Client.java:196)
at org.apache.hadoop.ipc.Client.stop(Client.java:1287)
at org.apache.hadoop.ipc.TestIPC.internalTestSerial(TestIPC.java:287)
at org.apache.hadoop.ipc.TestIPC.testSerial(TestIPC.java:262)
{quote}


was (Author: ozawa):
Reopening this issue since this issue seems to happen after the fix.a

{quote}
Tests run: 34, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 115.75 sec <<< 
FAILURE! - in org.apache.hadoop.ipc.TestIPC
testIpcWithReaderQueuing(org.apache.hadoop.ipc.TestIPC)  Time elapsed: 0.074 
sec  <<< FAILURE!
java.lang.AssertionError: expected:<5> but was:<10>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at org.apache.hadoop.ipc.TestIPC.checkBlocking(TestIPC.java:804)
at 
org.apache.hadoop.ipc.TestIPC.testIpcWithReaderQueuing(TestIPC.java:695)

testSerial(org.apache.hadoop.ipc.TestIPC)  Time elapsed: 60.037 sec  <<< ERROR!
java.lang.Exception: test timed out after 60000 milliseconds
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
at 
java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1468)
at 
org.apache.hadoop.ipc.Client$ClientExecutorServiceFactory.unrefAndCleanup(Client.java:196)
at org.apache.hadoop.ipc.Client.stop(Client.java:1287)
at org.apache.hadoop.ipc.TestIPC.internalTestSerial(TestIPC.java:287)
at org.apache.hadoop.ipc.TestIPC.testSerial(TestIPC.java:262)
{quote}

> TestIPC.testIpcWithReaderQueuing may fail
> -
>
> Key: HADOOP-10406
> URL: https://issues.apache.org/jira/browse/HADOOP-10406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiao Chen
> Fix For: 2.8.0
>
> Attachments: HADOOP-10406.001.patch, HADOOP-10406.002.patch
>
>
> The test may fail with AssertionError.  The value 
> server.getNumOpenConnections() could be larger than maxAccept; see comments 
> for more details.





[jira] [Commented] (HADOOP-11149) Increase the timeout of TestZKFailoverController

2015-11-22 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15021480#comment-15021480
 ] 

Tsuyoshi Ozawa commented on HADOOP-11149:
-

[~wheat9] FYI: I committed to trunk using the GitHub integration: 
https://wiki.apache.org/hadoop/GithubIntegration.  It works well :-) Thank you 
for committing to branch-2.

Thanks [~ste...@apache.org] for the contribution!



> Increase the timeout of TestZKFailoverController
> 
>
> Key: HADOOP-11149
> URL: https://issues.apache.org/jira/browse/HADOOP-11149
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0, 3.0.0
> Environment: Jenkins
>Reporter: Rajat Jain
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
> Attachments: HADOOP-11149-001.patch, HADOOP-11149-002.patch
>
>
> {code}
> Running org.apache.hadoop.ha.TestZKFailoverController
> Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 56.875 sec 
> <<< FAILURE! - in org.apache.hadoop.ha.TestZKFailoverController
> testGracefulFailover(org.apache.hadoop.ha.TestZKFailoverController)  Time 
> elapsed: 25.045 sec  <<< ERROR!
> java.lang.Exception: test timed out after 25000 milliseconds
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.waitForActiveAttempt(ZKFailoverController.java:467)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.doGracefulFailover(ZKFailoverController.java:657)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.access$400(ZKFailoverController.java:61)
>   at 
> org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:602)
>   at 
> org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:599)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1621)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.gracefulFailoverToYou(ZKFailoverController.java:599)
>   at 
> org.apache.hadoop.ha.ZKFCRpcServer.gracefulFailover(ZKFCRpcServer.java:94)
>   at 
> org.apache.hadoop.ha.TestZKFailoverController.testGracefulFailover(TestZKFailoverController.java:448)
> Results :
> Tests in error:
>   TestZKFailoverController.testGracefulFailover:448->Object.wait:-2 »  test 
> time...
> {code}
> Running on centos6.5





[jira] [Updated] (HADOOP-12588) Fix intermittent test failure of TestGangliaMetrics

2015-11-22 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12588:

Summary: Fix intermittent test failure of TestGangliaMetrics  (was: 
TestGangliaMetrics fails by "Missing metrics: test.s1rec.Xxx")

> Fix intermittent test failure of TestGangliaMetrics
> ---
>
> Key: HADOOP-12588
> URL: https://issues.apache.org/jira/browse/HADOOP-12588
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>
> Jenkins found this test failure on HADOOP-11149.
> {quote}
> Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.773 sec <<< 
> FAILURE! - in org.apache.hadoop.metrics2.impl.TestGangliaMetrics
> testGangliaMetrics2(org.apache.hadoop.metrics2.impl.TestGangliaMetrics)  Time 
> elapsed: 0.39 sec  <<< FAILURE!
> java.lang.AssertionError: Missing metrics: test.s1rec.Xxx
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.checkMetrics(TestGangliaMetrics.java:159)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.testGangliaMetrics2(TestGangliaMetrics.java:137)
> {quote}





[jira] [Reopened] (HADOOP-10406) TestIPC.testIpcWithReaderQueuing may fail

2015-11-22 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa reopened HADOOP-10406:
-

Reopening this issue since it still seems to occur after the fix:

{quote}
Tests run: 34, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 115.75 sec <<< 
FAILURE! - in org.apache.hadoop.ipc.TestIPC
testIpcWithReaderQueuing(org.apache.hadoop.ipc.TestIPC)  Time elapsed: 0.074 
sec  <<< FAILURE!
java.lang.AssertionError: expected:<5> but was:<10>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at org.apache.hadoop.ipc.TestIPC.checkBlocking(TestIPC.java:804)
at 
org.apache.hadoop.ipc.TestIPC.testIpcWithReaderQueuing(TestIPC.java:695)

testSerial(org.apache.hadoop.ipc.TestIPC)  Time elapsed: 60.037 sec  <<< ERROR!
java.lang.Exception: test timed out after 60000 milliseconds
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
at 
java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1468)
at 
org.apache.hadoop.ipc.Client$ClientExecutorServiceFactory.unrefAndCleanup(Client.java:196)
at org.apache.hadoop.ipc.Client.stop(Client.java:1287)
at org.apache.hadoop.ipc.TestIPC.internalTestSerial(TestIPC.java:287)
at org.apache.hadoop.ipc.TestIPC.testSerial(TestIPC.java:262)
{quote}
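One common way to de-flake an assertion like {{expected:<5> but was:<10>}} on a value that drains asynchronously — such as {{server.getNumOpenConnections()}} — is to poll with a deadline instead of asserting a single snapshot. This is only an illustrative sketch, not the actual HADOOP-10406 patch:

```java
// Hypothetical polling assertion: retry until the observed value reaches
// the expected one, failing only after a deadline. This avoids asserting
// on a transient snapshot that may still be above the target.
public class PollingAssertSketch {
    interface IntSupplierLike { int get(); }

    static void waitForValue(IntSupplierLike actual, int expected,
                             long timeoutMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (actual.get() == expected) {
                return;  // condition reached; no flaky one-shot assert
            }
            Thread.sleep(50);  // back off before re-checking
        }
        throw new AssertionError(
            "expected:<" + expected + "> but was:<" + actual.get() + ">");
    }
}
```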

> TestIPC.testIpcWithReaderQueuing may fail
> -
>
> Key: HADOOP-10406
> URL: https://issues.apache.org/jira/browse/HADOOP-10406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiao Chen
> Fix For: 2.8.0
>
> Attachments: HADOOP-10406.001.patch, HADOOP-10406.002.patch
>
>
> The test may fail with AssertionError.  The value 
> server.getNumOpenConnections() could be larger than maxAccept; see comments 
> for more details.





[jira] [Commented] (HADOOP-11149) Increase the timeout of TestZKFailoverController

2015-11-22 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15021483#comment-15021483
 ] 

Tsuyoshi Ozawa commented on HADOOP-11149:
-

[~wheat9] oh, our {{git push}}es crossed. That can cause conflicts in 
CHANGES.txt - should I revert my commit?

> Increase the timeout of TestZKFailoverController
> 
>
> Key: HADOOP-11149
> URL: https://issues.apache.org/jira/browse/HADOOP-11149
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0, 3.0.0
> Environment: Jenkins
>Reporter: Rajat Jain
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
> Attachments: HADOOP-11149-001.patch, HADOOP-11149-002.patch
>
>
> {code}
> Running org.apache.hadoop.ha.TestZKFailoverController
> Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 56.875 sec 
> <<< FAILURE! - in org.apache.hadoop.ha.TestZKFailoverController
> testGracefulFailover(org.apache.hadoop.ha.TestZKFailoverController)  Time 
> elapsed: 25.045 sec  <<< ERROR!
> java.lang.Exception: test timed out after 25000 milliseconds
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.waitForActiveAttempt(ZKFailoverController.java:467)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.doGracefulFailover(ZKFailoverController.java:657)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.access$400(ZKFailoverController.java:61)
>   at 
> org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:602)
>   at 
> org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:599)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1621)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.gracefulFailoverToYou(ZKFailoverController.java:599)
>   at 
> org.apache.hadoop.ha.ZKFCRpcServer.gracefulFailover(ZKFCRpcServer.java:94)
>   at 
> org.apache.hadoop.ha.TestZKFailoverController.testGracefulFailover(TestZKFailoverController.java:448)
> Results :
> Tests in error:
>   TestZKFailoverController.testGracefulFailover:448->Object.wait:-2 »  test 
> time...
> {code}
> Running on centos6.5





[jira] [Created] (HADOOP-12588) TestGangliaMetrics fails by

2015-11-22 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-12588:
---

 Summary: TestGangliaMetrics fails by 
 Key: HADOOP-12588
 URL: https://issues.apache.org/jira/browse/HADOOP-12588
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa


Jenkins found this test failure on HADOOP-11149.

{quote}
Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.773 sec <<< 
FAILURE! - in org.apache.hadoop.metrics2.impl.TestGangliaMetrics
testGangliaMetrics2(org.apache.hadoop.metrics2.impl.TestGangliaMetrics)  Time 
elapsed: 0.39 sec  <<< FAILURE!
java.lang.AssertionError: Missing metrics: test.s1rec.Xxx
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.metrics2.impl.TestGangliaMetrics.checkMetrics(TestGangliaMetrics.java:159)
at 
org.apache.hadoop.metrics2.impl.TestGangliaMetrics.testGangliaMetrics2(TestGangliaMetrics.java:137)
{quote}





[jira] [Updated] (HADOOP-12588) TestGangliaMetrics fails by "Missing metrics: test.s1rec.Xxx"

2015-11-22 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12588:

Summary: TestGangliaMetrics fails by "Missing metrics: test.s1rec.Xxx"  
(was: TestGangliaMetrics fails by )

> TestGangliaMetrics fails by "Missing metrics: test.s1rec.Xxx"
> -
>
> Key: HADOOP-12588
> URL: https://issues.apache.org/jira/browse/HADOOP-12588
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>
> Jenkins found this test failure on HADOOP-11149.
> {quote}
> Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 0.773 sec <<< 
> FAILURE! - in org.apache.hadoop.metrics2.impl.TestGangliaMetrics
> testGangliaMetrics2(org.apache.hadoop.metrics2.impl.TestGangliaMetrics)  Time 
> elapsed: 0.39 sec  <<< FAILURE!
> java.lang.AssertionError: Missing metrics: test.s1rec.Xxx
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.checkMetrics(TestGangliaMetrics.java:159)
>   at 
> org.apache.hadoop.metrics2.impl.TestGangliaMetrics.testGangliaMetrics2(TestGangliaMetrics.java:137)
> {quote}





[jira] [Created] (HADOOP-12589) Fix intermittent test failure of TestCopyPreserveFlag

2015-11-22 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-12589:
---

 Summary: Fix intermittent test failure of TestCopyPreserveFlag 
 Key: HADOOP-12589
 URL: https://issues.apache.org/jira/browse/HADOOP-12589
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa


Found this issue on HADOOP-11149.

{quote}
Tests run: 8, Failures: 0, Errors: 8, Skipped: 0, Time elapsed: 0.949 sec <<< 
FAILURE! - in org.apache.hadoop.fs.shell.TestCopyPreserveFlag
testDirectoryCpWithP(org.apache.hadoop.fs.shell.TestCopyPreserveFlag)  Time 
elapsed: 0.616 sec  <<< ERROR!
java.io.IOException: Mkdirs failed to create d0 (exists=false, 
cwd=/testptch/hadoop/hadoop-common-project/hadoop-common/target/test/data/2/testStat)
at 
org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:449)
at 
org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:435)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:856)
at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1150)
at 
org.apache.hadoop.fs.shell.TestCopyPreserveFlag.initialize(TestCopyPreserveFlag.java:72)
{quote}





[jira] [Commented] (HADOOP-11149) Increase the timeout of TestZKFailoverController

2015-11-22 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15021475#comment-15021475
 ] 

Tsuyoshi Ozawa commented on HADOOP-11149:
-

Checking this in since the patch for this issue only touches 
TestZKFailoverController and MiniZKFCCluster. TestGangliaMetrics, TestIPC, and 
TestCopyPreserveFlag pass locally, so they should be tracked in separate JIRAs:
* TestIPC.testIpcWithReaderQueuing: reopened HADOOP-10406
* TestGangliaMetrics: HADOOP-12588
* TestCopyPreserveFlag: it looks like a permission or timing issue. 
Opened HADOOP-12589
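For reference, the failure mode the TestZKFailoverController patch addresses is a per-test time budget that is too tight for loaded Jenkins hosts; JUnit 4 expresses the budget as {{@Test(timeout = ...)}}. The effect can be sketched in plain Java (illustrative only, with made-up values):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical sketch of a per-task time budget: a task that runs longer
// than the budget is reported as a timeout, just as the quoted trace
// reports "test timed out after 25000 milliseconds".
public class TimeoutSketch {
    static boolean runWithTimeout(Runnable task, long timeoutMs) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> f = pool.submit(task);
        try {
            f.get(timeoutMs, TimeUnit.MILLISECONDS);
            return true;   // finished within the budget
        } catch (TimeoutException e) {
            f.cancel(true);
            return false;  // analogous to the JUnit timeout failure
        } catch (Exception e) {
            return false;  // interrupted or failed for another reason
        } finally {
            pool.shutdownNow();
        }
    }
}
```

Raising the budget (the fix here) makes a slow-but-correct run return success instead of a spurious timeout.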



> Increase the timeout of TestZKFailoverController
> 
>
> Key: HADOOP-11149
> URL: https://issues.apache.org/jira/browse/HADOOP-11149
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0, 3.0.0
> Environment: Jenkins
>Reporter: Rajat Jain
>Assignee: Steve Loughran
> Attachments: HADOOP-11149-001.patch, HADOOP-11149-002.patch
>
>
> {code}
> Running org.apache.hadoop.ha.TestZKFailoverController
> Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 56.875 sec 
> <<< FAILURE! - in org.apache.hadoop.ha.TestZKFailoverController
> testGracefulFailover(org.apache.hadoop.ha.TestZKFailoverController)  Time 
> elapsed: 25.045 sec  <<< ERROR!
> java.lang.Exception: test timed out after 25000 milliseconds
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.waitForActiveAttempt(ZKFailoverController.java:467)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.doGracefulFailover(ZKFailoverController.java:657)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.access$400(ZKFailoverController.java:61)
>   at 
> org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:602)
>   at 
> org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:599)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1621)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.gracefulFailoverToYou(ZKFailoverController.java:599)
>   at 
> org.apache.hadoop.ha.ZKFCRpcServer.gracefulFailover(ZKFCRpcServer.java:94)
>   at 
> org.apache.hadoop.ha.TestZKFailoverController.testGracefulFailover(TestZKFailoverController.java:448)
> Results :
> Tests in error:
>   TestZKFailoverController.testGracefulFailover:448->Object.wait:-2 »  test 
> time...
> {code}
> Running on centos6.5





[jira] [Commented] (HADOOP-11636) Several tests are not stable (OpenJDK - Ubuntu - x86_64) V2.6.0

2015-11-22 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15021176#comment-15021176
 ] 

Tsuyoshi Ozawa commented on HADOOP-11636:
-

[~tony.r...@atos.net] Thanks for reporting. It's a good time to check these 
tests again now.

> Several tests are not stable (OpenJDK - Ubuntu - x86_64) V2.6.0
> ---
>
> Key: HADOOP-11636
> URL: https://issues.apache.org/jira/browse/HADOOP-11636
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
> Environment: OpenJDK 1.7 - Ubuntu - x86_64
>Reporter: Tony Reix
>
> I've run all the Hadoop 2.6.0 tests many times (16 for now).
> Using a tool, I can see that 30 tests are unstable.
> Unstable means that the result (number of failures and errors) varies 
> between runs.





[jira] [Commented] (HADOOP-11149) TestZKFailoverController times out

2015-11-22 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15021184#comment-15021184
 ] 

Tsuyoshi Ozawa commented on HADOOP-11149:
-

+1, checking this in.

> TestZKFailoverController times out
> --
>
> Key: HADOOP-11149
> URL: https://issues.apache.org/jira/browse/HADOOP-11149
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0, 3.0.0
> Environment: Jenkins
>Reporter: Rajat Jain
>Assignee: Steve Loughran
> Attachments: HADOOP-11149-001.patch, HADOOP-11149-002.patch
>
>
> {code}
> Running org.apache.hadoop.ha.TestZKFailoverController
> Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 56.875 sec 
> <<< FAILURE! - in org.apache.hadoop.ha.TestZKFailoverController
> testGracefulFailover(org.apache.hadoop.ha.TestZKFailoverController)  Time 
> elapsed: 25.045 sec  <<< ERROR!
> java.lang.Exception: test timed out after 25000 milliseconds
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.waitForActiveAttempt(ZKFailoverController.java:467)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.doGracefulFailover(ZKFailoverController.java:657)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.access$400(ZKFailoverController.java:61)
>   at 
> org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:602)
>   at 
> org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:599)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1621)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.gracefulFailoverToYou(ZKFailoverController.java:599)
>   at 
> org.apache.hadoop.ha.ZKFCRpcServer.gracefulFailover(ZKFCRpcServer.java:94)
>   at 
> org.apache.hadoop.ha.TestZKFailoverController.testGracefulFailover(TestZKFailoverController.java:448)
> Results :
> Tests in error:
>   TestZKFailoverController.testGracefulFailover:448->Object.wait:-2 »  test 
> time...
> {code}
> Running on centos6.5





[jira] [Comment Edited] (HADOOP-11149) TestZKFailoverController times out

2015-11-22 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15021184#comment-15021184
 ] 

Tsuyoshi Ozawa edited comment on HADOOP-11149 at 11/22/15 8:53 PM:
---

+1, after Jenkins passes, checking this in.


was (Author: ozawa):
+1, checking this in.

> TestZKFailoverController times out
> --
>
> Key: HADOOP-11149
> URL: https://issues.apache.org/jira/browse/HADOOP-11149
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0, 3.0.0
> Environment: Jenkins
>Reporter: Rajat Jain
>Assignee: Steve Loughran
> Attachments: HADOOP-11149-001.patch, HADOOP-11149-002.patch
>
>
> {code}
> Running org.apache.hadoop.ha.TestZKFailoverController
> Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 56.875 sec 
> <<< FAILURE! - in org.apache.hadoop.ha.TestZKFailoverController
> testGracefulFailover(org.apache.hadoop.ha.TestZKFailoverController)  Time 
> elapsed: 25.045 sec  <<< ERROR!
> java.lang.Exception: test timed out after 25000 milliseconds
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.waitForActiveAttempt(ZKFailoverController.java:467)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.doGracefulFailover(ZKFailoverController.java:657)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.access$400(ZKFailoverController.java:61)
>   at 
> org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:602)
>   at 
> org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:599)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1621)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.gracefulFailoverToYou(ZKFailoverController.java:599)
>   at 
> org.apache.hadoop.ha.ZKFCRpcServer.gracefulFailover(ZKFCRpcServer.java:94)
>   at 
> org.apache.hadoop.ha.TestZKFailoverController.testGracefulFailover(TestZKFailoverController.java:448)
> Results :
> Tests in error:
>   TestZKFailoverController.testGracefulFailover:448->Object.wait:-2 »  test 
> time...
> {code}
> Running on centos6.5





[jira] [Commented] (HADOOP-12575) Build instruction for docker toolbox

2015-11-19 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013152#comment-15013152
 ] 

Tsuyoshi Ozawa commented on HADOOP-12575:
-

+1, checking this in.

> Build instruction for docker toolbox
> 
>
> Key: HADOOP-12575
> URL: https://issues.apache.org/jira/browse/HADOOP-12575
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Trivial
>  Labels: docker, documentation
> Attachments: HADOOP-12575.01.patch, HADOOP-12575.02.patch, 
> HADOOP-12575.03.patch
>
>
> Currently, docker on Mac OS X is mainly used via docker toolbox and 
> docker-machine; these tools supersede the now-deprecated boot2docker. 
> (https://docs.docker.com/engine/installation/mac/)
> It might be better to add instructions for docker toolbox and 
> docker-machine when using {{start-build-env.sh}}.





[jira] [Commented] (HADOOP-12582) Using BytesWritable's getLength() and getBytes() instead of get() and getSize()

2015-11-19 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013139#comment-15013139
 ] 

Tsuyoshi Ozawa commented on HADOOP-12582:
-

+1, checking this in.

> Using BytesWritable's getLength() and getBytes() instead of get() and 
> getSize()
> ---
>
> Key: HADOOP-12582
> URL: https://issues.apache.org/jira/browse/HADOOP-12582
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
>  Labels: newbie
> Attachments: HADOOP-12582.00.patch
>
>
> BytesWritable's deprecated methods, get() and getSize(), are still used in 
> some tests: TestTFileSeek, TestTFileSeqFileComparison, TestSequenceFile, and 
> so on. We could also remove the deprecated methods entirely if this targets 
> 3.0.0.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8084/artifact/patchprocess/diff-compile-javac-root-jdk1.7.0_85.txt
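The migration is mechanical at each call site: {{get()}} becomes {{getBytes()}} and {{getSize()}} becomes {{getLength()}}. A self-contained sketch using a minimal stand-in class follows; in real Hadoop code only the call sites against {{org.apache.hadoop.io.BytesWritable}} change.

```java
import java.util.Arrays;

// Sketch of the mechanical replacement the issue asks for. The stand-in
// class mimics the relevant slice of BytesWritable's API so the example
// compiles on its own.
public class BytesWritableMigration {
    // Minimal stand-in for org.apache.hadoop.io.BytesWritable.
    static class BytesWritableLike {
        private final byte[] bytes;
        private final int length;
        BytesWritableLike(byte[] b) { this.bytes = b; this.length = b.length; }
        @Deprecated byte[] get() { return getBytes(); }    // old API
        @Deprecated int getSize() { return getLength(); }  // old API
        byte[] getBytes() { return bytes; }                // replacement
        int getLength() { return length; }                 // replacement
    }

    static byte[] copyValue(BytesWritableLike w) {
        // Before: Arrays.copyOf(w.get(), w.getSize());
        return Arrays.copyOf(w.getBytes(), w.getLength());
    }
}
```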





[jira] [Updated] (HADOOP-12575) Add build instruction for docker toolbox instead of boot2docker

2015-11-19 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12575:

Summary: Add build instruction for docker toolbox instead of boot2docker  
(was: Build instruction for docker toolbox)

> Add build instruction for docker toolbox instead of boot2docker
> ---
>
> Key: HADOOP-12575
> URL: https://issues.apache.org/jira/browse/HADOOP-12575
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Trivial
>  Labels: docker, documentation
> Attachments: HADOOP-12575.01.patch, HADOOP-12575.02.patch, 
> HADOOP-12575.03.patch
>
>
> Currently, docker on Mac OS X is mainly used via docker toolbox and 
> docker-machine; these tools supersede the now-deprecated boot2docker. 
> (https://docs.docker.com/engine/installation/mac/)
> It might be better to add instructions for docker toolbox and 
> docker-machine when using {{start-build-env.sh}}.





[jira] [Resolved] (HADOOP-12578) change from boot2docker to docker-machine

2015-11-19 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa resolved HADOOP-12578.
-
Resolution: Duplicate

[~lewuathe] fixed this in HADOOP-12575. Thanks for reporting, Allen :-)

> change from boot2docker to docker-machine
> -
>
> Key: HADOOP-12578
> URL: https://issues.apache.org/jira/browse/HADOOP-12578
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Allen Wittenauer
>
> boot2docker on OS X appears to be deprecated.  We should rewrite the 
> instructions, scripts, etc, to use docker-machine.





[jira] [Updated] (HADOOP-12575) Add build instruction for docker toolbox instead of boot2docker

2015-11-19 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12575:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~lewuathe] for your contribution.

> Add build instruction for docker toolbox instead of boot2docker
> ---
>
> Key: HADOOP-12575
> URL: https://issues.apache.org/jira/browse/HADOOP-12575
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Trivial
>  Labels: docker, documentation
> Fix For: 2.8.0
>
> Attachments: HADOOP-12575.01.patch, HADOOP-12575.02.patch, 
> HADOOP-12575.03.patch
>
>
> Currently, docker on Mac OS X is mainly used via docker toolbox and 
> docker-machine; these tools deprecate boot2docker. 
> (https://docs.docker.com/engine/installation/mac/)
> It might be better to add instructions for docker toolbox and 
> docker-machine when using {{start-build-env.sh}}.





[jira] [Commented] (HADOOP-12575) Build instruction for docker toolbox

2015-11-19 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013114#comment-15013114
 ] 

Tsuyoshi Ozawa commented on HADOOP-12575:
-

[~lewuathe] thank you for the quick update! As [~mzuehlke] linked, changing from 
boot2docker to docker-machine looks better. Could you remove the section 
about boot2docker and update the first sentence to {{You can use docker toolbox 
as described in http://docs.docker.com/mac/step_one/.}}? Also, I found that the 
indentation is wrong after applying the patch. Could you fix it?

{quote}
On Mac:
...
Also you can use docker toolbox as described in 
http://docs.docker.com/mac/step_one/. #  <-- This line should be indented
First make sure Virtualbox and docker toolbox are installed.
$ docker-machine create --driver virtualbox \
...
{quote}
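
For reference, the docker-machine workflow mentioned above looks roughly like the 
following (a sketch; the machine name and the driver are only examples):

{code}
# Create a VM for docker using the Virtualbox driver (the name is arbitrary)
$ docker-machine create --driver virtualbox hadoopdev
# Point the docker client at the new machine
$ eval $(docker-machine env hadoopdev)
# Then launch the build environment as usual
$ ./start-build-env.sh
{code}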

> Build instruction for docker toolbox
> 
>
> Key: HADOOP-12575
> URL: https://issues.apache.org/jira/browse/HADOOP-12575
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Trivial
>  Labels: docker, documentation
> Attachments: HADOOP-12575.01.patch, HADOOP-12575.02.patch
>
>
> Currently, docker on Mac OS X is mainly used via docker toolbox and 
> docker-machine; these tools deprecate boot2docker. 
> (https://docs.docker.com/engine/installation/mac/)
> It might be better to add instructions for docker toolbox and 
> docker-machine when using {{start-build-env.sh}}.





[jira] [Updated] (HADOOP-12575) Build instruction for docker toolbox

2015-11-19 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12575:

Target Version/s: 2.8.0

> Build instruction for docker toolbox
> 
>
> Key: HADOOP-12575
> URL: https://issues.apache.org/jira/browse/HADOOP-12575
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Trivial
>  Labels: docker, documentation
> Attachments: HADOOP-12575.01.patch, HADOOP-12575.02.patch, 
> HADOOP-12575.03.patch
>
>
> Currently, docker on Mac OS X is mainly used via docker toolbox and 
> docker-machine; these tools deprecate boot2docker. 
> (https://docs.docker.com/engine/installation/mac/)
> It might be better to add instructions for docker toolbox and 
> docker-machine when using {{start-build-env.sh}}.





[jira] [Updated] (HADOOP-12582) Using BytesWritable's getLength() and getBytes() instead of get() and getSize()

2015-11-19 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12582:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~ajisakaa] for your contribution, 
and thanks [~aw] and [~eepayne] for your suggestions.

Opened HADOOP-12585 as an umbrella jira to address the problem.

> Using BytesWritable's getLength() and getBytes() instead of get() and 
> getSize()
> ---
>
> Key: HADOOP-12582
> URL: https://issues.apache.org/jira/browse/HADOOP-12582
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
>  Labels: newbie
> Fix For: 2.8.0, 3.0.0
>
> Attachments: HADOOP-12582.00.patch
>
>
> BytesWritable's deprecated methods, get() and getSize(), are still used in 
> some tests: TestTFileSeek, TestTFileSeqFileComparison, TestSequenceFile, and 
> so on. We can also remove them if this targets 3.0.0.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8084/artifact/patchprocess/diff-compile-javac-root-jdk1.7.0_85.txt





[jira] [Updated] (HADOOP-12586) Dockerfile cannot work correctly behind a proxy

2015-11-19 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12586:

Status: Open  (was: Patch Available)

> Dockerfile cannot work correctly behind a proxy
> ---
>
> Key: HADOOP-12586
> URL: https://issues.apache.org/jira/browse/HADOOP-12586
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-12586.001.patch, HADOOP-12586.002.patch
>
>
> {{apt-get}} command fails because there is no way to set a proxy.
> {quote}
> Step 7 : RUN apt-get update && apt-get install --no-install-recommends -y 
> git curl ant make maven cmake gcc g++ protobuf-compiler libprotoc-dev 
> protobuf-c-compiler libprotobuf-dev build-essential libtool 
> zlib1g-dev pkg-config libssl-dev snappy libsnappy-dev bzip2 
> libbz2-dev libjansson-dev fuse libfuse-dev libcurl4-openssl-dev   
>   python python2.7 pylint openjdk-7-jdk doxygen
>  ---> Running in 072a97b7fa45
> Err http://archive.ubuntu.com trusty InRelease
>   
> Err http://archive.ubuntu.com trusty-updates InRelease
>   
> Err http://archive.ubuntu.com trusty-security InRelease
>   
> Err http://archive.ubuntu.com trusty Release.gpg
>   Cannot initiate the connection to archive.ubuntu.com:80 
> (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
> 2001:67c:1360:8c01::19 80]
> Err http://archive.ubuntu.com trusty-updates Release.gpg
>   Cannot initiate the connection to archive.ubuntu.com:80 
> (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
> 2001:67c:1360:8c01::19 80]
> Err http://archive.ubuntu.com trusty-security Release.gpg
>   Cannot initiate the connection to archive.ubuntu.com:80 
> (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
> 2001:67c:1360:8c01::19 80]
> Reading package lists...
> W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease  
> W: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/dists/trusty-updates/InRelease  
> W: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/dists/trusty-security/InRelease  
> W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/Release.gpg  
> Cannot initiate the connection to archive.ubuntu.com:80 
> (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
> 2001:67c:1360:8c01::19 80]
> W: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/dists/trusty-updates/Release.gpg  Cannot 
> initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - 
> connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]
> W: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/dists/trusty-security/Release.gpg  Cannot 
> initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - 
> connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]
> W: Some index files failed to download. They have been ignored, or old ones 
> used instead.
> {quote}





[jira] [Updated] (HADOOP-12586) Dockerfile cannot work correctly behind a proxy

2015-11-19 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12586:

Attachment: HADOOP-12586.002.patch

Fixed warnings reported by shellcheck.

> Dockerfile cannot work correctly behind a proxy
> ---
>
> Key: HADOOP-12586
> URL: https://issues.apache.org/jira/browse/HADOOP-12586
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-12586.001.patch, HADOOP-12586.002.patch
>
>
> {{apt-get}} command fails because there is no way to set a proxy.





[jira] [Commented] (HADOOP-12586) Dockerfile cannot work correctly behind a proxy

2015-11-19 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12586?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15013229#comment-15013229
 ] 

Tsuyoshi Ozawa commented on HADOOP-12586:
-

Cancelling the patch since the feature only works on docker 1.9 or later. 
https://blog.docker.com/2015/11/docker-1-9-production-ready-swarm-multi-host-networking/
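
For reference, once docker 1.9 can be assumed, the proxy could be passed at build 
time with build arguments, roughly as follows (a sketch; the proxy URL, image tag, 
and build-context path are illustrative, and the Dockerfile would need matching 
ARG declarations):

{code}
$ docker build \
    --build-arg http_proxy=http://proxy.example.com:8080 \
    --build-arg https_proxy=http://proxy.example.com:8080 \
    -t hadoop-build dev-support/docker
{code}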

> Dockerfile cannot work correctly behind a proxy
> ---
>
> Key: HADOOP-12586
> URL: https://issues.apache.org/jira/browse/HADOOP-12586
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-12586.001.patch, HADOOP-12586.002.patch
>
>
> {{apt-get}} command fails because there is no way to set a proxy.





[jira] [Comment Edited] (HADOOP-12578) Update hadoop_env_checks.sh to track changes from boot2docker to docker-machine

2015-11-19 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15015181#comment-15015181
 ] 

Tsuyoshi Ozawa edited comment on HADOOP-12578 at 11/20/15 4:17 AM:
---

[~aw] 

{quote}
Are we going to keep the message about boot2docker in hadoop_env_checks.sh then?
{quote}

I overlooked this point completely. +1 for the change. Let's track this problem 
on this issue. Thank you for the notification and sorry for my mistake.

{quote}
... and HADOOP-12575 assumes VB.  sigh
{quote}

Do you mean we should also add a description for vmware fusion? Any idea?


was (Author: ozawa):
[~aw] 

{quote}
Are we going to keep the message about boot2docker in hadoop_env_checks.sh then?
{quote}

I overlooked this point completely. +1 for the change. Let's track this problem 
on this issue.

{quote}
... and HADOOP-12575 assumes VB.  sigh
{quote}

Do you mean we should also add a description for vmware fusion? Any idea?

> Update hadoop_env_checks.sh to track changes from boot2docker to 
> docker-machine
> ---
>
> Key: HADOOP-12578
> URL: https://issues.apache.org/jira/browse/HADOOP-12578
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Allen Wittenauer
>
> boot2docker on OS X appears to be deprecated.  We should rewrite the 
> instructions, scripts, etc, to use docker-machine.





[jira] [Reopened] (HADOOP-12578) change from boot2docker to docker-machine

2015-11-19 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa reopened HADOOP-12578:
-

> change from boot2docker to docker-machine
> -
>
> Key: HADOOP-12578
> URL: https://issues.apache.org/jira/browse/HADOOP-12578
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Allen Wittenauer
>
> boot2docker on OS X appears to be deprecated.  We should rewrite the 
> instructions, scripts, etc, to use docker-machine.





[jira] [Commented] (HADOOP-12578) change from boot2docker to docker-machine

2015-11-19 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15015181#comment-15015181
 ] 

Tsuyoshi Ozawa commented on HADOOP-12578:
-

[~aw] 

{quote}
Are we going to keep the message about boot2docker in hadoop_env_checks.sh then?
{quote}

I overlooked this point completely. +1 for the change. Let's track this problem 
on this issue.

{quote}
... and HADOOP-12575 assumes VB.  sigh
{quote}

Do you mean we should also add a description for vmware fusion? Any idea?

> change from boot2docker to docker-machine
> -
>
> Key: HADOOP-12578
> URL: https://issues.apache.org/jira/browse/HADOOP-12578
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Allen Wittenauer
>
> boot2docker on OS X appears to be deprecated.  We should rewrite the 
> instructions, scripts, etc, to use docker-machine.





[jira] [Updated] (HADOOP-12578) Update hadoop_env_checks.sh to track changes from boot2docker to docker-machine

2015-11-19 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12578:

Summary: Update hadoop_env_checks.sh to track changes from boot2docker to 
docker-machine  (was: change from boot2docker to docker-machine)

> Update hadoop_env_checks.sh to track changes from boot2docker to 
> docker-machine
> ---
>
> Key: HADOOP-12578
> URL: https://issues.apache.org/jira/browse/HADOOP-12578
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Allen Wittenauer
>
> boot2docker on OS X appears to be deprecated.  We should rewrite the 
> instructions, scripts, etc, to use docker-machine.





[jira] [Commented] (HADOOP-12576) Same owner of maven repository on Docker container to build user

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011222#comment-15011222
 ] 

Tsuyoshi Ozawa commented on HADOOP-12576:
-

I've faced the same error when launching the script on a clean AWS instance. +1 
for the idea.
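
A minimal version of that idea (a sketch; assuming the check is added to 
{{start-build-env.sh}} before the container is launched):

{code}
# Create the local maven repository as the invoking user if it does not
# exist yet, so that docker (possibly running as root) does not create it
mkdir -p "${HOME}/.m2"
{code}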

> Same owner of maven repository on Docker container to build user
> 
>
> Key: HADOOP-12576
> URL: https://issues.apache.org/jira/browse/HADOOP-12576
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Trivial
>  Labels: docker
> Attachments: HADOOP-12576.01.patch
>
>
> When the local maven repository has not yet been created, the docker container 
> launched by {{start-build-env.sh}} creates it owned by the launching user. The 
> {{docker}} command is often run by the root user unless docker unix groups are 
> configured. In that case, the maven local repository is created by the root 
> user and the build inside the container fails. 
> It is better to ensure the maven local repository is created by the user who 
> is building, before launching the docker container, if it does not already 
> exist.





[jira] [Commented] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011352#comment-15011352
 ] 

Tsuyoshi Ozawa commented on HADOOP-8419:


Backported this to branch-2, targeting 2.8.0.

> GzipCodec NPE upon reset with IBM JDK
> -
>
> Key: HADOOP-8419
> URL: https://issues.apache.org/jira/browse/HADOOP-8419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.0.3
>Reporter: Luke Lu
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Fix For: 1.1.2, 2.0.5-alpha, 2.8.0
>
> Attachments: HADOOP-8419-branch-1.patch, 
> HADOOP-8419-branch1-v2.patch, HADOOP-8419-trunk-v2.patch, 
> HADOOP-8419-trunk.patch
>
>
> The GzipCodec will NPE upon reset after finish when the native zlib codec is 
> not loaded. When the native zlib is loaded the codec creates a 
> CompressorOutputStream that doesn't have the problem, otherwise, the 
> GZipCodec uses GZIPOutputStream which is extended to provide the resetState 
> method. Since IBM JDK 6 SR9 FP2 including the current JDK 6 SR10, 
> GZIPOutputStream#finish will release the underlying deflater, which causes 
> NPE upon reset. This seems to be an IBM JDK quirk, as Sun JDK and OpenJDK 
> don't have this issue.





[jira] [Updated] (HADOOP-12564) Upgrade JUnit3 TestCase to JUnit 4 in org.apache.hadoop.io package

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12564:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~cote] for your contribution and 
thanks [~ajisakaa] for your reviews.

>  Upgrade JUnit3 TestCase to JUnit 4 in org.apache.hadoop.io package
> ---
>
> Key: HADOOP-12564
> URL: https://issues.apache.org/jira/browse/HADOOP-12564
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Trivial
> Fix For: 2.8.0, 3.0.0
>
> Attachments: MAPREDUCE-6505-1.patch, MAPREDUCE-6505-2.patch, 
> MAPREDUCE-6505-3.patch, MAPREDUCE-6505-4.patch, MAPREDUCE-6505-5.patch, 
> MAPREDUCE-6505-6.patch
>
>
> Migrating just the io test cases 





[jira] [Updated] (HADOOP-12564) Upgrade JUnit3 TestCase to JUnit 4 in org.apache.hadoop.io package

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12564:

Summary:  Upgrade JUnit3 TestCase to JUnit 4 in org.apache.hadoop.io 
package  (was: Upgrade JUnit3 TestCase to JUnit 4 for tests of 
org.apache.hadoop.io package)

>  Upgrade JUnit3 TestCase to JUnit 4 in org.apache.hadoop.io package
> ---
>
> Key: HADOOP-12564
> URL: https://issues.apache.org/jira/browse/HADOOP-12564
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Trivial
> Attachments: MAPREDUCE-6505-1.patch, MAPREDUCE-6505-2.patch, 
> MAPREDUCE-6505-3.patch, MAPREDUCE-6505-4.patch, MAPREDUCE-6505-5.patch, 
> MAPREDUCE-6505-6.patch
>
>
> Migrating just the io test cases 





[jira] [Commented] (HADOOP-12564) Upgrade JUnit3 TestCase to JUnit 4 for tests of org.apache.hadoop.io package

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011286#comment-15011286
 ] 

Tsuyoshi Ozawa commented on HADOOP-12564:
-

+1, checking this in.

The javac warning was not introduced by this jira; HADOOP-12582 was created to 
address it. I'm checking the test failure locally. 



> Upgrade JUnit3 TestCase to JUnit 4 for tests of org.apache.hadoop.io package
> 
>
> Key: HADOOP-12564
> URL: https://issues.apache.org/jira/browse/HADOOP-12564
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Trivial
> Attachments: MAPREDUCE-6505-1.patch, MAPREDUCE-6505-2.patch, 
> MAPREDUCE-6505-3.patch, MAPREDUCE-6505-4.patch, MAPREDUCE-6505-5.patch, 
> MAPREDUCE-6505-6.patch
>
>
> Migrating just the io test cases 





[jira] [Updated] (HADOOP-12564) Upgrade JUnit3 TestCase to JUnit 4 in org.apache.hadoop.io package

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12564:

Target Version/s: 2.8.0, 3.0.0

>  Upgrade JUnit3 TestCase to JUnit 4 in org.apache.hadoop.io package
> ---
>
> Key: HADOOP-12564
> URL: https://issues.apache.org/jira/browse/HADOOP-12564
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Trivial
> Attachments: MAPREDUCE-6505-1.patch, MAPREDUCE-6505-2.patch, 
> MAPREDUCE-6505-3.patch, MAPREDUCE-6505-4.patch, MAPREDUCE-6505-5.patch, 
> MAPREDUCE-6505-6.patch
>
>
> Migrating just the io test cases 





[jira] [Created] (HADOOP-12582) Using BytesWritable's getLength() and getBytes() instead of get() and getSize()

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-12582:
---

 Summary: Using BytesWritable's getLength() and getBytes() instead 
of get() and getSize()
 Key: HADOOP-12582
 URL: https://issues.apache.org/jira/browse/HADOOP-12582
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tsuyoshi Ozawa


BytesWritable's deprecated methods, get() and getSize(), are still used in 
some tests: TestTFileSeek, TestTFileSeqFileComparison, TestSequenceFile, and so 
on. We can also remove them if this targets 3.0.0.

https://builds.apache.org/job/PreCommit-HADOOP-Build/8084/artifact/patchprocess/diff-compile-javac-root-jdk1.7.0_85.txt
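
The migration itself is mechanical; a sketch of the change (the variable name is 
illustrative, the accessors are from the BytesWritable API):

{code}
BytesWritable value = ...;
// Deprecated accessors:
//   byte[] buf = value.get();
//   int len = value.getSize();
// Replacements:
byte[] buf = value.getBytes();  // backing buffer; may be longer than the data
int len = value.getLength();    // number of valid bytes in the buffer
{code}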





[jira] [Commented] (HADOOP-12564) Upgrade JUnit3 TestCase to JUnit 4 for tests of org.apache.hadoop.io package

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011293#comment-15011293
 ] 

Tsuyoshi Ozawa commented on HADOOP-12564:
-

I also confirmed that the failing tests, hadoop.metrics2.impl.TestGangliaMetrics 
and hadoop.fs.shell.TestCopyPreserveFlag, pass locally. 

> Upgrade JUnit3 TestCase to JUnit 4 for tests of org.apache.hadoop.io package
> 
>
> Key: HADOOP-12564
> URL: https://issues.apache.org/jira/browse/HADOOP-12564
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Trivial
> Attachments: MAPREDUCE-6505-1.patch, MAPREDUCE-6505-2.patch, 
> MAPREDUCE-6505-3.patch, MAPREDUCE-6505-4.patch, MAPREDUCE-6505-5.patch, 
> MAPREDUCE-6505-6.patch
>
>
> Migrating just the io test cases 





[jira] [Updated] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-8419:
---
Fix Version/s: 2.8.0

> GzipCodec NPE upon reset with IBM JDK
> -
>
> Key: HADOOP-8419
> URL: https://issues.apache.org/jira/browse/HADOOP-8419
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.0.3
>Reporter: Luke Lu
>Assignee: Yu Li
>  Labels: gzip, ibm-jdk
> Fix For: 1.1.2, 2.0.5-alpha, 2.8.0
>
> Attachments: HADOOP-8419-branch-1.patch, 
> HADOOP-8419-branch1-v2.patch, HADOOP-8419-trunk-v2.patch, 
> HADOOP-8419-trunk.patch
>
>
> The GzipCodec will NPE upon reset after finish when the native zlib codec is 
> not loaded. When the native zlib is loaded the codec creates a 
> CompressorOutputStream that doesn't have the problem, otherwise, the 
> GZipCodec uses GZIPOutputStream which is extended to provide the resetState 
> method. Since IBM JDK 6 SR9 FP2 including the current JDK 6 SR10, 
> GZIPOutputStream#finish will release the underlying deflater, which causes 
> NPE upon reset. This seems to be an IBM JDK quirk, as Sun JDK and OpenJDK 
> don't have this issue.





[jira] [Commented] (HADOOP-12576) Same owner of maven repository on Docker container to build user

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15011557#comment-15011557
 ] 

Tsuyoshi Ozawa commented on HADOOP-12576:
-

Haohui, thank you for the clarification. I got the point.

How about adding a note in BUILDING.txt about adding the user who runs 
docker to the docker group? The following works on Ubuntu 14.04:

{code}
$ sudo gpasswd -a ${USER} docker
$ sudo service docker.io restart
# then log out and log back in
{code}

> Same owner of maven repository on Docker container to build user
> 
>
> Key: HADOOP-12576
> URL: https://issues.apache.org/jira/browse/HADOOP-12576
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Trivial
>  Labels: docker
> Attachments: HADOOP-12576.01.patch
>
>
> When the local maven repository has not yet been created, the docker container 
> launched by {{start-build-env.sh}} creates it owned by the launching user. The 
> {{docker}} command is often run by the root user unless docker unix groups are 
> configured. In that case, the maven local repository is created by the root 
> user and the build inside the container fails. 
> It is better to ensure the maven local repository is created by the user who 
> is building, before launching the docker container, if it does not already 
> exist.





[jira] [Updated] (HADOOP-12576) Same owner of maven repository on Docker container to build user

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12576:

Status: Open  (was: Patch Available)

Cancelling the patch based on the comment above.

> Same owner of maven repository on Docker container to build user
> 
>
> Key: HADOOP-12576
> URL: https://issues.apache.org/jira/browse/HADOOP-12576
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Trivial
>  Labels: docker
> Attachments: HADOOP-12576.01.patch
>
>
> When the local Maven repository has not yet been created, the docker container 
> launched by {{start-build-env.sh}} creates it owned by the launching user. The 
> {{docker}} command is often run by the root user unless docker unix groups are 
> configured. In that case, the local Maven repository is created by the root 
> user and the build process inside the container fails.
> It is better to ensure that the local Maven repository is created by the user 
> who is trying to build, before launching the docker container, if it does not 
> already exist.





[jira] [Updated] (HADOOP-12576) Same owner of maven repository on Docker container to build user

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12576:

Affects Version/s: 3.0.0

> Same owner of maven repository on Docker container to build user
> 
>
> Key: HADOOP-12576
> URL: https://issues.apache.org/jira/browse/HADOOP-12576
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Trivial
>  Labels: docker
> Attachments: HADOOP-12576.01.patch
>
>
> When the local Maven repository has not yet been created, the docker container 
> launched by {{start-build-env.sh}} creates it owned by the launching user. The 
> {{docker}} command is often run by the root user unless docker unix groups are 
> configured. In that case, the local Maven repository is created by the root 
> user and the build process inside the container fails.
> It is better to ensure that the local Maven repository is created by the user 
> who is trying to build, before launching the docker container, if it does not 
> already exist.





[jira] [Commented] (HADOOP-12576) Same owner of maven repository on Docker container to build user

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15012789#comment-15012789
 ] 

Tsuyoshi Ozawa commented on HADOOP-12576:
-

[~kaisasak] 

{quote}
in my environment (Ubuntu 14.04) docker service name is docker
{quote}

Oh, that's my mistake. {{sudo service docker restart}} is correct. Do you mind 
creating a patch to update the document? I'll also check HADOOP-12575. Thank 
you for reporting.

> Same owner of maven repository on Docker container to build user
> 
>
> Key: HADOOP-12576
> URL: https://issues.apache.org/jira/browse/HADOOP-12576
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Trivial
>  Labels: docker
> Attachments: HADOOP-12576.01.patch
>
>
> When the local Maven repository has not yet been created, the docker container 
> launched by {{start-build-env.sh}} creates it owned by the launching user. The 
> {{docker}} command is often run by the root user unless docker unix groups are 
> configured. In that case, the local Maven repository is created by the root 
> user and the build process inside the container fails.
> It is better to ensure that the local Maven repository is created by the user 
> who is trying to build, before launching the docker container, if it does not 
> already exist.





[jira] [Updated] (HADOOP-12585) [Umbrella] Removing deprecated methods in 3.0.0 release

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12585:

Summary: [Umbrella] Removing deprecated methods in 3.0.0 release  (was: 
Removing deprecated methods in 3.0.0 release)

> [Umbrella] Removing deprecated methods in 3.0.0 release
> ---
>
> Key: HADOOP-12585
> URL: https://issues.apache.org/jira/browse/HADOOP-12585
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>
> There are lots of deprecated methods in Hadoop - the 3.0.0 release is a good 
> time to remove them.





[jira] [Created] (HADOOP-12585) Removing deprecated methods in 3.0.0 release

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-12585:
---

 Summary: Removing deprecated methods in 3.0.0 release
 Key: HADOOP-12585
 URL: https://issues.apache.org/jira/browse/HADOOP-12585
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tsuyoshi Ozawa


There are lots of deprecated methods in Hadoop - the 3.0.0 release is a good 
time to remove them.





[jira] [Updated] (HADOOP-12585) [Umbrella] Removing deprecated methods in 3.0.0 release

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12585:

Target Version/s: 3.0.0

> [Umbrella] Removing deprecated methods in 3.0.0 release
> ---
>
> Key: HADOOP-12585
> URL: https://issues.apache.org/jira/browse/HADOOP-12585
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>
> There are lots of deprecated methods in Hadoop - the 3.0.0 release is a good 
> time to remove them.





[jira] [Commented] (HADOOP-12585) [Umbrella] Removing deprecated methods in 3.0.0 release

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15012808#comment-15012808
 ] 

Tsuyoshi Ozawa commented on HADOOP-12585:
-

This suggestion was made by [~aw] on HADOOP-12582. As a first step, I prefer to 
remove {{the usages}} of all deprecated methods in branch-2, to make it clear 
that substitutes for the deprecated methods exist. After that, we can remove all 
the deprecated methods on trunk.

> [Umbrella] Removing deprecated methods in 3.0.0 release
> ---
>
> Key: HADOOP-12585
> URL: https://issues.apache.org/jira/browse/HADOOP-12585
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>
> There are lots of deprecated methods in Hadoop - the 3.0.0 release is a good 
> time to remove them.





[jira] [Updated] (HADOOP-12585) [Umbrella] Removing the usages of deprecated methods

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12585:

Summary: [Umbrella] Removing the usages of deprecated methods  (was: 
[Umbrella] Removing the suages of deprecated methods)

> [Umbrella] Removing the usages of deprecated methods
> 
>
> Key: HADOOP-12585
> URL: https://issues.apache.org/jira/browse/HADOOP-12585
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>
> There are lots of usages of deprecated methods in Hadoop - we should avoid 
> using them.





[jira] [Comment Edited] (HADOOP-12585) [Umbrella] Removing the suages of deprecated methods

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15012808#comment-15012808
 ] 

Tsuyoshi Ozawa edited comment on HADOOP-12585 at 11/19/15 5:14 AM:
---

This suggestion was made by [~aw] on HADOOP-12582.

This can be a first step toward removing the deprecated methods themselves. In 
any case, I prefer to first remove {{the usages}} of all deprecated methods in 
branch-2, to make it clear that substitutes for the deprecated methods exist. 
After that, we can remove all the deprecated methods on trunk.


was (Author: ozawa):
This suggestion is by [~aw] on HADOOP-12582. At first, I prefer to remove {{the 
usages}} of all deprecated methods in branch-2 for making clear that we have 
substitution of deprecated methods. After that, we can remove all deprecated 
methods on trunk.

> [Umbrella] Removing the suages of deprecated methods
> 
>
> Key: HADOOP-12585
> URL: https://issues.apache.org/jira/browse/HADOOP-12585
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>
> There are lots of usages of deprecated methods in Hadoop - we should avoid 
> using them.





[jira] [Commented] (HADOOP-12378) Fix findbugs warnings in hadoop-tools module

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15012825#comment-15012825
 ] 

Tsuyoshi Ozawa commented on HADOOP-12378:
-

{quote}
the patch removes protected fields in abstract class
{quote}

In this case, DataJoinMapperBase is a public class that is meant to be extended. 
These fields are useful to MapReduce jobs, so we can say the warnings are false 
positives. For that reason, we can ignore the warnings here. Could you update 
the patch to exclude them via findbugs-exclude.xml?
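For illustration, the exclude file could look roughly like the sketch below. The package of DataJoinMapperBase and the bug pattern name here are assumptions; the actual pattern should be taken from the attached findbugs report.

```xml
<!-- findbugs-exclude.xml (sketch; class package and pattern are assumptions) -->
<FindBugsFilter>
  <Match>
    <Class name="org.apache.hadoop.contrib.utils.join.DataJoinMapperBase"/>
    <!-- Suppress the protected-field warnings reported on this class -->
    <Bug pattern="URF_UNREAD_PUBLIC_OR_PROTECTED_FIELD"/>
  </Match>
</FindBugsFilter>
```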

> Fix findbugs warnings in hadoop-tools module
> 
>
> Key: HADOOP-12378
> URL: https://issues.apache.org/jira/browse/HADOOP-12378
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
> Attachments: HADOOP-12378.001.patch, findbugsAnt.html, 
> findbugsDatajoin.html
>
>
> There are 2 warnings in hadoop-datajoin module and 4 warnings in hadoop-ant 
> module.





[jira] [Created] (HADOOP-12586) Dockerfile cannot work correctly behind a proxy

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-12586:
---

 Summary: Dockerfile cannot work correctly behind a proxy
 Key: HADOOP-12586
 URL: https://issues.apache.org/jira/browse/HADOOP-12586
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Tsuyoshi Ozawa


The {{apt-get}} command fails because there is no way to configure a proxy.

{quote}
Step 7 : RUN apt-get update && apt-get install --no-install-recommends -y 
git curl ant make maven cmake gcc g++ protobuf-compiler libprotoc-dev   
  protobuf-c-compiler libprotobuf-dev build-essential libtool 
zlib1g-dev pkg-config libssl-dev snappy libsnappy-dev bzip2 libbz2-dev  
   libjansson-dev fuse libfuse-dev libcurl4-openssl-dev python 
python2.7 pylint openjdk-7-jdk doxygen
 ---> Running in 072a97b7fa45
Err http://archive.ubuntu.com trusty InRelease
  
Err http://archive.ubuntu.com trusty-updates InRelease
  
Err http://archive.ubuntu.com trusty-security InRelease
  
Err http://archive.ubuntu.com trusty Release.gpg
  Cannot initiate the connection to archive.ubuntu.com:80 
(2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
2001:67c:1360:8c01::19 80]
Err http://archive.ubuntu.com trusty-updates Release.gpg
  Cannot initiate the connection to archive.ubuntu.com:80 
(2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
2001:67c:1360:8c01::19 80]
Err http://archive.ubuntu.com trusty-security Release.gpg
  Cannot initiate the connection to archive.ubuntu.com:80 
(2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
2001:67c:1360:8c01::19 80]
Reading package lists...
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease  

W: Failed to fetch 
http://archive.ubuntu.com/ubuntu/dists/trusty-updates/InRelease  

W: Failed to fetch 
http://archive.ubuntu.com/ubuntu/dists/trusty-security/InRelease  

W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/Release.gpg  
Cannot initiate the connection to archive.ubuntu.com:80 
(2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
2001:67c:1360:8c01::19 80]

W: Failed to fetch 
http://archive.ubuntu.com/ubuntu/dists/trusty-updates/Release.gpg  Cannot 
initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - 
connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]

W: Failed to fetch 
http://archive.ubuntu.com/ubuntu/dists/trusty-security/Release.gpg  Cannot 
initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - 
connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]

W: Some index files failed to download. They have been ignored, or old ones 
used instead.
{quote}





[jira] [Assigned] (HADOOP-12586) Dockerfile cannot work correctly behind a proxy

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa reassigned HADOOP-12586:
---

Assignee: Tsuyoshi Ozawa

> Dockerfile cannot work correctly behind a proxy
> ---
>
> Key: HADOOP-12586
> URL: https://issues.apache.org/jira/browse/HADOOP-12586
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>
> The {{apt-get}} command fails because there is no way to configure a proxy.
> {quote}
> Step 7 : RUN apt-get update && apt-get install --no-install-recommends -y 
> git curl ant make maven cmake gcc g++ protobuf-compiler libprotoc-dev 
> protobuf-c-compiler libprotobuf-dev build-essential libtool 
> zlib1g-dev pkg-config libssl-dev snappy libsnappy-dev bzip2 
> libbz2-dev libjansson-dev fuse libfuse-dev libcurl4-openssl-dev   
>   python python2.7 pylint openjdk-7-jdk doxygen
>  ---> Running in 072a97b7fa45
> Err http://archive.ubuntu.com trusty InRelease
>   
> Err http://archive.ubuntu.com trusty-updates InRelease
>   
> Err http://archive.ubuntu.com trusty-security InRelease
>   
> Err http://archive.ubuntu.com trusty Release.gpg
>   Cannot initiate the connection to archive.ubuntu.com:80 
> (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
> 2001:67c:1360:8c01::19 80]
> Err http://archive.ubuntu.com trusty-updates Release.gpg
>   Cannot initiate the connection to archive.ubuntu.com:80 
> (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
> 2001:67c:1360:8c01::19 80]
> Err http://archive.ubuntu.com trusty-security Release.gpg
>   Cannot initiate the connection to archive.ubuntu.com:80 
> (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
> 2001:67c:1360:8c01::19 80]
> Reading package lists...
> W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease  
> W: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/dists/trusty-updates/InRelease  
> W: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/dists/trusty-security/InRelease  
> W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/Release.gpg  
> Cannot initiate the connection to archive.ubuntu.com:80 
> (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
> 2001:67c:1360:8c01::19 80]
> W: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/dists/trusty-updates/Release.gpg  Cannot 
> initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - 
> connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]
> W: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/dists/trusty-security/Release.gpg  Cannot 
> initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - 
> connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]
> W: Some index files failed to download. They have been ignored, or old ones 
> used instead.
> {quote}





[jira] [Commented] (HADOOP-12575) Build instruction for docker toolbox

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15012994#comment-15012994
 ] 

Tsuyoshi Ozawa commented on HADOOP-12575:
-

[~lewuathe] thank you for taking this issue. The following are my review 
comments - could you update the patch?

1. This configuration looks to be for the softlayer driver. We can remove it.
{code}
+--softlayer-memory "4096" \
{code}

2. The docker machine name {{default}} can conflict with an existing machine. As 
a workaround, it's better to name it {{hadoopdev}}.
3. Why 4094? Is it a typo of 4096?
{code}
+--virtualbox-memory "4094" default
{code}
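Putting the three comments together, the suggested invocation might look as follows. This is a sketch only: the machine name and memory size come from the comments above, and the command is composed and printed rather than executed here.

```shell
# Sketch of the suggested docker-machine invocation: VirtualBox driver,
# a non-conflicting machine name, and 4096 MB of memory.
machine_name="hadoopdev"
memory_mb=4096
cmd="docker-machine create --driver virtualbox \
  --virtualbox-memory ${memory_mb} ${machine_name}"
echo "${cmd}"
```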



> Build instruction for docker toolbox
> 
>
> Key: HADOOP-12575
> URL: https://issues.apache.org/jira/browse/HADOOP-12575
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Trivial
>  Labels: docker, documentation
> Attachments: HADOOP-12575.01.patch
>
>
> Currently, docker on Mac OS X is mainly used via docker toolbox and 
> docker-machine; these tools deprecate boot2docker. 
> (https://docs.docker.com/engine/installation/mac/)
> It might be better to append instructions for docker toolbox and 
> docker-machine when using {{start-build-env.sh}}.





[jira] [Updated] (HADOOP-12586) Dockerfile cannot work correctly behind a proxy

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12586:

Status: Patch Available  (was: Open)

> Dockerfile cannot work correctly behind a proxy
> ---
>
> Key: HADOOP-12586
> URL: https://issues.apache.org/jira/browse/HADOOP-12586
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-12586.001.patch
>
>
> The {{apt-get}} command fails because there is no way to configure a proxy.
> {quote}
> Step 7 : RUN apt-get update && apt-get install --no-install-recommends -y 
> git curl ant make maven cmake gcc g++ protobuf-compiler libprotoc-dev 
> protobuf-c-compiler libprotobuf-dev build-essential libtool 
> zlib1g-dev pkg-config libssl-dev snappy libsnappy-dev bzip2 
> libbz2-dev libjansson-dev fuse libfuse-dev libcurl4-openssl-dev   
>   python python2.7 pylint openjdk-7-jdk doxygen
>  ---> Running in 072a97b7fa45
> Err http://archive.ubuntu.com trusty InRelease
>   
> Err http://archive.ubuntu.com trusty-updates InRelease
>   
> Err http://archive.ubuntu.com trusty-security InRelease
>   
> Err http://archive.ubuntu.com trusty Release.gpg
>   Cannot initiate the connection to archive.ubuntu.com:80 
> (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
> 2001:67c:1360:8c01::19 80]
> Err http://archive.ubuntu.com trusty-updates Release.gpg
>   Cannot initiate the connection to archive.ubuntu.com:80 
> (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
> 2001:67c:1360:8c01::19 80]
> Err http://archive.ubuntu.com trusty-security Release.gpg
>   Cannot initiate the connection to archive.ubuntu.com:80 
> (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
> 2001:67c:1360:8c01::19 80]
> Reading package lists...
> W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease  
> W: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/dists/trusty-updates/InRelease  
> W: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/dists/trusty-security/InRelease  
> W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/Release.gpg  
> Cannot initiate the connection to archive.ubuntu.com:80 
> (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
> 2001:67c:1360:8c01::19 80]
> W: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/dists/trusty-updates/Release.gpg  Cannot 
> initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - 
> connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]
> W: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/dists/trusty-security/Release.gpg  Cannot 
> initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - 
> connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]
> W: Some index files failed to download. They have been ignored, or old ones 
> used instead.
> {quote}





[jira] [Updated] (HADOOP-12586) Dockerfile cannot work correctly behind a proxy

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12586:

Attachment: HADOOP-12586.001.patch

Changed to pass environment variables via --build-arg parameters.
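For reference, a hedged sketch of how proxy settings would be forwarded at build time via {{--build-arg}}. The proxy URL, image tag, and build context below are illustrative placeholders, not part of the patch; the command is composed and printed rather than executed.

```shell
# Forward host proxy settings into the image build via --build-arg.
# The proxy URL and build context are illustrative placeholders.
proxy="http://proxy.example.com:3128"
build_cmd="docker build \
  --build-arg http_proxy=${proxy} \
  --build-arg https_proxy=${proxy} \
  -t hadoop-build dev-support/docker"
echo "${build_cmd}"
```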

> Dockerfile cannot work correctly behind a proxy
> ---
>
> Key: HADOOP-12586
> URL: https://issues.apache.org/jira/browse/HADOOP-12586
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-12586.001.patch
>
>
> The {{apt-get}} command fails because there is no way to configure a proxy.
> {quote}
> Step 7 : RUN apt-get update && apt-get install --no-install-recommends -y 
> git curl ant make maven cmake gcc g++ protobuf-compiler libprotoc-dev 
> protobuf-c-compiler libprotobuf-dev build-essential libtool 
> zlib1g-dev pkg-config libssl-dev snappy libsnappy-dev bzip2 
> libbz2-dev libjansson-dev fuse libfuse-dev libcurl4-openssl-dev   
>   python python2.7 pylint openjdk-7-jdk doxygen
>  ---> Running in 072a97b7fa45
> Err http://archive.ubuntu.com trusty InRelease
>   
> Err http://archive.ubuntu.com trusty-updates InRelease
>   
> Err http://archive.ubuntu.com trusty-security InRelease
>   
> Err http://archive.ubuntu.com trusty Release.gpg
>   Cannot initiate the connection to archive.ubuntu.com:80 
> (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
> 2001:67c:1360:8c01::19 80]
> Err http://archive.ubuntu.com trusty-updates Release.gpg
>   Cannot initiate the connection to archive.ubuntu.com:80 
> (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
> 2001:67c:1360:8c01::19 80]
> Err http://archive.ubuntu.com trusty-security Release.gpg
>   Cannot initiate the connection to archive.ubuntu.com:80 
> (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
> 2001:67c:1360:8c01::19 80]
> Reading package lists...
> W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease  
> W: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/dists/trusty-updates/InRelease  
> W: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/dists/trusty-security/InRelease  
> W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/Release.gpg  
> Cannot initiate the connection to archive.ubuntu.com:80 
> (2001:67c:1360:8c01::19). - connect (101: Network is unreachable) [IP: 
> 2001:67c:1360:8c01::19 80]
> W: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/dists/trusty-updates/Release.gpg  Cannot 
> initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - 
> connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]
> W: Failed to fetch 
> http://archive.ubuntu.com/ubuntu/dists/trusty-security/Release.gpg  Cannot 
> initiate the connection to archive.ubuntu.com:80 (2001:67c:1360:8c01::19). - 
> connect (101: Network is unreachable) [IP: 2001:67c:1360:8c01::19 80]
> W: Some index files failed to download. They have been ignored, or old ones 
> used instead.
> {quote}





[jira] [Updated] (HADOOP-12585) [Umbrella] Removing the suages of deprecated methods

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12585:

Summary: [Umbrella] Removing the suages of deprecated methods  (was: 
[Umbrella] Removing deprecated methods in 3.0.0 release)

> [Umbrella] Removing the suages of deprecated methods
> 
>
> Key: HADOOP-12585
> URL: https://issues.apache.org/jira/browse/HADOOP-12585
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>
> There are lots of deprecated methods in Hadoop - the 3.0.0 release is a good 
> time to remove them.





[jira] [Updated] (HADOOP-12585) [Umbrella] Removing the suages of deprecated methods

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12585:

Description: There are lots usages of deprecated methods in hadoop - we 
should avoid using them.  (was: There are lots deprecated methods in hadoop - 
3.0.0 release is a good time to remove them.)

> [Umbrella] Removing the suages of deprecated methods
> 
>
> Key: HADOOP-12585
> URL: https://issues.apache.org/jira/browse/HADOOP-12585
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tsuyoshi Ozawa
>
> There are lots of usages of deprecated methods in Hadoop - we should avoid 
> using them.





[jira] [Updated] (HADOOP-12582) Using BytesWritable's getLength() and getBytes() instead of get() and getSize()

2015-11-18 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12582:

Issue Type: Sub-task  (was: Improvement)
Parent: HADOOP-12585

> Using BytesWritable's getLength() and getBytes() instead of get() and 
> getSize()
> ---
>
> Key: HADOOP-12582
> URL: https://issues.apache.org/jira/browse/HADOOP-12582
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
>  Labels: newbie
> Attachments: HADOOP-12582.00.patch
>
>
> BytesWritable's deprecated methods,  get() and getSize(), are still used in 
> some tests: TestTFileSeek, TestTFileSeqFileComparison, TestSequenceFile, and 
> so on. We can also remove them if this is targeted to 3.0.0.
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8084/artifact/patchprocess/diff-compile-javac-root-jdk1.7.0_85.txt





[jira] [Commented] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-11-17 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15010043#comment-15010043
 ] 

Tsuyoshi Ozawa commented on HADOOP-12348:
-

[~brahmareddy] thank you for the notification. I'll backport them.

> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch
>
>
> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit 
> parameter. MetricsSourceAdapter expects time unit millisecond  for 
> jmxCacheTTL but MetricsSystemImpl  passes time unit second to 
> MetricsSourceAdapter constructor.
> {code}
> jmxCacheTS = Time.now() + jmxCacheTTL;
>   /**
>* Current system time.  Do not use this to calculate a duration or interval
>* to sleep, because it will be broken by settimeofday.  Instead, use
>* monotonicNow.
>* @return current time in msec.
>*/
>   public static long now() {
> return System.currentTimeMillis();
>   }
> {code}





[jira] [Commented] (HADOOP-12566) Add NullGroupMapping

2015-11-16 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15006342#comment-15006342
 ] 

Tsuyoshi Ozawa commented on HADOOP-12566:
-

[~d@gmx.net] thank you for taking this issue. I haven't looked at the patch in 
depth yet, but could you fix the javac warnings?

{quote}
[WARNING] 
/testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestNullGroupsMapping.java:
 
/testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestNullGroupsMapping.java
 uses unchecked or unsafe operations.
[WARNING] 
/testptch/hadoop/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestNullGroupsMapping.java:
 Recompile with -Xlint:unchecked for details.
{quote}

> Add NullGroupMapping
> 
>
> Key: HADOOP-12566
> URL: https://issues.apache.org/jira/browse/HADOOP-12566
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HADOOP-12566.001.patch
>
>
> Add a {{NullGroupMapping}} for cases where user groups are not used.  
> {{ShellBasedUnixGroupMapping}} can be used in places where latency is not 
> important.  In places like starting a container, it's worth it to avoid the 
> extra fork and exec.





[jira] [Commented] (HADOOP-12482) Race condition in JMX cache update

2015-11-15 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005951#comment-15005951
 ] 

Tsuyoshi Ozawa commented on HADOOP-12482:
-

Oops, it's my mistake. Thank you for fixing it, Akira.

> Race condition in JMX cache update
> --
>
> Key: HADOOP-12482
> URL: https://issues.apache.org/jira/browse/HADOOP-12482
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
> Fix For: 2.8.0, 3.0.0, 2.7.3
>
> Attachments: HADOOP-12482.001.patch, HADOOP-12482.002.patch, 
> HADOOP-12482.003.patch, HADOOP-12482.004.patch, HADOOP-12482.005.patch, 
> HADOOP-12482.006.patch
>
>
> updateJmxCache() was updated in HADOOP-11301. However the patch introduced a 
> race condition. In updateJmxCache() function in MetricsSourceAdapter.java:
> {code:java}
>   private void updateJmxCache() {
> boolean getAllMetrics = false;
> synchronized (this) {
>   if (Time.now() - jmxCacheTS >= jmxCacheTTL) {
> // temporarilly advance the expiry while updating the cache
> jmxCacheTS = Time.now() + jmxCacheTTL;
> if (lastRecs == null) {
>   getAllMetrics = true;
> }
>   } else {
> return;
>   }
>   if (getAllMetrics) {
> MetricsCollectorImpl builder = new MetricsCollectorImpl();
> getMetrics(builder, true);
>   }
>   updateAttrCache();
>   if (getAllMetrics) {
> updateInfoCache();
>   }
>   jmxCacheTS = Time.now();
>   lastRecs = null; // in case regular interval update is not running
> }
>   }
> {code}
> Notice that getAllMetrics is set to true when:
> # jmxCacheTTL has passed
> # lastRecs == null
> lastRecs is set to null in the same function, but gets reassigned by 
> getMetrics().
> However getMetrics() can be called from a different thread:
> # MetricsSystemImpl.onTimerEvent()
> # MetricsSystemImpl.publishMetricsNow()
> Consider the following sequence:
> # updateJmxCache() is called by getMBeanInfo() from a thread getting cached 
> info. 
> ** lastRecs is set to null.
> # metrics sources is updated with new value/field.
> # getMetrics() is called by publishMetricsNow() or onTimerEvent() from a 
> different thread getting the latest metrics. 
> ** lastRecs is updated (!= null).
> # jmxCacheTTL passed.
> # updateJmxCache() is called again via getMBeanInfo().
> ** However, because lastRecs has already been updated (!= null), getAllMetrics 
> will not be set to true, so updateInfoCache() is not called and getMBeanInfo() 
> returns the old cached info.
> We ran into this issue on a cluster where a new metric did not get published 
> until much later.
> The case can be made worse by a periodic call to getMetrics() (driven by an 
> external program or script). In such a case, getMBeanInfo() may never be able 
> to retrieve the new record.
> The desired behavior is that updateJmxCache() is guaranteed to call 
> updateInfoCache() once after jmxCacheTTL if lastRecs was set to null by 
> updateJmxCache() itself.
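That last guarantee can be sketched with a toy model of the decision logic (hypothetical names, not the actual HADOOP-12482 patch): the refresh decision keys off a flag that only updateJmxCache() itself toggles, so a concurrent getMetrics() repopulating lastRecs can no longer suppress the info-cache refresh.

```java
// Toy model of MetricsSourceAdapter's refresh decision. lastRecs is just a
// placeholder Object here; times are passed in explicitly for determinism.
class JmxCacheSketch {
    private static final long JMX_CACHE_TTL = 10_000;  // ms
    private long jmxCacheTS = 0;
    private Object lastRecs = null;
    private boolean lastRecsCleared = true;  // true iff *we* cleared lastRecs

    // Models getMetrics() repopulating lastRecs from another thread.
    synchronized void publish(Object recs) {
        lastRecs = recs;
    }

    // Returns true when the info cache would be fully refreshed.
    synchronized boolean updateJmxCache(long now) {
        if (now - jmxCacheTS < JMX_CACHE_TTL) {
            return false;  // cache still fresh
        }
        jmxCacheTS = now + JMX_CACHE_TTL;  // temporarily advance the expiry
        boolean getAllMetrics = lastRecsCleared;  // not: lastRecs == null
        // ... getMetrics(builder, true); updateAttrCache(); updateInfoCache(); ...
        jmxCacheTS = now;
        lastRecs = null;  // in case the regular interval update is not running
        lastRecsCleared = true;
        return getAllMetrics;
    }
}

public class Demo {
    public static void main(String[] args) {
        JmxCacheSketch cache = new JmxCacheSketch();
        System.out.println(cache.updateJmxCache(20_000));  // true
        cache.publish(new Object());  // concurrent publisher repopulates lastRecs
        // With the old lastRecs == null test this refresh would be skipped;
        // the flag guarantees it still happens after the TTL:
        System.out.println(cache.updateJmxCache(40_000));  // true
    }
}
```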



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-11-13 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12348:

Fix Version/s: 2.7.3

> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch
>
>
> MetricsSystemImpl creates MetricsSourceAdapter with the wrong time unit 
> parameter. MetricsSourceAdapter expects jmxCacheTTL in milliseconds, but 
> MetricsSystemImpl passes a value in seconds to the MetricsSourceAdapter 
> constructor.
> {code}
> jmxCacheTS = Time.now() + jmxCacheTTL;
>   /**
>* Current system time.  Do not use this to calculate a duration or interval
>* to sleep, because it will be broken by settimeofday.  Instead, use
>* monotonicNow.
>* @return current time in msec.
>*/
>   public static long now() {
> return System.currentTimeMillis();
>   }
> {code}
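The mismatch is easy to see in isolation: a TTL configured as 10 (seconds) is treated by the adapter as 10 ms, so the cache expires roughly 1000x too often. A minimal sketch of the explicit conversion the fix needs (variable names here are illustrative, not the actual MetricsSystemImpl fields):

```java
import java.util.concurrent.TimeUnit;

public class TtlUnits {
    public static void main(String[] args) {
        long periodSeconds = 10;  // TTL as configured, in seconds

        // Buggy: passing the raw value where milliseconds are expected
        // makes "10 seconds" behave as 10 ms.
        long wrongTtlMillis = periodSeconds;

        // Fixed: convert explicitly before handing the value to the
        // MetricsSourceAdapter constructor.
        long jmxCacheTtlMillis = TimeUnit.SECONDS.toMillis(periodSeconds);

        System.out.println(wrongTtlMillis);     // 10
        System.out.println(jmxCacheTtlMillis);  // 10000
    }
}
```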



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12482) Race condition in JMX cache update

2015-11-13 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12482:

Fix Version/s: 2.7.3

> Race condition in JMX cache update
> --
>
> Key: HADOOP-12482
> URL: https://issues.apache.org/jira/browse/HADOOP-12482
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
> Fix For: 2.8.0, 3.0.0, 2.7.3
>
> Attachments: HADOOP-12482.001.patch, HADOOP-12482.002.patch, 
> HADOOP-12482.003.patch, HADOOP-12482.004.patch, HADOOP-12482.005.patch, 
> HADOOP-12482.006.patch
>
>
> updateJmxCache() was updated in HADOOP-11301. However, the patch introduced a 
> race condition in the updateJmxCache() function in MetricsSourceAdapter.java:
> {code:java}
>   private void updateJmxCache() {
> boolean getAllMetrics = false;
> synchronized (this) {
>   if (Time.now() - jmxCacheTS >= jmxCacheTTL) {
> // temporarily advance the expiry while updating the cache
> jmxCacheTS = Time.now() + jmxCacheTTL;
> if (lastRecs == null) {
>   getAllMetrics = true;
> }
>   } else {
> return;
>   }
>   if (getAllMetrics) {
> MetricsCollectorImpl builder = new MetricsCollectorImpl();
> getMetrics(builder, true);
>   }
>   updateAttrCache();
>   if (getAllMetrics) {
> updateInfoCache();
>   }
>   jmxCacheTS = Time.now();
>   lastRecs = null; // in case regular interval update is not running
> }
>   }
> {code}
> Notice that getAllMetrics is set to true when:
> # jmxCacheTTL has passed
> # lastRecs == null
> lastRecs is set to null in the same function, but gets reassigned by 
> getMetrics().
> However getMetrics() can be called from a different thread:
> # MetricsSystemImpl.onTimerEvent()
> # MetricsSystemImpl.publishMetricsNow()
> Consider the following sequence:
> # updateJmxCache() is called by getMBeanInfo() from a thread getting cached 
> info. 
> ** lastRecs is set to null.
> # the metrics source is updated with a new value/field.
> # getMetrics() is called by publishMetricsNow() or onTimerEvent() from a 
> different thread getting the latest metrics. 
> ** lastRecs is updated (!= null).
> # jmxCacheTTL passed.
> # updateJmxCache() is called again via getMBeanInfo().
> ** However, because lastRecs has already been updated (!= null), getAllMetrics 
> will not be set to true, so updateInfoCache() is not called and getMBeanInfo() 
> returns the old cached info.
> We ran into this issue on a cluster where a new metric did not get published 
> until much later.
> The case can be made worse by a periodic call to getMetrics() (driven by an 
> external program or script). In such a case, getMBeanInfo() may never be able 
> to retrieve the new record.
> The desired behavior is that updateJmxCache() is guaranteed to call 
> updateInfoCache() once after jmxCacheTTL if lastRecs was set to null by 
> updateJmxCache() itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-11-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005177#comment-15005177
 ] 

Tsuyoshi Ozawa commented on HADOOP-12348:
-

Cherry-picked this to branch-2.7.

> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch
>
>
> MetricsSystemImpl creates MetricsSourceAdapter with the wrong time unit 
> parameter. MetricsSourceAdapter expects jmxCacheTTL in milliseconds, but 
> MetricsSystemImpl passes a value in seconds to the MetricsSourceAdapter 
> constructor.
> {code}
> jmxCacheTS = Time.now() + jmxCacheTTL;
>   /**
>* Current system time.  Do not use this to calculate a duration or interval
>* to sleep, because it will be broken by settimeofday.  Instead, use
>* monotonicNow.
>* @return current time in msec.
>*/
>   public static long now() {
> return System.currentTimeMillis();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2015-11-13 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-11361:

Fix Version/s: 2.7.3

> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12564) Upgrade JUnit3 TestCase to JUnit 4 for tests of org.apache.hadoop.io package

2015-11-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005232#comment-15005232
 ] 

Tsuyoshi Ozawa commented on HADOOP-12564:
-

[~cote] Thank you for updating. We're almost there. In addition to Akira's 
comment, please check the following comments:

{code:title=TestSequenceFile.java}
   @Test
  public void testCreateWriterOnExistingFile() throws IOException {
{code}
Please fix the indentation (please remove the single whitespace before the @Test annotation).

{code:title=TestVLong.java}
  @Test
  public void testVLong6Bytes() throws IOException {
verifySixOrMoreBytes(6);
  }
  @Test
  public void testVLong7Bytes() throws IOException {
verifySixOrMoreBytes(7);
  }
  @Test
  public void testVLong8Bytes() throws IOException {
verifySixOrMoreBytes(8);
  }
  @Test
  public void testVLongRandom() throws IOException {
{code}
Please add a line break between test cases to keep the coding style consistent.

{code:title=TestTFileSplit.java}
import org.junit.After;
import org.junit.Before;
{code}

{code:title=TestTFileSeqFileComparison.java}
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.assertFalse;
{code}

Please remove unused imports.



> Upgrade JUnit3 TestCase to JUnit 4 for tests of org.apache.hadoop.io package
> 
>
> Key: HADOOP-12564
> URL: https://issues.apache.org/jira/browse/HADOOP-12564
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Trivial
> Attachments: MAPREDUCE-6505-1.patch, MAPREDUCE-6505-2.patch, 
> MAPREDUCE-6505-3.patch, MAPREDUCE-6505-4.patch, MAPREDUCE-6505-5.patch
>
>
> Migrating just the io test cases 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12482) Race condition in JMX cache update

2015-11-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005179#comment-15005179
 ] 

Tsuyoshi Ozawa commented on HADOOP-12482:
-

Cherry-picked this to branch-2.7 with HADOOP-12348 and HADOOP-11361.

> Race condition in JMX cache update
> --
>
> Key: HADOOP-12482
> URL: https://issues.apache.org/jira/browse/HADOOP-12482
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
> Fix For: 2.8.0, 3.0.0, 2.7.3
>
> Attachments: HADOOP-12482.001.patch, HADOOP-12482.002.patch, 
> HADOOP-12482.003.patch, HADOOP-12482.004.patch, HADOOP-12482.005.patch, 
> HADOOP-12482.006.patch
>
>
> updateJmxCache() was updated in HADOOP-11301. However, the patch introduced a 
> race condition in the updateJmxCache() function in MetricsSourceAdapter.java:
> {code:java}
>   private void updateJmxCache() {
> boolean getAllMetrics = false;
> synchronized (this) {
>   if (Time.now() - jmxCacheTS >= jmxCacheTTL) {
> // temporarily advance the expiry while updating the cache
> jmxCacheTS = Time.now() + jmxCacheTTL;
> if (lastRecs == null) {
>   getAllMetrics = true;
> }
>   } else {
> return;
>   }
>   if (getAllMetrics) {
> MetricsCollectorImpl builder = new MetricsCollectorImpl();
> getMetrics(builder, true);
>   }
>   updateAttrCache();
>   if (getAllMetrics) {
> updateInfoCache();
>   }
>   jmxCacheTS = Time.now();
>   lastRecs = null; // in case regular interval update is not running
> }
>   }
> {code}
> Notice that getAllMetrics is set to true when:
> # jmxCacheTTL has passed
> # lastRecs == null
> lastRecs is set to null in the same function, but gets reassigned by 
> getMetrics().
> However getMetrics() can be called from a different thread:
> # MetricsSystemImpl.onTimerEvent()
> # MetricsSystemImpl.publishMetricsNow()
> Consider the following sequence:
> # updateJmxCache() is called by getMBeanInfo() from a thread getting cached 
> info. 
> ** lastRecs is set to null.
> # the metrics source is updated with a new value/field.
> # getMetrics() is called by publishMetricsNow() or onTimerEvent() from a 
> different thread getting the latest metrics. 
> ** lastRecs is updated (!= null).
> # jmxCacheTTL passed.
> # updateJmxCache() is called again via getMBeanInfo().
> ** However, because lastRecs has already been updated (!= null), getAllMetrics 
> will not be set to true, so updateInfoCache() is not called and getMBeanInfo() 
> returns the old cached info.
> We ran into this issue on a cluster where a new metric did not get published 
> until much later.
> The case can be made worse by a periodic call to getMetrics() (driven by an 
> external program or script). In such a case, getMBeanInfo() may never be able 
> to retrieve the new record.
> The desired behavior is that updateJmxCache() is guaranteed to call 
> updateInfoCache() once after jmxCacheTTL if lastRecs was set to null by 
> updateJmxCache() itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2015-11-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005180#comment-15005180
 ] 

Tsuyoshi Ozawa commented on HADOOP-11361:
-

Cherry-picked this to branch-2.7 for HADOOP-12482.

> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12571) [JDK8] Remove XX:MaxPermSize setting from pom.xml

2015-11-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005183#comment-15005183
 ] 

Tsuyoshi Ozawa commented on HADOOP-12571:
-

Can we merge this change with HADOOP-11858?

> [JDK8] Remove XX:MaxPermSize setting from pom.xml
> -
>
> Key: HADOOP-12571
> URL: https://issues.apache.org/jira/browse/HADOOP-12571
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira AJISAKA
>Priority: Minor
>
> {code:title=hadoop-project/pom.xml}
> -Xmx2048m -XX:MaxPermSize=768m 
> -XX:+HeapDumpOnOutOfMemoryError
> {code}
> {{-XX:MaxPermSize}} is not supported in JDK8. It should be removed after 
> dropping support for JDK7.
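For reference, after JDK7 support is dropped, the argument line would simply lose that flag — a hypothetical resulting value (JDK8 ignores {{-XX:MaxPermSize}} with a warning, since PermGen was replaced by Metaspace):

```
-Xmx2048m -XX:+HeapDumpOnOutOfMemoryError
```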



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12564) Upgrade JUnit3 TestCase to JUnit 4 in io test cases

2015-11-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12564:

Summary: Upgrade JUnit3 TestCase to JUnit 4 in io test cases  (was: Migrate 
io test cases)

> Upgrade JUnit3 TestCase to JUnit 4 in io test cases
> ---
>
> Key: HADOOP-12564
> URL: https://issues.apache.org/jira/browse/HADOOP-12564
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Trivial
> Attachments: MAPREDUCE-6505-1.patch, MAPREDUCE-6505-2.patch, 
> MAPREDUCE-6505-3.patch, MAPREDUCE-6505-4.patch
>
>
> Migrating just the io test cases 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12564) Upgrade JUnit3 TestCase to JUnit 4 for tests of org.apache.hadoop.io package

2015-11-11 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15000445#comment-15000445
 ] 

Tsuyoshi Ozawa commented on HADOOP-12564:
-

Good catch, Junping. Moved this issue to hadoop-common.

> Upgrade JUnit3 TestCase to JUnit 4 for tests of org.apache.hadoop.io package
> 
>
> Key: HADOOP-12564
> URL: https://issues.apache.org/jira/browse/HADOOP-12564
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Trivial
> Attachments: MAPREDUCE-6505-1.patch, MAPREDUCE-6505-2.patch, 
> MAPREDUCE-6505-3.patch, MAPREDUCE-6505-4.patch
>
>
> Migrating just the io test cases 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12564) Upgrade JUnit3 TestCase to JUnit 4 for tests of org.apache.hadoop.io package

2015-11-11 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15000461#comment-15000461
 ] 

Tsuyoshi Ozawa commented on HADOOP-12564:
-

[~cote] Thank you for updating. The patch itself looks good to me overall. I 
grepped under test/java/org/apache/hadoop/io - there still remain some usages of 
JUnit 3:

1. ./AvroTestUtil.java:import static junit.framework.TestCase.assertEquals;
2. ./compress/TestCodecFactory.java:public class TestCodecFactory extends 
TestCase {
3. ./compress/TestCompressionStreamReuse.java:public class 
TestCompressionStreamReuse extends TestCase {
4. ./file/tfile/TestVLong.java:import junit.framework.TestCase;

Could you update it again? Thanks!

> Upgrade JUnit3 TestCase to JUnit 4 for tests of org.apache.hadoop.io package
> 
>
> Key: HADOOP-12564
> URL: https://issues.apache.org/jira/browse/HADOOP-12564
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Trivial
> Attachments: MAPREDUCE-6505-1.patch, MAPREDUCE-6505-2.patch, 
> MAPREDUCE-6505-3.patch, MAPREDUCE-6505-4.patch
>
>
> Migrating just the io test cases 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12564) Upgrade JUnit3 TestCase to JUnit 4 for tests of org.apache.hadoop.io package

2015-11-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12564:

Summary: Upgrade JUnit3 TestCase to JUnit 4 for tests of 
org.apache.hadoop.io package  (was: Upgrade JUnit3 TestCase to JUnit 4 in io 
test cases)

> Upgrade JUnit3 TestCase to JUnit 4 for tests of org.apache.hadoop.io package
> 
>
> Key: HADOOP-12564
> URL: https://issues.apache.org/jira/browse/HADOOP-12564
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Trivial
> Attachments: MAPREDUCE-6505-1.patch, MAPREDUCE-6505-2.patch, 
> MAPREDUCE-6505-3.patch, MAPREDUCE-6505-4.patch
>
>
> Migrating just the io test cases 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HADOOP-12564) Migrate io test cases

2015-11-11 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa moved MAPREDUCE-6505 to HADOOP-12564:


Component/s: (was: test)
 test
 Issue Type: Test  (was: Bug)
Key: HADOOP-12564  (was: MAPREDUCE-6505)
Project: Hadoop Common  (was: Hadoop Map/Reduce)

> Migrate io test cases
> -
>
> Key: HADOOP-12564
> URL: https://issues.apache.org/jira/browse/HADOOP-12564
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Trivial
> Attachments: MAPREDUCE-6505-1.patch, MAPREDUCE-6505-2.patch, 
> MAPREDUCE-6505-3.patch, MAPREDUCE-6505-4.patch
>
>
> Migrating just the io test cases 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12526) [Branch-2] there are duplicate dependency definitions in pom's

2015-11-11 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15000430#comment-15000430
 ] 

Tsuyoshi Ozawa commented on HADOOP-12526:
-

[~sjlee0] Moved the CHANGES.txt entries to 2.6.3, based on the rule for updating 
CHANGES.txt described on the Hadoop wiki, since they can cause conflicts when 
cherry-picking.

> [Branch-2] there are duplicate dependency definitions in pom's
> --
>
> Key: HADOOP-12526
> URL: https://issues.apache.org/jira/browse/HADOOP-12526
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.8.0, 2.7.1, 2.6.2
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Fix For: 2.8.0, 2.6.3, 2.7.3
>
> Attachments: HADOOP-12526-branch-2.001.patch, 
> HADOOP-12526-branch-2.6.001.patch
>
>
> There are several places where dependencies are defined multiple times within 
> pom's, and are causing maven build warnings. They should be fixed. This is 
> specific to branch-2.6.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12482) Race condition in JMX cache update

2015-11-09 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14997888#comment-14997888
 ] 

Tsuyoshi Ozawa commented on HADOOP-12482:
-

[~twu] thank you for updating. The patch is almost there:

* SourceUpdater still has an e.printStackTrace()
{quote}
+e.printStackTrace();
{quote}
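A hedged sketch of what the review point amounts to (java.util.logging stands in here for Hadoop's commons-logging LOG, and all names are illustrative): in a long-running updater thread, e.printStackTrace() drops the trace onto stderr, while logging it keeps the failure, with context, in the daemon's log.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class SourceUpdaterSketch implements Runnable {
    private static final Logger LOG =
        Logger.getLogger(SourceUpdaterSketch.class.getName());

    Exception lastError;  // kept only so the demo can inspect the outcome

    @Override
    public void run() {
        try {
            updateSource();  // placeholder for the real metrics-source update
        } catch (Exception e) {
            // Instead of e.printStackTrace(): log with context so the failure
            // lands in the daemon's log rather than vanishing on stderr.
            LOG.log(Level.WARNING, "Error updating the JMX cache", e);
            lastError = e;
        }
    }

    private void updateSource() throws Exception {
        throw new Exception("simulated update failure");
    }

    public static void main(String[] args) {
        SourceUpdaterSketch updater = new SourceUpdaterSketch();
        updater.run();  // logs the warning and keeps the thread alive
        System.out.println(updater.lastError != null);  // true
    }
}
```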

> Race condition in JMX cache update
> --
>
> Key: HADOOP-12482
> URL: https://issues.apache.org/jira/browse/HADOOP-12482
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
> Attachments: HADOOP-12482.001.patch, HADOOP-12482.002.patch, 
> HADOOP-12482.003.patch, HADOOP-12482.004.patch, HADOOP-12482.005.patch
>
>
> updateJmxCache() was updated in HADOOP-11301. However, the patch introduced a 
> race condition in the updateJmxCache() function in MetricsSourceAdapter.java:
> {code:java}
>   private void updateJmxCache() {
> boolean getAllMetrics = false;
> synchronized (this) {
>   if (Time.now() - jmxCacheTS >= jmxCacheTTL) {
> // temporarily advance the expiry while updating the cache
> jmxCacheTS = Time.now() + jmxCacheTTL;
> if (lastRecs == null) {
>   getAllMetrics = true;
> }
>   } else {
> return;
>   }
>   if (getAllMetrics) {
> MetricsCollectorImpl builder = new MetricsCollectorImpl();
> getMetrics(builder, true);
>   }
>   updateAttrCache();
>   if (getAllMetrics) {
> updateInfoCache();
>   }
>   jmxCacheTS = Time.now();
>   lastRecs = null; // in case regular interval update is not running
> }
>   }
> {code}
> Notice that getAllMetrics is set to true when:
> # jmxCacheTTL has passed
> # lastRecs == null
> lastRecs is set to null in the same function, but gets reassigned by 
> getMetrics().
> However getMetrics() can be called from a different thread:
> # MetricsSystemImpl.onTimerEvent()
> # MetricsSystemImpl.publishMetricsNow()
> Consider the following sequence:
> # updateJmxCache() is called by getMBeanInfo() from a thread getting cached 
> info. 
> ** lastRecs is set to null.
> # the metrics source is updated with a new value/field.
> # getMetrics() is called by publishMetricsNow() or onTimerEvent() from a 
> different thread getting the latest metrics. 
> ** lastRecs is updated (!= null).
> # jmxCacheTTL passed.
> # updateJmxCache() is called again via getMBeanInfo().
> ** However, because lastRecs has already been updated (!= null), getAllMetrics 
> will not be set to true, so updateInfoCache() is not called and getMBeanInfo() 
> returns the old cached info.
> We ran into this issue on a cluster where a new metric did not get published 
> until much later.
> The case can be made worse by a periodic call to getMetrics() (driven by an 
> external program or script). In such a case, getMBeanInfo() may never be able 
> to retrieve the new record.
> The desired behavior is that updateJmxCache() is guaranteed to call 
> updateInfoCache() once after jmxCacheTTL if lastRecs was set to null by 
> updateJmxCache() itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12482) Race condition in JMX cache update

2015-11-01 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14984424#comment-14984424
 ] 

Tsuyoshi Ozawa commented on HADOOP-12482:
-

Good catch. updateThread should also be UpdateThread (using CamelCase). 

> Race condition in JMX cache update
> --
>
> Key: HADOOP-12482
> URL: https://issues.apache.org/jira/browse/HADOOP-12482
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
> Attachments: HADOOP-12482.001.patch, HADOOP-12482.002.patch, 
> HADOOP-12482.003.patch
>
>
> updateJmxCache() was updated in HADOOP-11301. However, the patch introduced a 
> race condition in the updateJmxCache() function in MetricsSourceAdapter.java:
> {code:java}
>   private void updateJmxCache() {
> boolean getAllMetrics = false;
> synchronized (this) {
>   if (Time.now() - jmxCacheTS >= jmxCacheTTL) {
> // temporarily advance the expiry while updating the cache
> jmxCacheTS = Time.now() + jmxCacheTTL;
> if (lastRecs == null) {
>   getAllMetrics = true;
> }
>   } else {
> return;
>   }
>   if (getAllMetrics) {
> MetricsCollectorImpl builder = new MetricsCollectorImpl();
> getMetrics(builder, true);
>   }
>   updateAttrCache();
>   if (getAllMetrics) {
> updateInfoCache();
>   }
>   jmxCacheTS = Time.now();
>   lastRecs = null; // in case regular interval update is not running
> }
>   }
> {code}
> Notice that getAllMetrics is set to true when:
> # jmxCacheTTL has passed
> # lastRecs == null
> lastRecs is set to null in the same function, but gets reassigned by 
> getMetrics().
> However getMetrics() can be called from a different thread:
> # MetricsSystemImpl.onTimerEvent()
> # MetricsSystemImpl.publishMetricsNow()
> Consider the following sequence:
> # updateJmxCache() is called by getMBeanInfo() from a thread getting cached 
> info. 
> ** lastRecs is set to null.
> # the metrics source is updated with a new value/field.
> # getMetrics() is called by publishMetricsNow() or onTimerEvent() from a 
> different thread getting the latest metrics. 
> ** lastRecs is updated (!= null).
> # jmxCacheTTL passed.
> # updateJmxCache() is called again via getMBeanInfo().
> ** However, because lastRecs has already been updated (!= null), getAllMetrics 
> will not be set to true, so updateInfoCache() is not called and getMBeanInfo() 
> returns the old cached info.
> We ran into this issue on a cluster where a new metric did not get published 
> until much later.
> The case can be made worse by a periodic call to getMetrics() (driven by an 
> external program or script). In such a case, getMBeanInfo() may never be able 
> to retrieve the new record.
> The desired behavior is that updateJmxCache() is guaranteed to call 
> updateInfoCache() once after jmxCacheTTL if lastRecs was set to null by 
> updateJmxCache() itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12482) Race condition in JMX cache update

2015-10-30 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14982127#comment-14982127
 ] 

Tsuyoshi Ozawa commented on HADOOP-12482:
-

[~twu] Thank you for taking this issue. LGTM overall. Minor nit: could you 
rename lasRecsCleared to *lastRecsCleared*? (The 't' is missing.)

{code}
+  private boolean lasRecsCleared;
{code}

> Race condition in JMX cache update
> --
>
> Key: HADOOP-12482
> URL: https://issues.apache.org/jira/browse/HADOOP-12482
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
> Attachments: HADOOP-12482.001.patch, HADOOP-12482.002.patch
>
>
> updateJmxCache() was updated in HADOOP-11301. However, the patch introduced a 
> race condition in the updateJmxCache() function in MetricsSourceAdapter.java:
> {code:java}
>   private void updateJmxCache() {
> boolean getAllMetrics = false;
> synchronized (this) {
>   if (Time.now() - jmxCacheTS >= jmxCacheTTL) {
> // temporarily advance the expiry while updating the cache
> jmxCacheTS = Time.now() + jmxCacheTTL;
> if (lastRecs == null) {
>   getAllMetrics = true;
> }
>   } else {
> return;
>   }
>   if (getAllMetrics) {
> MetricsCollectorImpl builder = new MetricsCollectorImpl();
> getMetrics(builder, true);
>   }
>   updateAttrCache();
>   if (getAllMetrics) {
> updateInfoCache();
>   }
>   jmxCacheTS = Time.now();
>   lastRecs = null; // in case regular interval update is not running
> }
>   }
> {code}
> Notice that getAllMetrics is set to true when:
> # jmxCacheTTL has passed
> # lastRecs == null
> lastRecs is set to null in the same function, but gets reassigned by 
> getMetrics().
> However getMetrics() can be called from a different thread:
> # MetricsSystemImpl.onTimerEvent()
> # MetricsSystemImpl.publishMetricsNow()
> Consider the following sequence:
> # updateJmxCache() is called by getMBeanInfo() from a thread getting cached 
> info. 
> ** lastRecs is set to null.
> # the metrics source is updated with a new value/field.
> # getMetrics() is called by publishMetricsNow() or onTimerEvent() from a 
> different thread getting the latest metrics. 
> ** lastRecs is updated (!= null).
> # jmxCacheTTL passed.
> # updateJmxCache() is called again via getMBeanInfo().
> ** However, because lastRecs has already been updated (!= null), getAllMetrics 
> will not be set to true, so updateInfoCache() is not called and getMBeanInfo() 
> returns the old cached info.
> We ran into this issue on a cluster where a new metric did not get published 
> until much later.
> The case can be made worse by a periodic call to getMetrics() (driven by an 
> external program or script). In such a case, getMBeanInfo() may never be able 
> to retrieve the new record.
> The desired behavior is that updateJmxCache() is guaranteed to call 
> updateInfoCache() once after jmxCacheTTL if lastRecs was set to null by 
> updateJmxCache() itself.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2015-10-29 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14980024#comment-14980024
 ] 

Tsuyoshi Ozawa commented on HADOOP-9613:


Steve, in the context of JDK8 support, we can still use Jersey 1.19, which 
supports JDK 8 while remaining compatible with the client-side API. In the 
patch, the deprecated method ClientResponse.getClientResponseStatus is replaced 
with response.getStatusInfo().getStatusCode() to remove usages of deprecated 
methods, but some calls to getClientResponseStatus still remain. 
https://jersey.java.net/nonav/apidocs/1.19/jersey/com/sun/jersey/api/client/ClientResponse.html

Regarding the incompatibility of the return value when a JSON object is null, we 
need a workaround to keep compatibility.
{quote}
- assertEquals("jobs is not null", JSONObject.NULL, json.get("jobs"));
\+ assertEquals("jobs is not empty",
\+ new JSONObject().toString(), json.get("jobs").toString());
{quote}


> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8 which is quite old. 





[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2015-10-28 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14978608#comment-14978608
 ] 

Tsuyoshi Ozawa commented on HADOOP-9613:


[~aw] do you have any insight from the following error message? It looks like 
the build is failing to resolve the hadoop-common/hadoop-auth packages. 

{quote}
\[ERROR\] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java:\[50,56\]
 package org.apache.hadoop.security.authentication.client does not exist
\[ERROR\] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java:\[51,56\]
 package org.apache.hadoop.security.authentication.client does not exist
\[ERROR\] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java:\[108,11\]
 cannot find symbol
  symbol:   class ConnectionConfigurator
  location: class org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl
\[ERROR\] 
/testptch/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/api/impl/TimelineClientImpl.java:\[506,18\]
 cannot find symbol
{quote}

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.1.patch, HADOOP-9613.2.patch, 
> HADOOP-9613.3.patch, HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8 which is quite old. 





[jira] [Updated] (HADOOP-12427) [JDK8] Upgrade Mockito version to 1.10.19

2015-10-28 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12427:

Description: 
The current version is 1.8.5 - inserted in 2011.

JDK 8 has been supported since 1.10.0. 
https://github.com/mockito/mockito/blob/master/doc/release-notes/official.md


  was:
The current version is 1.8.5 - inserted in 2011.



> [JDK8] Upgrade Mockito version to 1.10.19
> -
>
> Key: HADOOP-12427
> URL: https://issues.apache.org/jira/browse/HADOOP-12427
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Minor
> Attachments: HADOOP-12427.v0.patch
>
>
> The current version is 1.8.5 - inserted in 2011.
> JDK 8 has been supported since 1.10.0. 
> https://github.com/mockito/mockito/blob/master/doc/release-notes/official.md





[jira] [Updated] (HADOOP-12427) [JDK8] Upgrade Mockito version to 1.10.19

2015-10-28 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12427:

Summary: [JDK8] Upgrade Mockito version to 1.10.19  (was: Upgrade Mockito 
version to 1.10.19)

> [JDK8] Upgrade Mockito version to 1.10.19
> -
>
> Key: HADOOP-12427
> URL: https://issues.apache.org/jira/browse/HADOOP-12427
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Minor
> Attachments: HADOOP-12427.v0.patch
>
>
> The current version is 1.8.5 - inserted in 2011.





[jira] [Updated] (HADOOP-12427) [JDK8] Upgrade Mockito version to 1.10.19

2015-10-28 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12427:

Description: 
The current version is 1.8.5 - inserted in 2011.

JDK 8 has been supported since 1.10.0. 
https://github.com/mockito/mockito/blob/master/doc/release-notes/official.md

"Compatible with JDK8 with exception of defender methods, JDK8 support will 
improve in 2.0"
http://mockito.org/


  was:
The current version is 1.8.5 - inserted in 2011.

JDK 8 has been supported since 1.10.0. 
https://github.com/mockito/mockito/blob/master/doc/release-notes/official.md



> [JDK8] Upgrade Mockito version to 1.10.19
> -
>
> Key: HADOOP-12427
> URL: https://issues.apache.org/jira/browse/HADOOP-12427
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Minor
> Attachments: HADOOP-12427.v0.patch
>
>
> The current version is 1.8.5 - inserted in 2011.
> JDK 8 has been supported since 1.10.0. 
> https://github.com/mockito/mockito/blob/master/doc/release-notes/official.md
> "Compatible with JDK8 with exception of defender methods, JDK8 support will 
> improve in 2.0"
> http://mockito.org/





[jira] [Commented] (HADOOP-12427) [JDK8] Upgrade Mockito version to 1.10.19

2015-10-28 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14977894#comment-14977894
 ] 

Tsuyoshi Ozawa commented on HADOOP-12427:
-

[~giovanni.fumarola] I got the same error even when I upgraded the Mockito 
version to 1.9.0. One possibility is that some incompatible change happened in 1.9.0.

https://code.google.com/p/mockito/issues/list?can=1=label%3AMilestone-Release1.9=ID+Type+Status+Priority+Milestone+Owner+Summary=tiles



> [JDK8] Upgrade Mockito version to 1.10.19
> -
>
> Key: HADOOP-12427
> URL: https://issues.apache.org/jira/browse/HADOOP-12427
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Giovanni Matteo Fumarola
>Assignee: Giovanni Matteo Fumarola
>Priority: Minor
> Attachments: HADOOP-12427.v0.patch
>
>
> The current version is 1.8.5 - inserted in 2011.
> JDK 8 has been supported since 1.10.0. 
> https://github.com/mockito/mockito/blob/master/doc/release-notes/official.md
> "Compatible with JDK8 with exception of defender methods, JDK8 support will 
> improve in 2.0"
> http://mockito.org/





[jira] [Updated] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2015-10-28 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-9613:
---
Attachment: HADOOP-9613.007.incompatible.patch

Thanks a lot, your advice helped me a lot. Let me test with the change against 
the pom.xml of yarn-common, though this change should be addressed in another JIRA.

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.007.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8 which is quite old. 





[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14973815#comment-14973815
 ] 

Tsuyoshi Ozawa commented on HADOOP-9613:


Steve, thank you for taking a look.

{quote}
Why so many duplicate guice servlet context classes?
{quote}

It's for sharing a com.google.inject.Injector object among the test classes. 
The test cases which extend JerseyTest are initialized in its constructor, 
which accepts a guice servlet context class.
{code}
  public TestAMWebServices() {
    super(new WebAppDescriptor.Builder(
        "org.apache.hadoop.mapreduce.v2.app.webapp")
        .contextListenerClass(GuiceServletConfig.class) // <-- servlet context class
        .filterClass(com.google.inject.servlet.GuiceFilter.class)
        .contextPath("jersey-guice-filter").servletPath("/").build());
  }
{code}

The guice servlet context classes are used to initialize JerseyTest and servlet 
containers with guice's DI: 
{code}
private Injector injector = Guice.createInjector(new ServletModule() {
  @Override
  protected void configureServlets() {
    appContext = new MockAppContext(0, 1, 1, 1);
    appContext.setBlacklistedNodes(Sets.newHashSet("badnode1", "badnode2"));
    bind(JAXBContextResolver.class);
    bind(AMWebServices.class);
    bind(GenericExceptionHandler.class);

    serve("/*").with(GuiceContainer.class);
  }
});

public class GuiceServletConfig extends GuiceServletContextListener {
  @Override
  protected Injector getInjector() {
    return injector;
  }
}
{code}

The latest patch fixes the test failures caused by the change in the initialization 
sequence after upgrading jersey-test-framework-grizzly2 to 1.13 or later. This happens 
because grizzly2 started to use reflection in 2.2.16, and its logic for creating the 
ServletModule in WebappContext doesn't handle the [formal 
parameter|https://issues.apache.org/jira/browse/HADOOP-9613?focusedCommentId=14573457=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14573457]
 of an inner class. 

I thought it would be better to make these inner classes static, but that caused test 
failures because some tests change the state of the servlets, and that state persists 
after the tests finish. That's why I changed GuiceServletConfig to a normal (non-inner) 
class whose module can be re-initialized for each test case, as follows.
{code}
  static {
    GuiceServletConfig.injector = Guice.createInjector(new WebServletModule());
  }

  @Before
  @Override
  public void setUp() throws Exception {
    super.setUp();
    GuiceServletConfig.injector = Guice.createInjector(new WebServletModule());
  }
{code}
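The state-leak problem described above can be reproduced without Guice or Jersey at all: a statically initialized singleton carries mutations from one test into the next, and re-creating it in a per-test setUp restores isolation. A minimal stand-alone sketch (SharedConfig and its field are hypothetical stand-ins for GuiceServletConfig and its injector, not code from the patch):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical stand-in for GuiceServletConfig: a holder of shared, mutable state,
// playing the role of the statically initialized injector discussed above.
class SharedConfig {
    static Set<String> blacklisted = new HashSet<>();

    // What re-creating the injector in each setUp() accomplishes: a clean slate.
    static void reset() { blacklisted = new HashSet<>(); }
}

public class StaticStateDemo {
    public static void main(String[] args) {
        // "Test A" mutates the shared state, as some web-service tests do.
        SharedConfig.blacklisted.add("badnode1");

        // Without a reset, "test B" would observe test A's leftovers.
        boolean leaked = SharedConfig.blacklisted.contains("badnode1");

        // With a per-test reset, test B starts from a clean state.
        SharedConfig.reset();
        boolean clean = SharedConfig.blacklisted.isEmpty();

        System.out.println(leaked && clean); // prints "true"
    }
}
```

This is why re-assigning the injector in setUp(), rather than relying on one-time static initialization, keeps the test cases independent of each other's execution order.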

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8 which is quite old. 





[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975464#comment-14975464
 ] 

Tsuyoshi Ozawa commented on HADOOP-9613:


You're right. 

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.1.patch, HADOOP-9613.2.patch, 
> HADOOP-9613.3.patch, HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8 which is quite old. 





[jira] [Updated] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-9613:
---
Attachment: HADOOP-9613.006.incompatible.patch

Attaching a patch that fixes the TestTimelineClient failure.

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.1.patch, HADOOP-9613.2.patch, 
> HADOOP-9613.3.patch, HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8 which is quite old. 





[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975465#comment-14975465
 ] 

Tsuyoshi Ozawa commented on HADOOP-9613:


Thank you for the comment. Hmm, let me try again. 

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.1.patch, HADOOP-9613.2.patch, 
> HADOOP-9613.3.patch, HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8 which is quite old. 





[jira] [Commented] (HADOOP-12457) [JDK8] Fix compilation of common by javadoc

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975491#comment-14975491
 ] 

Tsuyoshi Ozawa commented on HADOOP-12457:
-

This looks like a bug reported by [~gtCarrera9] on HADOOP-11776. 
https://issues.apache.org/jira/browse/HADOOP-11776?focusedCommentId=14391868=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14391868

I will create a new issue to address the problem.
The patch itself works well. +1, checking this in.

> [JDK8] Fix compilation of common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
> Attachments: HADOOP-12457.00.patch, HADOOP-12457.01.patch, 
> HADOOP-12457.02.patch
>
>
> Delete.java and Server.java cannot be compiled due to "unmappable character 
> for encoding ASCII" errors. 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}





[jira] [Updated] (HADOOP-12457) [JDK8] Fix a failure of compiling common by javadoc

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12457:

Summary: [JDK8] Fix a failure of compiling common by javadoc  (was: [JDK8] 
Fix compilation of common by javadoc)

> [JDK8] Fix a failure of compiling common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
> Attachments: HADOOP-12457.00.patch, HADOOP-12457.01.patch, 
> HADOOP-12457.02.patch
>
>
> Delete.java and Server.java cannot be compiled due to "unmappable character 
> for encoding ASCII" errors. 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}





[jira] [Created] (HADOOP-12516) jdiff fails with error 'duplicate comment id' about MetricsSystem.register_changed

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-12516:
---

 Summary: jdiff fails with error 'duplicate comment id' about 
MetricsSystem.register_changed
 Key: HADOOP-12516
 URL: https://issues.apache.org/jira/browse/HADOOP-12516
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa


"mvn package -Pdist,docs -DskipTests" fails with the following error. It looks 
like a jdiff problem, as Li Lu mentioned on HADOOP-11776.

{quote}
  [javadoc] ExcludePrivateAnnotationsJDiffDoclet
  [javadoc] JDiff: doclet started ...
  [javadoc] JDiff: reading the old API in from file 
'/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/dev-support/jdiff/Apache_Hadoop_Common_2.6.0.xml'...Warning:
 API identifier in the XML file (hadoop-core 2.6.0) differs from the name of 
the file 'Apache_Hadoop_Common_2.6.0.xml'

  ...

  [javadoc] JDiff: reading the new API in from file 
'/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/target/site/jdiff/xml/Apache_Hadoop_Common_2.8.0-SNAPSHOT.xml'...Warning:
 incorrectly formatted @link in text: Options to be used by the {@link Find} 
command and its {@link Expression}s.

  

  [javadoc] Error: duplicate comment id: 
org.apache.hadoop.metrics2.MetricsSystem.register_changed(java.lang.String, 
java.lang.String, T)
{quote}

A link to the comment by Li lu is [here| 
https://issues.apache.org/jira/browse/HADOOP-11776?focusedCommentId=14391868=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14391868]






[jira] [Updated] (HADOOP-12516) jdiff fails with error 'duplicate comment id' about MetricsSystem.register_changed

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12516:

Description: 
"mvn package -Pdist,docs -DskipTests" fails with the following error. It looks 
like a jdiff problem, as Li Lu mentioned on HADOOP-11776.

{quote}
  [javadoc] ExcludePrivateAnnotationsJDiffDoclet
  [javadoc] JDiff: doclet started ...
  [javadoc] JDiff: reading the old API in from file 
'/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/dev-support/jdiff/Apache_Hadoop_Common_2.6.0.xml'...Warning:
 API identifier in the XML file (hadoop-core 2.6.0) differs from the name of 
the file 'Apache_Hadoop_Common_2.6.0.xml'

  ...

  [javadoc] JDiff: reading the new API in from file 
'/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/target/site/jdiff/xml/Apache_Hadoop_Common_2.8.0-SNAPSHOT.xml'...Warning:
 incorrectly formatted @link in text: Options to be used by the \{@link Find\} 
command and its \{@link Expression\}s.

  

  [javadoc] Error: duplicate comment id: 
org.apache.hadoop.metrics2.MetricsSystem.register_changed(java.lang.String, 
java.lang.String, T)
{quote}

A link to the comment by Li lu is [here| 
https://issues.apache.org/jira/browse/HADOOP-11776?focusedCommentId=14391868=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14391868].


  was:
"mvn package -Pdist,docs -DskipTests" fails with the following error. It looks 
like a jdiff problem, as Li Lu mentioned on HADOOP-11776.

{quote}
  [javadoc] ExcludePrivateAnnotationsJDiffDoclet
  [javadoc] JDiff: doclet started ...
  [javadoc] JDiff: reading the old API in from file 
'/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/dev-support/jdiff/Apache_Hadoop_Common_2.6.0.xml'...Warning:
 API identifier in the XML file (hadoop-core 2.6.0) differs from the name of 
the file 'Apache_Hadoop_Common_2.6.0.xml'

  ...

  [javadoc] JDiff: reading the new API in from file 
'/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/target/site/jdiff/xml/Apache_Hadoop_Common_2.8.0-SNAPSHOT.xml'...Warning:
 incorrectly formatted @link in text: Options to be used by the {@link Find} 
command and its {@link Expression}s.

  

  [javadoc] Error: duplicate comment id: 
org.apache.hadoop.metrics2.MetricsSystem.register_changed(java.lang.String, 
java.lang.String, T)
{quote}

A link to the comment by Li lu is [here| 
https://issues.apache.org/jira/browse/HADOOP-11776?focusedCommentId=14391868=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14391868]



> jdiff fails with error 'duplicate comment id' about 
> MetricsSystem.register_changed
> --
>
> Key: HADOOP-12516
> URL: https://issues.apache.org/jira/browse/HADOOP-12516
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>
> "mvn package -Pdist,docs -DskipTests" fails with the following error. It looks 
> like a jdiff problem, as Li Lu mentioned on HADOOP-11776.
> {quote}
>   [javadoc] ExcludePrivateAnnotationsJDiffDoclet
>   [javadoc] JDiff: doclet started ...
>   [javadoc] JDiff: reading the old API in from file 
> '/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/dev-support/jdiff/Apache_Hadoop_Common_2.6.0.xml'...Warning:
>  API identifier in the XML file (hadoop-core 2.6.0) differs from the name of 
> the file 'Apache_Hadoop_Common_2.6.0.xml'
>   ...
>   [javadoc] JDiff: reading the new API in from file 
> '/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/target/site/jdiff/xml/Apache_Hadoop_Common_2.8.0-SNAPSHOT.xml'...Warning:
>  incorrectly formatted @link in text: Options to be used by the \{@link 
> Find\} command and its \{@link Expression\}s.
>   
>   [javadoc] Error: duplicate comment id: 
> org.apache.hadoop.metrics2.MetricsSystem.register_changed(java.lang.String, 
> java.lang.String, T)
> {quote}
> A link to the comment by Li lu is [here| 
> https://issues.apache.org/jira/browse/HADOOP-11776?focusedCommentId=14391868=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14391868].





[jira] [Commented] (HADOOP-12457) [JDK8] Fix a failure of compiling common by javadoc

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975552#comment-14975552
 ] 

Tsuyoshi Ozawa commented on HADOOP-12457:
-

[~gtCarrera9] Opened HADOOP-12516. Could you give us the details of the problem 
there?

> [JDK8] Fix a failure of compiling common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
> Fix For: 2.8.0
>
> Attachments: HADOOP-12457.00.patch, HADOOP-12457.01.patch, 
> HADOOP-12457.02.patch
>
>
> Delete.java and Server.java cannot be compiled due to "unmappable character 
> for encoding ASCII" errors. 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}





[jira] [Updated] (HADOOP-12457) [JDK8] Fix a failure of compiling common by javadoc

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12457:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~ajisakaa] for your contribution 
and thanks [~ste...@apache.org]  for your review.

> [JDK8] Fix a failure of compiling common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
> Fix For: 2.8.0
>
> Attachments: HADOOP-12457.00.patch, HADOOP-12457.01.patch, 
> HADOOP-12457.02.patch
>
>
> Delete.java and Server.java cannot be compiled due to "unmappable character 
> for encoding ASCII" errors. 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}





[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974302#comment-14974302
 ] 

Tsuyoshi Ozawa commented on HADOOP-9613:


{quote}
[ERROR] 
/testptch/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-hs/src/test/java/org/apache/hadoop/mapreduce/v2/hs/webapp/TestHsWebServicesJobsQuery.java:[92,5]
 cannot find symbol
{quote}

This problem happens when a new class, in this case GuiceServletConfig, is 
added in a separate jar but the test jar cannot find it. Should we create a 
separate JIRA to deal with this problem?

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running mvn-rpmbuild against 
> system dependencies on Fedora 18.
> The existing version is 1.8, which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12457) [JDK8] Fix compilation of common by javadoc

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974307#comment-14974307
 ] 

Tsuyoshi Ozawa commented on HADOOP-12457:
-

[~ajisakaa] On my local machine, another failure appeared. Could you fix this one as well?

{quote}
  [javadoc] 
/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java:182:
 error: unmappable character for encoding ASCII
  [javadoc]  * The former is resolved to ???default??? if 
${NAME} environment variable is undefined
{quote}

> [JDK8] Fix compilation of common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
> Attachments: HADOOP-12457.00.patch, HADOOP-12457.01.patch
>
>
> Delete.java and Server.java cannot be compiled with "unmappable character for 
> encoding ASCII". 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12513) Dockerfile lacks initial `apt-get update`

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974246#comment-14974246
 ] 

Tsuyoshi Ozawa commented on HADOOP-12513:
-

+1, checking this in.

> Dockerfile lacks initial `apt-get update`
> -
>
> Key: HADOOP-12513
> URL: https://issues.apache.org/jira/browse/HADOOP-12513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akihiro Suda
>Priority: Trivial
> Attachments: HADOOP-12513.patch
>
>
> [Dockerfile|https://github.com/apache/hadoop/blob/1aa735c188a308ca608694546c595e3c51f38612/dev-support/docker/Dockerfile#l27]
>  executes {{apt-get install -y software-properties-common}} without an 
> initial {{apt-get update}}.
> This can fail depending on the local Docker build cache.
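The cache pitfall quoted above can be sketched as follows. These are hypothetical Dockerfile lines illustrating the conventional remedy, not the attached patch itself:

```dockerfile
# Fragile: if earlier layers come from the Docker build cache, this install
# may run against a stale or missing APT package index and fail.
#   RUN apt-get install -y software-properties-common

# Conventional remedy: refresh the index and install in a single RUN layer,
# so the two steps are always cached and invalidated together.
RUN apt-get update && apt-get install -y software-properties-common
```

Chaining both commands in one RUN also prevents a cached `apt-get update` layer from masking later index changes.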



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12457) [JDK8] Fix compilation of common by javadoc

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974351#comment-14974351
 ] 

Tsuyoshi Ozawa commented on HADOOP-12457:
-

My bashrc contains:
{code}
export LANG=C
export LC_ALL=C
{code}

Command: mvn package -Pdist,docs -DskipTests
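Under these settings the platform default charset becomes US-ASCII, which is why javadoc rejects the multi-byte characters. A minimal sketch of the failure mode follows; this is an illustration only (not the project's fix), with `iconv` standing in for javadoc's ASCII decoder:

```shell
# The en dashes in "68-95-99.7" as written in Server.java are multi-byte
# UTF-8 sequences with no ASCII equivalent; decoding that source line as
# ASCII must fail, the condition javadoc reports as "unmappable character".
if printf '68–95–99.7 rule\n' | iconv -f UTF-8 -t ASCII >/dev/null 2>&1; then
  echo "converted cleanly"
else
  echo "unmappable character for encoding ASCII"
fi
```

Building under a UTF-8 locale, or passing an explicit `-encoding UTF-8` to javadoc, avoids the error regardless of `LANG`.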

> [JDK8] Fix compilation of common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
> Attachments: HADOOP-12457.00.patch, HADOOP-12457.01.patch
>
>
> Delete.java and Server.java cannot be compiled with "unmappable character for 
> encoding ASCII". 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12513) Dockerfile lacks initial 'apt-get update'

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12513:

Summary: Dockerfile lacks initial 'apt-get update'  (was: Dockerfile lacks 
initial `apt-get update`)

> Dockerfile lacks initial 'apt-get update'
> -
>
> Key: HADOOP-12513
> URL: https://issues.apache.org/jira/browse/HADOOP-12513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akihiro Suda
>Priority: Trivial
> Attachments: HADOOP-12513.patch
>
>
> [Dockerfile|https://github.com/apache/hadoop/blob/1aa735c188a308ca608694546c595e3c51f38612/dev-support/docker/Dockerfile#l27]
>  executes {{apt-get install -y software-properties-common}} without an 
> initial {{apt-get update}}.
> This can fail depending on the local Docker build cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12513) Dockerfile lacks initial 'apt-get update'

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12513:

  Resolution: Fixed
Assignee: Akihiro Suda
Hadoop Flags: Reviewed
   Fix Version/s: 2.8.0
Target Version/s: 2.8.0
  Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks for your contribution, [~suda]!

> Dockerfile lacks initial 'apt-get update'
> -
>
> Key: HADOOP-12513
> URL: https://issues.apache.org/jira/browse/HADOOP-12513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akihiro Suda
>Assignee: Akihiro Suda
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-12513.patch
>
>
> [Dockerfile|https://github.com/apache/hadoop/blob/1aa735c188a308ca608694546c595e3c51f38612/dev-support/docker/Dockerfile#l27]
>  executes {{apt-get install -y software-properties-common}} without an 
> initial {{apt-get update}}.
> This can fail depending on the local Docker build cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

