[jira] [Updated] (HADOOP-12240) Fix tests requiring native library to be skipped in non-native profile

2015-07-15 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12240:
--
Status: Patch Available  (was: Open)

 Fix tests requiring native library to be skipped in non-native profile
 --

 Key: HADOOP-12240
 URL: https://issues.apache.org/jira/browse/HADOOP-12240
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HADOOP-12240.001.patch


 Three tests in TestSequenceFileAppend require the native library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12060) Address some issues related to ByteBuffer type input/output buffers for raw erasure coders

2015-07-15 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14629130#comment-14629130
 ] 

Kai Zheng commented on HADOOP-12060:


Hi [~jingzhao], 

I'm sorry for the very late response on this. I just updated the patch and hope all your 
above comments are addressed properly. Would you help review one more time? 
Please also advise whether we should include this in the merge. Thanks.

 Address some issues related to ByteBuffer type input/output buffers for raw 
 erasure coders
 --

 Key: HADOOP-12060
 URL: https://issues.apache.org/jira/browse/HADOOP-12060
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-12060-HDFS-7285-v1.patch, 
 HADOOP-12060-HDFS-7285-v2.patch


 In HDFS-8319 [~jingzhao] raised some issues about ByteBuffer type 
 input/output buffers for raw erasure coders:
 * Should support ByteBuffers originating from {{ByteBuffer#slice}} calls;
 * Should clearly specify in the Javadoc that mixing on-heap buffers and direct 
 buffers is not allowed, with the necessary checks ensuring the same type 
 of buffers is used.
 In the HDFS-8319 patch by [~jingzhao] there is some good refactoring code that 
 could be incorporated here.
 As discussed, opening this to address the issues separately.
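As a sketch of the second bullet (illustrative only, not the actual patch; the class and method names here are hypothetical), the buffer-type check the Javadoc change calls for can be expressed as a simple precondition:

```java
import java.nio.ByteBuffer;

class BufferTypeCheck {
    // All buffers handed to a raw coder must be either all on-heap or all
    // direct; mixing the two is rejected up front with a clear message.
    static void ensureSameBufferType(ByteBuffer[] buffers) {
        boolean direct = buffers[0].isDirect();
        for (ByteBuffer b : buffers) {
            if (b.isDirect() != direct) {
                throw new IllegalArgumentException(
                        "Mixing on-heap and direct buffers is not allowed");
            }
        }
    }
}
```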



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12229) Fix inconsistent subsection titles in filesystem.md

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14629230#comment-14629230
 ] 

Tsuyoshi Ozawa commented on HADOOP-12229:
-

[~iwasakims] thank you for updating.

{quote}
I removed the entry of isSymlink. I also removed inEncryptionZone, which does 
not exist in the current source code.
{quote}

I think we should not remove them, since they exist in FileStatus and the 
FileStatus class is part of the specification (it was added in HDFS-6843). 
Instead, should we pull the FileStatus items together by creating a FileStatus section?

 Fix inconsistent subsection titles in filesystem.md
 ---

 Key: HADOOP-12229
 URL: https://issues.apache.org/jira/browse/HADOOP-12229
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HADOOP-12229.001.patch, HADOOP-12229.002.patch, 
 HADOOP-12229.003.patch


 * Some API signatures do not have a return value.
 * Some API signatures have a {{FileSystem.}} prefix.
 * Some subsections have the wrong level.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10615) FileInputStream in JenkinsHash#main() is never closed

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14629201#comment-14629201
 ] 

Tsuyoshi Ozawa commented on HADOOP-10615:
-

+1

 FileInputStream in JenkinsHash#main() is never closed
 -

 Key: HADOOP-10615
 URL: https://issues.apache.org/jira/browse/HADOOP-10615
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Chen He
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-10615-2.patch, HADOOP-10615.003.patch, 
 HADOOP-10615.patch


 {code}
 FileInputStream in = new FileInputStream(args[0]);
 {code}
 The above FileInputStream is never closed when main exits.
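The conventional fix for this class of leak is try-with-resources, which closes the stream even if the hashing code throws. A minimal stand-alone sketch (the hash below is a toy stand-in, not the actual JenkinsHash algorithm):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

class HashFileDemo {
    // Read a file byte by byte and fold it into a hash; the stream is
    // closed automatically when the try block exits, normally or not.
    static long hashFile(String path) throws IOException {
        long h = 0;
        try (InputStream in = new FileInputStream(path)) {  // auto-closed
            int b;
            while ((b = in.read()) != -1) {
                h = h * 31 + (b & 0xff);  // toy stand-in for JenkinsHash
            }
        }
        return h;
    }
}
```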



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12181) Fix intermittent test failure of TestZKSignerSecretProvider

2015-07-15 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12181:
--
Summary: Fix intermittent test failure of TestZKSignerSecretProvider  (was: 
TestZKSignerSecretProvider intermittently fails)

 Fix intermittent test failure of TestZKSignerSecretProvider
 ---

 Key: HADOOP-12181
 URL: https://issues.apache.org/jira/browse/HADOOP-12181
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HADOOP-12181.001.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12060) Address some issues related to ByteBuffer type input/output buffers for raw erasure coders

2015-07-15 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12060:
---
Status: Patch Available  (was: In Progress)

Triggering the build for this.

 Address some issues related to ByteBuffer type input/output buffers for raw 
 erasure coders
 --

 Key: HADOOP-12060
 URL: https://issues.apache.org/jira/browse/HADOOP-12060
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-12060-HDFS-7285-v1.patch, 
 HADOOP-12060-HDFS-7285-v2.patch


 In HDFS-8319 [~jingzhao] raised some issues about ByteBuffer type 
 input/output buffers for raw erasure coders:
 * Should support ByteBuffers originating from {{ByteBuffer#slice}} calls;
 * Should clearly specify in the Javadoc that mixing on-heap buffers and direct 
 buffers is not allowed, with the necessary checks ensuring the same type 
 of buffers is used.
 In the HDFS-8319 patch by [~jingzhao] there is some good refactoring code that 
 could be incorporated here.
 As discussed, opening this to address the issues separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12240) Fix tests requiring native library to be skipped in non-native profile

2015-07-15 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HADOOP-12240:
-

 Summary: Fix tests requiring native library to be skipped in 
non-native profile
 Key: HADOOP-12240
 URL: https://issues.apache.org/jira/browse/HADOOP-12240
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12060) Address some issues related to ByteBuffer type input/output buffers for raw erasure coders

2015-07-15 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12060:
---
Attachment: HADOOP-12060-HDFS-7285-v2.patch

Updated the patch according to Jing's review comments.

 Address some issues related to ByteBuffer type input/output buffers for raw 
 erasure coders
 --

 Key: HADOOP-12060
 URL: https://issues.apache.org/jira/browse/HADOOP-12060
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-12060-HDFS-7285-v1.patch, 
 HADOOP-12060-HDFS-7285-v2.patch


 In HDFS-8319 [~jingzhao] raised some issues about ByteBuffer type 
 input/output buffers for raw erasure coders:
 * Should support ByteBuffers originating from {{ByteBuffer#slice}} calls;
 * Should clearly specify in the Javadoc that mixing on-heap buffers and direct 
 buffers is not allowed, with the necessary checks ensuring the same type 
 of buffers is used.
 In the HDFS-8319 patch by [~jingzhao] there is some good refactoring code that 
 could be incorporated here.
 As discussed, opening this to address the issues separately.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10615) FileInputStream in JenkinsHash#main() is never closed

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-10615:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~airbots] for your contribution 
and thanks [~ted_yu] for your report.

 FileInputStream in JenkinsHash#main() is never closed
 -

 Key: HADOOP-10615
 URL: https://issues.apache.org/jira/browse/HADOOP-10615
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Chen He
Priority: Minor
  Labels: BB2015-05-TBR
 Fix For: 2.8.0

 Attachments: HADOOP-10615-2.patch, HADOOP-10615.003.patch, 
 HADOOP-10615.patch


 {code}
 FileInputStream in = new FileInputStream(args[0]);
 {code}
 The above FileInputStream is not closed upon exit of main.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10615) FileInputStream in JenkinsHash#main() is never closed

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14629221#comment-14629221
 ] 

Hudson commented on HADOOP-10615:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8170 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8170/])
HADOOP-10615. FileInputStream in JenkinsHash#main() is never closed. 
Contributed by Chen He. (ozawa: rev 111e6a3fdf613767782817836c42810bf2bda5e8)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/hash/JenkinsHash.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 FileInputStream in JenkinsHash#main() is never closed
 -

 Key: HADOOP-10615
 URL: https://issues.apache.org/jira/browse/HADOOP-10615
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Chen He
Priority: Minor
  Labels: BB2015-05-TBR
 Fix For: 2.8.0

 Attachments: HADOOP-10615-2.patch, HADOOP-10615.003.patch, 
 HADOOP-10615.patch


 {code}
 FileInputStream in = new FileInputStream(args[0]);
 {code}
 The above FileInputStream is never closed when main exits.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12181) Fix intermittent test failure of TestZKSignerSecretProvider

2015-07-15 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12181:
--
Description: TestZKSignerSecretProvider waits for the condition by just 
sleeping {{rolloverFrequency + 2000}} milliseconds. Depending only on timing is 
fragile and makes the test unnecessarily long.
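The usual replacement for a fixed sleep is to poll the condition at a short interval with an upper bound. Hadoop's GenericTestUtils.waitFor has a similar shape; the stand-alone sketch below is illustrative, not the actual patch:

```java
import java.util.concurrent.TimeoutException;
import java.util.function.BooleanSupplier;

class WaitFor {
    // Poll `condition` every `intervalMs` until it holds or `timeoutMs`
    // elapses. The test finishes as soon as the condition is true instead
    // of always paying the worst-case sleep.
    static void waitFor(BooleanSupplier condition, long intervalMs, long timeoutMs)
            throws InterruptedException, TimeoutException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new TimeoutException("condition not met within " + timeoutMs + " ms");
            }
            Thread.sleep(intervalMs);
        }
    }
}
```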

 Fix intermittent test failure of TestZKSignerSecretProvider
 ---

 Key: HADOOP-12181
 URL: https://issues.apache.org/jira/browse/HADOOP-12181
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HADOOP-12181.001.patch


 TestZKSignerSecretProvider waits for the condition by just sleeping 
 {{rolloverFrequency + 2000}} milliseconds. Depending only on timing is 
 fragile and makes the test unnecessarily long.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12240) Fix tests requiring native library to be skipped in non-native profile

2015-07-15 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12240:
--
Attachment: HADOOP-12240.001.patch

 Fix tests requiring native library to be skipped in non-native profile
 --

 Key: HADOOP-12240
 URL: https://issues.apache.org/jira/browse/HADOOP-12240
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HADOOP-12240.001.patch


 Three tests in TestSequenceFileAppend require the native library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12240) Fix tests requiring native library to be skipped in non-native profile

2015-07-15 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12240:
--
Description: Three tests in TestSequenceFileAppend require the native library.

 Fix tests requiring native library to be skipped in non-native profile
 --

 Key: HADOOP-12240
 URL: https://issues.apache.org/jira/browse/HADOOP-12240
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor

 Three tests in TestSequenceFileAppend require the native library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12240) Fix tests requiring native library to be skipped in non-native profile

2015-07-15 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14629244#comment-14629244
 ] 

Masatake Iwasaki commented on HADOOP-12240:
---

{noformat}
Running org.apache.hadoop.io.TestSequenceFileAppend
Tests run: 4, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 1.203 sec <<< FAILURE! - in org.apache.hadoop.io.TestSequenceFileAppend
testAppendBlockCompression(org.apache.hadoop.io.TestSequenceFileAppend)  Time elapsed: 0.118 sec  <<< ERROR!
java.lang.IllegalArgumentException: SequenceFile doesn't work with GzipCodec without native-hadoop code!
	at org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:1152)
	at org.apache.hadoop.io.SequenceFile$BlockCompressWriter.init(SequenceFile.java:1511)
	at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:277)
	at org.apache.hadoop.io.TestSequenceFileAppend.testAppendBlockCompression(TestSequenceFileAppend.java:182)

testAppendSort(org.apache.hadoop.io.TestSequenceFileAppend)  Time elapsed: 0.011 sec  <<< ERROR!
java.lang.IllegalArgumentException: SequenceFile doesn't work with GzipCodec without native-hadoop code!
	at org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:1152)
	at org.apache.hadoop.io.SequenceFile$BlockCompressWriter.init(SequenceFile.java:1511)
	at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:277)
	at org.apache.hadoop.io.TestSequenceFileAppend.testAppendSort(TestSequenceFileAppend.java:261)

testAppendRecordCompression(org.apache.hadoop.io.TestSequenceFileAppend)  Time elapsed: 0.006 sec  <<< ERROR!
java.lang.IllegalArgumentException: SequenceFile doesn't work with GzipCodec without native-hadoop code!
	at org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:1152)
	at org.apache.hadoop.io.SequenceFile$RecordCompressWriter.init(SequenceFile.java:1441)
	at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:275)
	at org.apache.hadoop.io.TestSequenceFileAppend.testAppendRecordCompression(TestSequenceFileAppend.java:149)
{noformat}
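The fix under discussion is to skip, rather than fail, a test whose precondition (a loaded native library) does not hold. JUnit expresses this with Assume.assumeTrue; the self-contained sketch below mimics that mechanism so the idea is runnable without JUnit on the classpath (the nativeLoaded() stub stands in for Hadoop's NativeCodeLoader.isNativeCodeLoaded()):

```java
class NativeSkipDemo {
    // JUnit's Assume.assumeTrue(...) throws a special exception that the
    // runner reports as "skipped" rather than "failed". We model that here.
    static class SkippedException extends RuntimeException {}

    static void assumeTrue(boolean condition) {
        if (!condition) throw new SkippedException();  // runner: skipped, not ERROR
    }

    // Stand-in for NativeCodeLoader.isNativeCodeLoaded(); false models the
    // non-native build profile from the surefire output above.
    static boolean nativeLoaded() {
        return false;
    }

    static String runTestAppendBlockCompression() {
        try {
            assumeTrue(nativeLoaded());  // guard before touching GzipCodec
            return "ran";
        } catch (SkippedException e) {
            return "skipped";
        }
    }
}
```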


 Fix tests requiring native library to be skipped in non-native profile
 --

 Key: HADOOP-12240
 URL: https://issues.apache.org/jira/browse/HADOOP-12240
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor

 Three tests in TestSequenceFileAppend require the native library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11343) Overflow is not properly handled in calculating final iv for AES CTR

2015-07-15 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-11343:
-
Labels: 2.6.1-candidate  (was: )

 Overflow is not properly handled in calculating final iv for AES CTR
 

 Key: HADOOP-11343
 URL: https://issues.apache.org/jira/browse/HADOOP-11343
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Jerry Chen
Assignee: Jerry Chen
Priority: Blocker
  Labels: 2.6.1-candidate
 Fix For: 2.7.0

 Attachments: HADOOP-11343.001.patch, HADOOP-11343.002.patch, 
 HADOOP-11343.003.patch, HADOOP-11343.patch


 In {{AesCtrCryptoCodec#calculateIV}}, the initial IV is a randomly generated 
 16-byte array:
 final byte[] iv = new byte[cc.getCipherSuite().getAlgorithmBlockSize()];
 cc.generateSecureRandom(iv);
 The following calculation of iv and counter in an 8-byte (64-bit) space 
 can easily overflow, and the overflow gets lost. The result is that a 
 128-bit data block is encrypted with a wrong counter and cannot be 
 decrypted by standard AES-CTR.
 {code}
 /**
  * The IV is produced by adding the initial IV to the counter. IV length 
  * should be the same as {@link #AES_BLOCK_SIZE}
  */
 @Override
 public void calculateIV(byte[] initIV, long counter, byte[] IV) {
   Preconditions.checkArgument(initIV.length == AES_BLOCK_SIZE);
   Preconditions.checkArgument(IV.length == AES_BLOCK_SIZE);
 
   System.arraycopy(initIV, 0, IV, 0, CTR_OFFSET);
   long l = 0;
   for (int i = 0; i < 8; i++) {
     l = ((l << 8) | (initIV[CTR_OFFSET + i] & 0xff));
   }
   l += counter;
   IV[CTR_OFFSET + 0] = (byte) (l >>> 56);
   IV[CTR_OFFSET + 1] = (byte) (l >>> 48);
   IV[CTR_OFFSET + 2] = (byte) (l >>> 40);
   IV[CTR_OFFSET + 3] = (byte) (l >>> 32);
   IV[CTR_OFFSET + 4] = (byte) (l >>> 24);
   IV[CTR_OFFSET + 5] = (byte) (l >>> 16);
   IV[CTR_OFFSET + 6] = (byte) (l >>> 8);
   IV[CTR_OFFSET + 7] = (byte) (l);
 }
 {code}
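One way to see the correct behavior (a sketch for illustration, not the actual HADOOP-11343 patch) is to treat the whole 128-bit IV as an unsigned integer and add the counter, so a carry out of the low 64 bits propagates into the high bytes instead of being discarded:

```java
import java.math.BigInteger;

class CtrIvDemo {
    static final int AES_BLOCK_SIZE = 16;

    // Compute initIV + counter as an unsigned 128-bit addition, wrapping
    // mod 2^128, so no carry is ever lost. BigInteger(1, bytes) reads the
    // array as an unsigned big-endian value.
    static byte[] calculateIV(byte[] initIV, long counter) {
        BigInteger sum = new BigInteger(1, initIV)
                .add(BigInteger.valueOf(counter))
                .mod(BigInteger.ONE.shiftLeft(128));  // wrap mod 2^128
        byte[] out = new byte[AES_BLOCK_SIZE];
        byte[] raw = sum.toByteArray();  // may be shorter, or longer via a sign byte
        int copy = Math.min(raw.length, AES_BLOCK_SIZE);
        System.arraycopy(raw, raw.length - copy, out, AES_BLOCK_SIZE - copy, copy);
        return out;
    }
}
```

With an all-0xff IV and counter 1, the 64-bit arithmetic above loses the carry, while this version correctly wraps the entire block to zero.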



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11710) Make CryptoOutputStream behave like DFSOutputStream wrt synchronization

2015-07-15 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-11710:
-
Labels: 2.6.1-candidate  (was: )

 Make CryptoOutputStream behave like DFSOutputStream wrt synchronization
 ---

 Key: HADOOP-11710
 URL: https://issues.apache.org/jira/browse/HADOOP-11710
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 2.6.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Critical
  Labels: 2.6.1-candidate
 Fix For: 2.7.0

 Attachments: HADOOP-11710.1.patch.txt, HADOOP-11710.2.patch.txt, 
 HADOOP-11710.3.patch.txt


 Per the discussion on the parent, as an intermediate solution, make CryptoOutputStream 
 behave like DFSOutputStream.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11674) oneByteBuf in CryptoInputStream and CryptoOutputStream should be non static

2015-07-15 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-11674:
-
Labels: 2.6.1-candidate  (was: )

 oneByteBuf in CryptoInputStream and CryptoOutputStream should be non static
 ---

 Key: HADOOP-11674
 URL: https://issues.apache.org/jira/browse/HADOOP-11674
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.6.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Critical
  Labels: 2.6.1-candidate
 Fix For: 2.7.0

 Attachments: HADOOP-11674.1.patch


 A common optimization in the io classes for Input/Output Streams is to save a 
 single length-1 byte array to use in single byte read/write calls.
 CryptoInputStream and CryptoOutputStream both attempt to follow this practice 
 but mistakenly mark the array as static. That means that only a single 
 instance of each can be present in a JVM safely.
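The bug and its fix can be shown in a few lines (an illustrative sketch, not the CryptoInputStream/CryptoOutputStream code itself): a static scratch buffer is shared by every stream instance in the JVM, so concurrent single-byte reads and writes clobber each other, whereas an instance field gives each stream its own.

```java
class SingleByteDemo {
    // Fix from HADOOP-11674: per-instance, NOT static. Each stream gets its
    // own length-1 scratch array for single-byte read()/write() calls.
    private final byte[] oneByteBuf = new byte[1];

    byte[] scratch() { return oneByteBuf; }
}
```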



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12235) hadoop-openstack junit & mockito dependencies should be provided

2015-07-15 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12235:
---

 Summary: hadoop-openstack junit & mockito dependencies should be 
provided
 Key: HADOOP-12235
 URL: https://issues.apache.org/jira/browse/HADOOP-12235
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, fs/swift
Affects Versions: 2.6.0
Reporter: Steve Loughran
Priority: Minor


The scope for JUnit & Mockito in hadoop-openstack is compile, which means they 
end up on the downstream classpath unless excluded.

They should be provided, which was the original intent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12017) Hadoop archives command should use configurable replication factor when closing

2015-07-15 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14627668#comment-14627668
 ] 

Bibin A Chundatt commented on HADOOP-12017:
---

Hi [~vinayrpet], I updated the patch as per your comments. Please review.

 Hadoop archives command should use configurable replication factor when 
 closing
 ---

 Key: HADOOP-12017
 URL: https://issues.apache.org/jira/browse/HADOOP-12017
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Bibin A Chundatt
 Attachments: 0002-HADOOP-12017.patch, 0003-HADOOP-12017.patch, 
 0003-HADOOP-12017.patch, 0004-HADOOP-12017.patch, 0005-HADOOP-12017.patch, 
 0006-HADOOP-12017.patch, 0007-HADOOP-12017.patch, 0008-HADOOP-12017.patch


 {{HadoopArchives#HArchivesReducer#close}} uses a hard-coded replication factor. 
 It should use {{repl}} instead, which is parsed from the command-line parameters.
 {code}
   // try increasing the replication 
   fs.setReplication(index, (short) 5);
   fs.setReplication(masterIndex, (short) 5);
 }
 {code}
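The intended change is mechanical: the close() path should pass through the configured replication rather than the literal 5. A minimal sketch (FileSystemLike is a hypothetical stand-in for org.apache.hadoop.fs.FileSystem, used only to keep the example self-contained):

```java
class HarReplication {
    // Stand-in for the subset of FileSystem used by HArchivesReducer#close.
    interface FileSystemLike {
        void setReplication(String path, short replication);
    }

    // Use the replication factor parsed from the command line ("repl")
    // instead of the hard-coded (short) 5 the issue describes.
    static void closeIndexes(FileSystemLike fs, short repl) {
        fs.setReplication("index", repl);        // was: (short) 5
        fs.setReplication("masterIndex", repl);  // was: (short) 5
    }
}
```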



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10615) FileInputStream in JenkinsHash#main() is never closed

2015-07-15 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628351#comment-14628351
 ] 

Chen He commented on HADOOP-10615:
--

Sure, thank you for the suggestion, [~ozawa].

 FileInputStream in JenkinsHash#main() is never closed
 -

 Key: HADOOP-10615
 URL: https://issues.apache.org/jira/browse/HADOOP-10615
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Chen He
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-10615-2.patch, HADOOP-10615.patch


 {code}
 FileInputStream in = new FileInputStream(args[0]);
 {code}
 The above FileInputStream is not closed upon exit of main.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for the native support

2015-07-15 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628449#comment-14628449
 ] 

Colin Patrick McCabe commented on HADOOP-11887:
---

bq. I totally got your concern. I'm using the general name erasurecode instead 
of isal directly because I don't want the overall work to couple with ISA-L too 
tightly. In the future, other native libraries like Jerasure, or even hardware-based 
ones, could also be supported without too much change. I'm thinking that 
the native APIs defined in erasure_code.h should be general enough that other 
native libraries could be easily mapped to them, so that when building, other 
libraries could also be passed in via the mentioned options. Note that 
require.erasurecode is used to enable it; if enabled, erasurecode.prefix 
should be specified to provide the library location. If not enabled (the default 
for now), the build should proceed as normal, and the result won't contain any 
erasure-code-related symbols. The logic is similar to existing code for the 
snappy library.

I think you are confusing two different things: how to configure ISA-L, and 
supporting multiple different erasure encoding libraries.

ISA-L configuration includes:
* Where to find it (\-Disal.prefix, \-Disal.lib)
* Whether to bundle it (\-Dbundle.isal)
* Whether to fail the build if it is not found (\-Drequire.isal)

These things should have ISA-L in the name since they pertain only to that 
library, and not to any other libraries.  Naming them erasurecode rather than 
isal will actually make it hard to support more than one erasure encoding 
library in the future, since we would only have one set of configuration knobs 
for both libraries, whereas we would need at least two.

If you want to support multiple erasure encoding libraries, you will need some 
kind of codec interface.  This is in addition to however you would configure 
the other libraries, not in replacement of it.

In any case, I think it would be unwise to try to write a plugin interface 
until we have support for at least one other erasure encoding library.  Let's 
keep the scope of this JIRA focused just on ISA-L.  If people want to come back 
later and add more libraries, they certainly can.

bq. You found another place I need to change. Yes, I need to add an entry for 
erasure code in the tool. The question here is whether it can serve the 
purpose of the new tool, because executing hadoop checknative may 
need some configuration or tweaking to make it work, while the new tool can run 
directly as soon as it's built, and so can be used cleanly in Maven unit tests. I 
understand that introducing a new tool just for ONE native test may be too heavy; 
if you agree, maybe we could go simple: if no native library is available, the 
native test program could just exit with a warning message? Do we need more 
native tests in the future anyway? If so, checking with the new tool may sound 
more reasonable.

Please don't add a new tool just for this.  Add support to {{hadoop 
checknative}}.  If hadoop checknative need\[s\] some configuration or tweak 
to make it work, then the admin should know that their libraries are not being 
properly found.  This is important information.

Thanks

 Introduce Intel ISA-L erasure coding library for the native support
 ---

 Key: HADOOP-11887
 URL: https://issues.apache.org/jira/browse/HADOOP-11887
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11887-v1.patch, HADOOP-11887-v2.patch, 
 HADOOP-11887-v3.patch, HADOOP-11887-v4.patch


 This is to introduce the Intel ISA-L erasure coding library for the native 
 support, via a dynamic loading mechanism (a dynamic module, like *.so on *nix 
 and *.dll on Windows).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10615) FileInputStream in JenkinsHash#main() is never closed

2015-07-15 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-10615:
-
Attachment: HADOOP-10615.003.patch

Patch updated.

 FileInputStream in JenkinsHash#main() is never closed
 -

 Key: HADOOP-10615
 URL: https://issues.apache.org/jira/browse/HADOOP-10615
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Chen He
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-10615-2.patch, HADOOP-10615.003.patch, 
 HADOOP-10615.patch


 {code}
 FileInputStream in = new FileInputStream(args[0]);
 {code}
 The above FileInputStream is not closed upon exit of main.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12239) StorageException complaining "no lease ID" when updating FolderLastModifiedTime in WASB

2015-07-15 Thread Duo Xu (JIRA)
Duo Xu created HADOOP-12239:
---

 Summary: StorageException complaining "no lease ID" when updating 
FolderLastModifiedTime in WASB
 Key: HADOOP-12239
 URL: https://issues.apache.org/jira/browse/HADOOP-12239
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.7.0
Reporter: Duo Xu
Assignee: Duo Xu


This is a similar issue to HADOOP-11523 and HADOOP-12089, which I found in a 
customer's HBase cluster logs, but the code involved is in a different place.
{code}
2015-07-09 13:38:57,388 INFO org.apache.hadoop.hbase.master.SplitLogManager: dead splitlog workers [workernode3.skypedataprdhbaeus00.b6.internal.cloudapp.net,60020,1436448555180]
2015-07-09 13:38:57,466 ERROR org.apache.hadoop.hbase.executor.EventHandler: Caught throwable while processing event M_SERVER_SHUTDOWN
java.io.IOException: failed log splitting for workernode12.skypedataprdhbaeus00.b6.internal.cloudapp.net,60020,1436448566374, will retry
	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.resubmit(ServerShutdownHandler.java:343)
	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:211)
	at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Unable to write RenamePending file for folder rename from hbase/WALs/workernode12.skypedataprdhbaeus00.b6.internal.cloudapp.net,60020,1436448566374 to hbase/WALs/workernode12.skypedataprdhbaeus00.b6.internal.cloudapp.net,60020,1436448566374-splitting
	at org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.writeFile(NativeAzureFileSystem.java:258)
	at org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2110)
	at org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1998)
	at org.apache.hadoop.hbase.master.MasterFileSystem.getLogDirs(MasterFileSystem.java:325)
	at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:412)
	at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:390)
	at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:288)
	at org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:204)
	... 4 more
Caused by: org.apache.hadoop.fs.azure.AzureException: com.microsoft.azure.storage.StorageException: There is currently a lease on the blob and no lease ID was specified in the request.
	at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2598)
	at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2609)
	at org.apache.hadoop.fs.azure.NativeAzureFileSystem.create(NativeAzureFileSystem.java:1366)
	at org.apache.hadoop.fs.azure.NativeAzureFileSystem.create(NativeAzureFileSystem.java:1195)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775)
	at org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.writeFile(NativeAzureFileSystem.java:255)
	... 11 more
Caused by: com.microsoft.azure.storage.StorageException: There is currently a lease on the blob and no lease ID was specified in the request.
	at com.microsoft.azure.storage.StorageException.translateException(StorageException.java:89)
	at com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:307)
	at com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:182)
	at com.microsoft.azure.storage.blob.CloudBlob.uploadProperties(CloudBlob.java:2892)
	at org.apache.hadoop.fs.azure.StorageInterfaceImpl$CloudBlobWrapperImpl.uploadProperties(StorageInterfaceImpl.java:372)
	at org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2593)
	... 19 more
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12234) Web UI Framable Page

2015-07-15 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628538#comment-14628538
 ] 

Haohui Mai commented on HADOOP-12234:
-

Can you specify the concrete attack scenario that you're defending against?

I don't think it makes sense to make it available through a filter due to the 
varying requirements of the different projects. A better approach is to change 
the HTML code to ensure that the UI runs in the top frame.

 Web UI Framable Page
 

 Key: HADOOP-12234
 URL: https://issues.apache.org/jira/browse/HADOOP-12234
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Apekshit Sharma
Assignee: Apekshit Sharma
 Attachments: HADOOP-12234.patch


 The web UIs do not include the X-Frame-Options header to prevent the pages 
 from being framed from another site.  
 Reference:
 https://www.owasp.org/index.php/Clickjacking
 https://www.owasp.org/index.php/Clickjacking_Defense_Cheat_Sheet
 https://developer.mozilla.org/en-US/docs/Web/HTTP/X-Frame-Options
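Neither approach is shown concretely in the thread. As a language-neutral sketch of the header-based defense only (illustrative WSGI-style middleware, not the attached Java patch and not Hadoop's servlet stack):

```python
def frame_guard(app, policy="SAMEORIGIN"):
    """WSGI middleware that stamps X-Frame-Options on every response.

    Illustrative sketch of the clickjacking defense discussed in
    HADOOP-12234; the real patch targets Hadoop's Java web UIs.
    """
    def wrapped(environ, start_response):
        def start(status, headers, exc_info=None):
            # Drop any existing header, then add the configured policy.
            headers = [h for h in headers if h[0].lower() != "x-frame-options"]
            headers.append(("X-Frame-Options", policy))
            return start_response(status, headers, exc_info)
        return app(environ, start)
    return wrapped
```

A browser honoring the header then refuses to render the page inside a frame served from another origin, which is the attack the OWASP references above describe.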





[jira] [Commented] (HADOOP-12153) ByteBufferReadable doesn't declare @InterfaceAudience and @InterfaceStability

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14627882#comment-14627882
 ] 

Hudson commented on HADOOP-12153:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #987 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/987/])
HADOOP-12153. ByteBufferReadable doesn't declare @InterfaceAudience and 
@InterfaceStability. Contributed by Brahma Reddy Battula. (ozawa: rev 
cec1d43db026e66a9e84b5c3e8476dfd33f17ecb)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ByteBufferReadable.java


 ByteBufferReadable doesn't declare @InterfaceAudience and @InterfaceStability
 -

 Key: HADOOP-12153
 URL: https://issues.apache.org/jira/browse/HADOOP-12153
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.1
Reporter: Steve Loughran
Assignee: Brahma Reddy Battula
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-12153-002.patch, HADOOP-12153.patch


 {{org.apache.hadoop.fs.ByteBufferReadable}} doesn't set any 
 {{@InterfaceAudience}} attributes. Is it intended for public consumption? If 
 so, it should declare it.





[jira] [Commented] (HADOOP-12236) mvn site -Preleasedoc doesn't work behind proxy

2015-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14627918#comment-14627918
 ] 

Hadoop QA commented on HADOOP-12236:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   0m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| | |   0m 26s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12745426/HADOOP-12236.001.patch 
|
| Optional Tests |  |
| git revision | trunk / edcaae4 |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7282/console |


This message was automatically generated.

 mvn site -Preleasedoc doesn't work behind proxy
 ---

 Key: HADOOP-12236
 URL: https://issues.apache.org/jira/browse/HADOOP-12236
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12236.001.patch


 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 respect environment variables like $http_proxy or $https_proxy.
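The failure is that Python 2's plain urllib.urlopen ignores these environment variables. A hedged sketch of the kind of workaround involved (illustrative only; the attached patch modifies releasedocmaker.py itself and is not reproduced here):

```python
import os
import urllib.request

def env_proxies():
    """Collect $http_proxy / $https_proxy explicitly.

    Sketch only: the real fix targets Python 2's urllib inside
    releasedocmaker.py; these names are illustrative.
    """
    return {scheme: os.environ[var]
            for scheme, var in (("http", "http_proxy"),
                                ("https", "https_proxy"))
            if os.environ.get(var)}

def opener_with_env_proxies():
    # Route requests through whatever proxies the environment declares.
    return urllib.request.build_opener(
        urllib.request.ProxyHandler(env_proxies()))
```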





[jira] [Commented] (HADOOP-12236) mvn site -Preleasedoc doesn't work behind proxy

2015-07-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14627962#comment-14627962
 ] 

Allen Wittenauer commented on HADOOP-12236:
---

releasedocmaker was moved to Yetus. See the HADOOP-12111 branch for the latest 
version.

 mvn site -Preleasedoc doesn't work behind proxy
 ---

 Key: HADOOP-12236
 URL: https://issues.apache.org/jira/browse/HADOOP-12236
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12236.001.patch


 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 respect environment variables like $http_proxy or $https_proxy.





[jira] [Commented] (HADOOP-12232) Upgrade Tomcat dependency to 6.0.44.

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14627888#comment-14627888
 ] 

Hudson commented on HADOOP-12232:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #987 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/987/])
HADOOP-12232. Upgrade Tomcat dependency to 6.0.44. Contributed by Chris 
Nauroth. (cnauroth: rev 0a16ee60174b15e3df653bb107cb2d0c2d606330)
* hadoop-project/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


 Upgrade Tomcat dependency to 6.0.44.
 

 Key: HADOOP-12232
 URL: https://issues.apache.org/jira/browse/HADOOP-12232
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 2.7.2

 Attachments: HADOOP-12232.001.patch


 The Hadoop distro currently bundles Tomcat version 6.0.41 by default.  The 
 current Tomcat 6 version is 6.0.44, which includes a few incremental bug 
 fixes.  Let's update our default version so that our users get the latest bug 
 fixes.





[jira] [Updated] (HADOOP-12236) mvn site -Preleasedoc doesn't work behind proxy

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12236:

Description: releasedocmaker.py doesn't work behind a proxy because 
urllib.urlopen doesn't respect environment variables like $http_proxy or 
$https_proxy.  (was: releasedocmaker.py doesn't work behind a proxy because 
urllib.urlopen doesn't care http_proxy and https_proxy.)

 mvn site -Preleasedoc doesn't work behind proxy
 ---

 Key: HADOOP-12236
 URL: https://issues.apache.org/jira/browse/HADOOP-12236
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12236.001.patch


 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 respect environment variables like $http_proxy or $https_proxy.





[jira] [Created] (HADOOP-12236) mvn site -Preleasedoc doesn't work behind proxy

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-12236:
---

 Summary: mvn site -Preleasedoc doesn't work behind proxy
 Key: HADOOP-12236
 URL: https://issues.apache.org/jira/browse/HADOOP-12236
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa


releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
care http_proxy and https_proxy.





[jira] [Commented] (HADOOP-12077) Provide a multi-URI replication Inode for ViewFs

2015-07-15 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628644#comment-14628644
 ] 

Gera Shegalov commented on HADOOP-12077:


Since the FindBugs -1 does not actually report anything wrong, I consider 
patch 004 reviewable.

 Provide a multi-URI replication Inode for ViewFs
 ---

 Key: HADOOP-12077
 URL: https://issues.apache.org/jira/browse/HADOOP-12077
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Reporter: Gera Shegalov
Assignee: Gera Shegalov
 Attachments: HADOOP-12077.001.patch, HADOOP-12077.002.patch, 
 HADOOP-12077.003.patch, HADOOP-12077.004.patch


 This JIRA is to provide simple replication capabilities for applications 
 that maintain logically equivalent paths in multiple locations for caching or 
 failover (e.g., S3 and HDFS). We noticed a simple common HDFS usage pattern 
 in our applications. They host their data on some logical cluster C. There 
 are corresponding HDFS clusters in multiple datacenters. When the application 
 runs in DC1, it prefers to read from C in DC1, and the applications prefers 
 to failover to C in DC2 if the application is migrated to DC2 or when C in 
 DC1 is unavailable. New application data versions are created 
 periodically/relatively infrequently. 
 In order to address many common scenarios in a general fashion, and to avoid 
 unnecessary code duplication, we implement this functionality in ViewFs (our 
 default FileSystem spanning all clusters in all datacenters) in a project 
 code-named Nfly (N as in N datacenters). Currently each ViewFs Inode points 
 to a single URI via ChRootedFileSystem. Consequently, we introduce a new type 
 of links that points to a list of URIs that are each going to be wrapped in 
 ChRootedFileSystem. A typical usage: 
 /nfly/C/user-/DC1/C/user,/DC2/C/user,... This collection of 
 ChRootedFileSystem instances is fronted by the Nfly filesystem object that is 
 actually used for the mount point/Inode. The Nfly filesystem backs a single 
 logical path /nfly/C/user/user/path with multiple physical paths.
 The Nfly filesystem supports setting minReplication. As long as the number of 
 URIs on which an update has succeeded is greater than or equal to 
 minReplication, exceptions are only logged, not thrown. Each update 
 operation is currently executed serially (client-bandwidth-driven parallelism 
 will be added later). 
 A file create/write: 
 # Creates a temporary invisible _nfly_tmp_file in the intended chrooted 
 filesystem. 
 # Returns an FSDataOutputStream that wraps the output streams returned by step 1.
 # All writes are forwarded to each output stream.
 # On close of the stream created in step 2, all n streams are closed, and the 
 files are renamed from _nfly_tmp_file to file. All files receive the same 
 mtime, corresponding to the client system time as of the beginning of this step. 
 # If at least minReplication destinations have gone through steps 1-4 without 
 failures, the transaction is considered logically committed; otherwise a 
 best-effort cleanup of the temporary files is attempted.
 As for reads, we support a notion of locality similar to HDFS's /DC/rack/node. 
 We sort Inode URIs using NetworkTopology by their authorities. These are 
 typically host names in simple HDFS URIs. If the authority is missing, as is 
 the case with the local file:///, the local host name 
 (InetAddress.getLocalHost()) is assumed. This ensures that the local file 
 system is always the closest one to the reader in this approach. For our 
 Hadoop 2 hdfs URIs that are based on nameservice ids instead of hostnames, it 
 is very easy to adjust the topology script since our nameservice ids already 
 contain the datacenter. As for rack and node, we can simply output any string 
 such as /DC/rack-nsid/node-nsid, since we only care about datacenter locality 
 for such filesystem clients.
 There are 2 policies/additions to the read call path that make it more 
 expensive but improve the user experience:
 - readMostRecent - when this policy is enabled, Nfly first checks mtime for 
 the path under all URIs, sorts them from most recent to least recent. Nfly 
 then sorts the set of most recent URIs topologically in the same manner as 
 described above.
 - repairOnRead - when readMostRecent is enabled, Nfly already has to RPC all 
 underlying destinations. With repairOnRead, the Nfly filesystem would 
 additionally attempt to refresh destinations holding a missing or stale 
 version of the path, using the nearest available most recent destination. 





[jira] [Commented] (HADOOP-10615) FileInputStream in JenkinsHash#main() is never closed

2015-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628614#comment-14628614
 ] 

Hadoop QA commented on HADOOP-10615:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m 24s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 38s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 34s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  4s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 21s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 51s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m  9s | Tests passed in 
hadoop-common. |
| | |  60m 57s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12745497/HADOOP-10615.003.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / edcaae4 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7285/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7285/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7285/console |


This message was automatically generated.

 FileInputStream in JenkinsHash#main() is never closed
 -

 Key: HADOOP-10615
 URL: https://issues.apache.org/jira/browse/HADOOP-10615
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Chen He
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-10615-2.patch, HADOOP-10615.003.patch, 
 HADOOP-10615.patch


 {code}
 FileInputStream in = new FileInputStream(args[0]);
 {code}
 The above FileInputStream is not closed upon exit of main.





[jira] [Commented] (HADOOP-692) Rack-aware Replica Placement

2015-07-15 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628670#comment-14628670
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-692:


Do you really need four or more levels?  How about /CoreDC-DC/rack/host?

 Rack-aware Replica Placement
 

 Key: HADOOP-692
 URL: https://issues.apache.org/jira/browse/HADOOP-692
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.10.1
Reporter: Hairong Kuang
Assignee: Hairong Kuang
 Fix For: 0.11.0

 Attachments: Rack_aware_HDFS_proposal.pdf, rack.patch


 This issue assumes that HDFS runs on a cluster of computers that spread 
 across many racks. Communication between two nodes on different racks needs 
 to go through switches. Bandwidth in/out of a rack may be less than the total 
 bandwidth of machines in the rack. The purpose of rack-aware replica 
 placement is to improve data reliability, availability, and network bandwidth 
 utilization. The basic idea is that each data node determines to which rack 
 it belongs at startup time and notifies the name node of the rack id upon 
 registration. The name node maintains a rackid-to-datanode map and tries to 
 place replicas across racks.
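The rackid-to-datanode map and cross-rack placement can be sketched as follows (hypothetical, much-simplified policy, not the attached patch):

```python
from collections import defaultdict

def build_rack_map(registrations):
    """Map rack id -> list of datanodes, as reported at registration."""
    rack_map = defaultdict(list)
    for node, rack in registrations:
        rack_map[rack].append(node)
    return rack_map

def choose_replicas(rack_map, writer_rack, n=3):
    """Pick up to n replica nodes, spreading them across racks.

    Simplified sketch of the HADOOP-692 idea: first replica on the
    writer's rack when possible, remaining replicas on other racks.
    """
    chosen = []
    local = rack_map.get(writer_rack, [])
    if local:
        chosen.append(local[0])
    for rack, nodes in rack_map.items():
        if len(chosen) >= n:
            break
        if rack != writer_rack:
            chosen.append(nodes[0])
    return chosen[:n]
```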





[jira] [Work started] (HADOOP-12239) StorageException complaining no lease ID when updating FolderLastModifiedTime in WASB

2015-07-15 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-12239 started by Duo Xu.
---
 StorageException complaining  no lease ID when updating 
 FolderLastModifiedTime in WASB
 

 Key: HADOOP-12239
 URL: https://issues.apache.org/jira/browse/HADOOP-12239
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.7.0
Reporter: Duo Xu
Assignee: Duo Xu

 This is a similar issue as HADOOP-11523 and HADOOP-12089, which I found in a 
 customer's HBase cluster logs, but the piece of code is in a different place.
 {code}
 2015-07-09 13:38:57,388 INFO org.apache.hadoop.hbase.master.SplitLogManager: 
 dead splitlog workers 
 [workernode3.xxx.b6.internal.cloudapp.net,60020,1436448555180]
 2015-07-09 13:38:57,466 ERROR org.apache.hadoop.hbase.executor.EventHandler: 
 Caught throwable while processing event M_SERVER_SHUTDOWN
 java.io.IOException: failed log splitting for 
 workernode12.xxx.b6.internal.cloudapp.net,60020,1436448566374, will retry
   at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.resubmit(ServerShutdownHandler.java:343)
   at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:211)
   at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: java.io.IOException: Unable to write RenamePending file for folder 
 rename from 
 hbase/WALs/workernode12.xxx.b6.internal.cloudapp.net,60020,1436448566374 to 
 hbase/WALs/workernode12.xxx.b6.internal.cloudapp.net,60020,1436448566374-splitting
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.writeFile(NativeAzureFileSystem.java:258)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2110)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1998)
   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.getLogDirs(MasterFileSystem.java:325)
   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:412)
   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:390)
   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:288)
   at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:204)
   ... 4 more
 Caused by: org.apache.hadoop.fs.azure.AzureException: 
 com.microsoft.azure.storage.StorageException: There is currently a lease on 
 the blob and no lease ID was specified in the request.
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2598)
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2609)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.create(NativeAzureFileSystem.java:1366)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.create(NativeAzureFileSystem.java:1195)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.writeFile(NativeAzureFileSystem.java:255)
   ... 11 more
 Caused by: com.microsoft.azure.storage.StorageException: There is currently a 
 lease on the blob and no lease ID was specified in the request.
   at 
 com.microsoft.azure.storage.StorageException.translateException(StorageException.java:89)
   at 
 com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:307)
   at 
 com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:182)
   at 
 com.microsoft.azure.storage.blob.CloudBlob.uploadProperties(CloudBlob.java:2892)
   at 
 org.apache.hadoop.fs.azure.StorageInterfaceImpl$CloudBlobWrapperImpl.uploadProperties(StorageInterfaceImpl.java:372)
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2593)
   ... 19 more
 {code}





[jira] [Updated] (HADOOP-12239) StorageException complaining no lease ID when updating FolderLastModifiedTime in WASB

2015-07-15 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-12239:

Attachment: HADOOP-12239.01.patch

[~cnauroth]

I think this is the minimum change without touching too much code. Please take 
a look.

 StorageException complaining  no lease ID when updating 
 FolderLastModifiedTime in WASB
 

 Key: HADOOP-12239
 URL: https://issues.apache.org/jira/browse/HADOOP-12239
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.7.0
Reporter: Duo Xu
Assignee: Duo Xu
 Attachments: HADOOP-12239.01.patch


 This is a similar issue as HADOOP-11523 and HADOOP-12089, which I found in a 
 customer's HBase cluster logs, but the piece of code is in a different place.
 {code}
 2015-07-09 13:38:57,388 INFO org.apache.hadoop.hbase.master.SplitLogManager: 
 dead splitlog workers 
 [workernode3.xxx.b6.internal.cloudapp.net,60020,1436448555180]
 2015-07-09 13:38:57,466 ERROR org.apache.hadoop.hbase.executor.EventHandler: 
 Caught throwable while processing event M_SERVER_SHUTDOWN
 java.io.IOException: failed log splitting for 
 workernode12.xxx.b6.internal.cloudapp.net,60020,1436448566374, will retry
   at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.resubmit(ServerShutdownHandler.java:343)
   at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:211)
   at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: java.io.IOException: Unable to write RenamePending file for folder 
 rename from 
 hbase/WALs/workernode12.xxx.b6.internal.cloudapp.net,60020,1436448566374 to 
 hbase/WALs/workernode12.xxx.b6.internal.cloudapp.net,60020,1436448566374-splitting
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.writeFile(NativeAzureFileSystem.java:258)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2110)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1998)
   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.getLogDirs(MasterFileSystem.java:325)
   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:412)
   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:390)
   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:288)
   at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:204)
   ... 4 more
 Caused by: org.apache.hadoop.fs.azure.AzureException: 
 com.microsoft.azure.storage.StorageException: There is currently a lease on 
 the blob and no lease ID was specified in the request.
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2598)
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2609)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.create(NativeAzureFileSystem.java:1366)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.create(NativeAzureFileSystem.java:1195)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.writeFile(NativeAzureFileSystem.java:255)
   ... 11 more
 Caused by: com.microsoft.azure.storage.StorageException: There is currently a 
 lease on the blob and no lease ID was specified in the request.
   at 
 com.microsoft.azure.storage.StorageException.translateException(StorageException.java:89)
   at 
 com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:307)
   at 
 com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:182)
   at 
 com.microsoft.azure.storage.blob.CloudBlob.uploadProperties(CloudBlob.java:2892)
   at 
 org.apache.hadoop.fs.azure.StorageInterfaceImpl$CloudBlobWrapperImpl.uploadProperties(StorageInterfaceImpl.java:372)
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2593)
   ... 

[jira] [Updated] (HADOOP-12239) StorageException complaining no lease ID when updating FolderLastModifiedTime in WASB

2015-07-15 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-12239:

Status: Patch Available  (was: In Progress)

 StorageException complaining  no lease ID when updating 
 FolderLastModifiedTime in WASB
 

 Key: HADOOP-12239
 URL: https://issues.apache.org/jira/browse/HADOOP-12239
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.7.0
Reporter: Duo Xu
Assignee: Duo Xu
 Attachments: HADOOP-12239.01.patch


 This is a similar issue as HADOOP-11523 and HADOOP-12089, which I found in a 
 customer's HBase cluster logs, but the piece of code is in a different place.
 {code}
 2015-07-09 13:38:57,388 INFO org.apache.hadoop.hbase.master.SplitLogManager: 
 dead splitlog workers 
 [workernode3.xxx.b6.internal.cloudapp.net,60020,1436448555180]
 2015-07-09 13:38:57,466 ERROR org.apache.hadoop.hbase.executor.EventHandler: 
 Caught throwable while processing event M_SERVER_SHUTDOWN
 java.io.IOException: failed log splitting for 
 workernode12.xxx.b6.internal.cloudapp.net,60020,1436448566374, will retry
   at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.resubmit(ServerShutdownHandler.java:343)
   at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:211)
   at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: java.io.IOException: Unable to write RenamePending file for folder 
 rename from 
 hbase/WALs/workernode12.xxx.b6.internal.cloudapp.net,60020,1436448566374 to 
 hbase/WALs/workernode12.xxx.b6.internal.cloudapp.net,60020,1436448566374-splitting
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.writeFile(NativeAzureFileSystem.java:258)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2110)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1998)
   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.getLogDirs(MasterFileSystem.java:325)
   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:412)
   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:390)
   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:288)
   at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:204)
   ... 4 more
 Caused by: org.apache.hadoop.fs.azure.AzureException: 
 com.microsoft.azure.storage.StorageException: There is currently a lease on 
 the blob and no lease ID was specified in the request.
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2598)
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2609)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.create(NativeAzureFileSystem.java:1366)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.create(NativeAzureFileSystem.java:1195)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.writeFile(NativeAzureFileSystem.java:255)
   ... 11 more
 Caused by: com.microsoft.azure.storage.StorageException: There is currently a 
 lease on the blob and no lease ID was specified in the request.
   at 
 com.microsoft.azure.storage.StorageException.translateException(StorageException.java:89)
   at 
 com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:307)
   at 
 com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:182)
   at 
 com.microsoft.azure.storage.blob.CloudBlob.uploadProperties(CloudBlob.java:2892)
   at 
 org.apache.hadoop.fs.azure.StorageInterfaceImpl$CloudBlobWrapperImpl.uploadProperties(StorageInterfaceImpl.java:372)
   at 
 org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2593)
   ... 19 more
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12226) CHANGED_MODULES is wrong for ant

2015-07-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12226:
--
Attachment: HADOOP-12226.HADOOP-12111.00.patch

-00:
* this pulls in the UNION code from HADOOP-12130
* this rewrites how CHANGED_MODULES gets calculated to be much more build tool 
agnostic
* I've verified that it works for both ant and maven in expected ways using a 
pre- version of the unit test infrastructure
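A build-tool-agnostic way to compute changed modules, as described above, can be sketched by walking up from each changed file to the nearest directory containing a recognized build file. This is a hypothetical illustration, not the actual test-patch.sh logic; the function names and the marker-file list are assumptions:

```python
import os

# Marker files that identify a module root for each supported build tool.
BUILD_FILES = ("pom.xml", "build.xml")

def module_for(changed_file, root):
    """Walk upward from a changed file to the nearest module root."""
    d = os.path.dirname(os.path.join(root, changed_file))
    while len(d) >= len(root):
        if any(os.path.isfile(os.path.join(d, b)) for b in BUILD_FILES):
            return os.path.relpath(d, root)
        d = os.path.dirname(d)
    return "."

def changed_modules(changed_files, root):
    """Deduplicated, sorted module list for a set of changed files."""
    return sorted({module_for(f, root) for f in changed_files})
```

Because only the presence of a build file matters, the same walk works for both maven (`pom.xml`) and ant (`build.xml`) trees.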

 CHANGED_MODULES is wrong for ant
 

 Key: HADOOP-12226
 URL: https://issues.apache.org/jira/browse/HADOOP-12226
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12226.HADOOP-12111.00.patch


 CHANGED_MODULES assumes maven and will have incorrect results for ant.
 We may need to rethink how this and CHANGED_UNFILTERED_MODULES work.





[jira] [Updated] (HADOOP-12226) CHANGED_MODULES is wrong for ant

2015-07-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12226:
--
Affects Version/s: HADOOP-12111
   Status: Patch Available  (was: Open)

 CHANGED_MODULES is wrong for ant
 

 Key: HADOOP-12226
 URL: https://issues.apache.org/jira/browse/HADOOP-12226
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12226.HADOOP-12111.00.patch


 CHANGED_MODULES assumes maven and will have incorrect results for ant.
 We may need to rethink how this and CHANGED_UNFILTERED_MODULES work.





[jira] [Updated] (HADOOP-12239) StorageException complaining no lease ID when updating FolderLastModifiedTime in WASB

2015-07-15 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-12239:

Description: 
This is a similar issue as HADOOP-11523 and HADOOP-12089, which I found in a 
customer's HBase cluster logs, but the piece of code is in a different place.
{code}
2015-07-09 13:38:57,388 INFO org.apache.hadoop.hbase.master.SplitLogManager: 
dead splitlog workers 
[workernode3.xxx.b6.internal.cloudapp.net,60020,1436448555180]
2015-07-09 13:38:57,466 ERROR org.apache.hadoop.hbase.executor.EventHandler: 
Caught throwable while processing event M_SERVER_SHUTDOWN
java.io.IOException: failed log splitting for 
workernode12.xxx.b6.internal.cloudapp.net,60020,1436448566374, will retry
at 
org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.resubmit(ServerShutdownHandler.java:343)
at 
org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:211)
at 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Unable to write RenamePending file for folder 
rename from 
hbase/WALs/workernode12.skypedataprdhbaeus00.b6.internal.cloudapp.net,60020,1436448566374
 to 
hbase/WALs/workernode12.skypedataprdhbaeus00.b6.internal.cloudapp.net,60020,1436448566374-splitting
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.writeFile(NativeAzureFileSystem.java:258)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2110)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1998)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.getLogDirs(MasterFileSystem.java:325)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:412)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:390)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:288)
at 
org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:204)
... 4 more
Caused by: org.apache.hadoop.fs.azure.AzureException: 
com.microsoft.azure.storage.StorageException: There is currently a lease on the 
blob and no lease ID was specified in the request.
at 
org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2598)
at 
org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2609)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.create(NativeAzureFileSystem.java:1366)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.create(NativeAzureFileSystem.java:1195)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.writeFile(NativeAzureFileSystem.java:255)
... 11 more
Caused by: com.microsoft.azure.storage.StorageException: There is currently a 
lease on the blob and no lease ID was specified in the request.
at 
com.microsoft.azure.storage.StorageException.translateException(StorageException.java:89)
at 
com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:307)
at 
com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:182)
at 
com.microsoft.azure.storage.blob.CloudBlob.uploadProperties(CloudBlob.java:2892)
at 
org.apache.hadoop.fs.azure.StorageInterfaceImpl$CloudBlobWrapperImpl.uploadProperties(StorageInterfaceImpl.java:372)
at 
org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2593)
... 19 more
{code}

  was:
This is a similar issue as HADOOP-11523 and HADOOP-12089, which I found in a 
customer's HBase cluster logs, but the piece of code is in a different place.
{code}
2015-07-09 13:38:57,388 INFO org.apache.hadoop.hbase.master.SplitLogManager: 
dead splitlog workers 
[workernode3.skypedataprdhbaeus00.b6.internal.cloudapp.net,60020,1436448555180]
2015-07-09 13:38:57,466 ERROR org.apache.hadoop.hbase.executor.EventHandler: 
Caught throwable while processing event M_SERVER_SHUTDOWN
java.io.IOException: failed log splitting for 

[jira] [Updated] (HADOOP-12239) StorageException complaining no lease ID when updating FolderLastModifiedTime in WASB

2015-07-15 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-12239:

Description: 
This is a similar issue as HADOOP-11523 and HADOOP-12089, which I found in a 
customer's HBase cluster logs, but the piece of code is in a different place.
{code}
2015-07-09 13:38:57,388 INFO org.apache.hadoop.hbase.master.SplitLogManager: 
dead splitlog workers 
[workernode3.xxx.b6.internal.cloudapp.net,60020,1436448555180]
2015-07-09 13:38:57,466 ERROR org.apache.hadoop.hbase.executor.EventHandler: 
Caught throwable while processing event M_SERVER_SHUTDOWN
java.io.IOException: failed log splitting for 
workernode12.xxx.b6.internal.cloudapp.net,60020,1436448566374, will retry
at 
org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.resubmit(ServerShutdownHandler.java:343)
at 
org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:211)
at 
org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Unable to write RenamePending file for folder 
rename from 
hbase/WALs/workernode12.xxx.b6.internal.cloudapp.net,60020,1436448566374 to 
hbase/WALs/workernode12.xxx.b6.internal.cloudapp.net,60020,1436448566374-splitting
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.writeFile(NativeAzureFileSystem.java:258)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2110)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1998)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.getLogDirs(MasterFileSystem.java:325)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:412)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:390)
at 
org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:288)
at 
org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:204)
... 4 more
Caused by: org.apache.hadoop.fs.azure.AzureException: 
com.microsoft.azure.storage.StorageException: There is currently a lease on the 
blob and no lease ID was specified in the request.
at 
org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2598)
at 
org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2609)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.create(NativeAzureFileSystem.java:1366)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.create(NativeAzureFileSystem.java:1195)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775)
at 
org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.writeFile(NativeAzureFileSystem.java:255)
... 11 more
Caused by: com.microsoft.azure.storage.StorageException: There is currently a 
lease on the blob and no lease ID was specified in the request.
at 
com.microsoft.azure.storage.StorageException.translateException(StorageException.java:89)
at 
com.microsoft.azure.storage.core.StorageRequest.materializeException(StorageRequest.java:307)
at 
com.microsoft.azure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:182)
at 
com.microsoft.azure.storage.blob.CloudBlob.uploadProperties(CloudBlob.java:2892)
at 
org.apache.hadoop.fs.azure.StorageInterfaceImpl$CloudBlobWrapperImpl.uploadProperties(StorageInterfaceImpl.java:372)
at 
org.apache.hadoop.fs.azure.AzureNativeFileSystemStore.updateFolderLastModifiedTime(AzureNativeFileSystemStore.java:2593)
... 19 more
{code}

  was:
This is a similar issue as HADOOP-11523 and HADOOP-12089, which I found in a 
customer's HBase cluster logs, but the piece of code is in a different place.
{code}
2015-07-09 13:38:57,388 INFO org.apache.hadoop.hbase.master.SplitLogManager: 
dead splitlog workers 
[workernode3.xxx.b6.internal.cloudapp.net,60020,1436448555180]
2015-07-09 13:38:57,466 ERROR org.apache.hadoop.hbase.executor.EventHandler: 
Caught throwable while processing event M_SERVER_SHUTDOWN
java.io.IOException: failed log splitting for 
workernode12.xxx.b6.internal.cloudapp.net,60020,1436448566374, will retry
at 

[jira] [Comment Edited] (HADOOP-12226) CHANGED_MODULES is wrong for ant

2015-07-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628750#comment-14628750
 ] 

Allen Wittenauer edited comment on HADOOP-12226 at 7/15/15 9:16 PM:


-00:
* this pulls in the UNION code from HADOOP-12198
* this rewrites how CHANGED_MODULES gets calculated to be much more build tool 
agnostic
* I've verified that it works for both ant and maven in expected ways using a 
pre- version of the unit test infrastructure


was (Author: aw):
-00:
* this pulls in the UNION code from HADOOP-12130
* this rewrites how CHANGED_MODULES gets calculated to be much more build tool 
agnostic
* I've verified that it works for both ant and maven in expected ways using a 
pre- version of the unit test infrastructure

 CHANGED_MODULES is wrong for ant
 

 Key: HADOOP-12226
 URL: https://issues.apache.org/jira/browse/HADOOP-12226
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12226.HADOOP-12111.00.patch


 CHANGED_MODULES assumes maven and will have incorrect results for ant.
 We may need to rethink how this and CHANGED_UNFILTERED_MODULES work.





[jira] [Commented] (HADOOP-12198) patches that hit multiple modules may need to build at root

2015-07-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628752#comment-14628752
 ] 

Allen Wittenauer commented on HADOOP-12198:
---

Making HADOOP-12226 required for future versions of this patch.

 patches that hit multiple modules may need to build at root
 ---

 Key: HADOOP-12198
 URL: https://issues.apache.org/jira/browse/HADOOP-12198
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12198.HADOOP-12111.00.patch, 
 HADOOP-12198.HADOOP-12111.02.patch, HADOOP-12198.HADOOP-12111.03.patch, 
 HADOOP-12198.HADOOP-12111.03.patch, HADOOP-12198.HADOOP-12111.04.patch


 Patches that introduce dependencies on other modules (e.g., HADOOP-12180) 
 need to effectively be built at root. 
 There is a good chance this type of logic will need to be done on a 
 per-project basis. 
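The rule sketched in the description — falling back to a root build when a patch spans multiple modules — could be expressed as follows; the function name and the single-module shortcut are assumptions for illustration, not the committed logic:

```python
def build_root_for(changed_modules):
    # When a patch touches more than one module, cross-module
    # dependencies may only resolve when the build runs from the
    # project root, so fall back to "." in that case.
    if len(changed_modules) != 1:
        return "."
    return changed_modules[0]
```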





[jira] [Commented] (HADOOP-12226) CHANGED_MODULES is wrong for ant

2015-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628764#comment-14628764
 ] 

Hadoop QA commented on HADOOP-12226:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipping @author checks as test-patch.sh has been 
patched. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 9s 
{color} | {color:red} The applied patch generated 1 new shellcheck (v0.3.3) 
issues (total was 27, now 28). {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 27s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12745523/HADOOP-12226.HADOOP-12111.00.patch
 |
| git revision | HADOOP-12111 / 33f2feb |
| Optional Tests | asflicense shellcheck |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/dev-support-test/personality/hadoop.sh
 |
| Default Java | 1.7.0_55 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7288/artifact/patchprocess/diffpatchshellcheck.txt
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7288/console |


This message was automatically generated.

 CHANGED_MODULES is wrong for ant
 

 Key: HADOOP-12226
 URL: https://issues.apache.org/jira/browse/HADOOP-12226
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12226.HADOOP-12111.00.patch


 CHANGED_MODULES assumes maven and will have incorrect results for ant.
 We may need to rethink how this and CHANGED_UNFILTERED_MODULES work.





[jira] [Updated] (HADOOP-12121) smarter branch detection

2015-07-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12121:
--
Attachment: HADOOP-12121.HADOOP-12111.01.patch

-01:
* fixed a bug with jira.##(.text).branch format found via unit testing
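Branch detection from attachment names like the ones on this issue (e.g. `HADOOP-12121.HADOOP-12111.01.patch`) could be sketched with a regex along these lines; this is a hypothetical illustration and the real test-patch.sh handles more naming formats than this:

```python
import re

# Sketch: issue key, optional branch component, optional revision number.
PATCH_RE = re.compile(
    r"^[A-Z]+-\d+"                                  # JIRA issue key
    r"(?:\.(?P<branch>[A-Z]+-\d+|branch-[\w.]+))?"  # optional branch name
    r"(?:\.\d+)?"                                   # optional revision
    r"\.(?:patch|txt)$"
)

def detect_branch(filename, default="trunk"):
    """Return the branch encoded in a patch-file name, else the default."""
    m = PATCH_RE.match(filename)
    if m and m.group("branch"):
        return m.group("branch")
    return default
```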

 smarter branch detection
 

 Key: HADOOP-12121
 URL: https://issues.apache.org/jira/browse/HADOOP-12121
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12121.HADOOP-12111.01.patch, 
 HADOOP-12121.HADOOP-12111.patch


 We should make branch detection smarter so that it works on micro versions.





[jira] [Assigned] (HADOOP-12226) CHANGED_MODULES is wrong for ant

2015-07-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HADOOP-12226:
-

Assignee: Allen Wittenauer

 CHANGED_MODULES is wrong for ant
 

 Key: HADOOP-12226
 URL: https://issues.apache.org/jira/browse/HADOOP-12226
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer

 CHANGED_MODULES assumes maven and will have incorrect results for ant.
 We may need to rethink how this and CHANGED_UNFILTERED_MODULES work.





[jira] [Commented] (HADOOP-12121) smarter branch detection

2015-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628744#comment-14628744
 ] 

Hadoop QA commented on HADOOP-12121:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7287/console in case of 
problems.

 smarter branch detection
 

 Key: HADOOP-12121
 URL: https://issues.apache.org/jira/browse/HADOOP-12121
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12121.HADOOP-12111.01.patch, 
 HADOOP-12121.HADOOP-12111.patch


 We should make branch detection smarter so that it works on micro versions.





[jira] [Commented] (HADOOP-12121) smarter branch detection

2015-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628746#comment-14628746
 ] 

Hadoop QA commented on HADOOP-12121:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipping @author checks as test-patch.sh has been 
patched. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
9s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 38s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12745519/HADOOP-12121.HADOOP-12111.01.patch
 |
| git revision | HADOOP-12111 / 33f2feb |
| Optional Tests | asflicense shellcheck |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/dev-support-test/personality/hadoop.sh
 |
| Default Java | 1.7.0_55 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7287/console |


This message was automatically generated.

 smarter branch detection
 

 Key: HADOOP-12121
 URL: https://issues.apache.org/jira/browse/HADOOP-12121
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12121.HADOOP-12111.01.patch, 
 HADOOP-12121.HADOOP-12111.patch


 We should make branch detection smarter so that it works on micro versions.





[jira] [Commented] (HADOOP-12226) CHANGED_MODULES is wrong for ant

2015-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628762#comment-14628762
 ] 

Hadoop QA commented on HADOOP-12226:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7288/console in case of 
problems.

 CHANGED_MODULES is wrong for ant
 

 Key: HADOOP-12226
 URL: https://issues.apache.org/jira/browse/HADOOP-12226
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12226.HADOOP-12111.00.patch


 CHANGED_MODULES assumes maven and will have incorrect results for ant.
 We may need to rethink how this and CHANGED_UNFILTERED_MODULES work.





[jira] [Commented] (HADOOP-12235) hadoop-openstack junit & mockito dependencies should be provided

2015-07-15 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628963#comment-14628963
 ] 

Chris Douglas commented on HADOOP-12235:


+1 lgtm

 hadoop-openstack junit & mockito dependencies should be provided
 --

 Key: HADOOP-12235
 URL: https://issues.apache.org/jira/browse/HADOOP-12235
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, fs/swift
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Ted Yu
Priority: Minor
 Attachments: HADOOP-12235-v1.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 The scope for JUnit & mockito in hadoop-openstack is compile, which means 
 it ends up on the downstream classpath unless excluded.
 It should be provided, which was the original intent.
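The fix described — moving the test libraries to provided scope in hadoop-openstack's pom.xml — would look roughly like this sketch (the artifact ids shown are assumptions; versions are managed by the parent pom):

```
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-all</artifactId>
  <scope>provided</scope>
</dependency>
```

With provided scope the jars are available at compile and test time but are not propagated transitively to downstream consumers.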





[jira] [Commented] (HADOOP-12239) StorageException complaining no lease ID when updating FolderLastModifiedTime in WASB

2015-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628801#comment-14628801
 ] 

Hadoop QA commented on HADOOP-12239:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 43s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   8m 49s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 59s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 27s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 36s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   0m 52s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | tools/hadoop tests |   1m 15s | Tests passed in 
hadoop-azure. |
| | |  42m 34s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12745516/HADOOP-12239.01.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / edcaae4 |
| hadoop-azure test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7286/artifact/patchprocess/testrun_hadoop-azure.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7286/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7286/console |


This message was automatically generated.

 StorageException complaining no lease ID when updating 
 FolderLastModifiedTime in WASB
 

 Key: HADOOP-12239
 URL: https://issues.apache.org/jira/browse/HADOOP-12239
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.7.0
Reporter: Duo Xu
Assignee: Duo Xu
 Attachments: HADOOP-12239.01.patch


 This is a similar issue as HADOOP-11523 and HADOOP-12089, which I found in a 
 customer's HBase cluster logs, but the piece of code is in a different place.
 {code}
 2015-07-09 13:38:57,388 INFO org.apache.hadoop.hbase.master.SplitLogManager: 
 dead splitlog workers 
 [workernode3.xxx.b6.internal.cloudapp.net,60020,1436448555180]
 2015-07-09 13:38:57,466 ERROR org.apache.hadoop.hbase.executor.EventHandler: 
 Caught throwable while processing event M_SERVER_SHUTDOWN
 java.io.IOException: failed log splitting for 
 workernode12.xxx.b6.internal.cloudapp.net,60020,1436448566374, will retry
   at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.resubmit(ServerShutdownHandler.java:343)
   at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:211)
   at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
   at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
   at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
   at java.lang.Thread.run(Thread.java:745)
 Caused by: java.io.IOException: Unable to write RenamePending file for folder 
 rename from 
 hbase/WALs/workernode12.xxx.b6.internal.cloudapp.net,60020,1436448566374 to 
 hbase/WALs/workernode12.xxx.b6.internal.cloudapp.net,60020,1436448566374-splitting
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem$FolderRenamePending.writeFile(NativeAzureFileSystem.java:258)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.prepareAtomicFolderRename(NativeAzureFileSystem.java:2110)
   at 
 org.apache.hadoop.fs.azure.NativeAzureFileSystem.rename(NativeAzureFileSystem.java:1998)
   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.getLogDirs(MasterFileSystem.java:325)
   at 
 org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:412)
   at 
 

[jira] [Commented] (HADOOP-12153) ByteBufferReadable doesn't declare @InterfaceAudience and @InterfaceStability

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14627874#comment-14627874
 ] 

Hudson commented on HADOOP-12153:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #257 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/257/])
HADOOP-12153. ByteBufferReadable doesn't declare @InterfaceAudience and 
@InterfaceStability. Contributed by Brahma Reddy Battula. (ozawa: rev 
cec1d43db026e66a9e84b5c3e8476dfd33f17ecb)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ByteBufferReadable.java


 ByteBufferReadable doesn't declare @InterfaceAudience and @InterfaceStability
 -

 Key: HADOOP-12153
 URL: https://issues.apache.org/jira/browse/HADOOP-12153
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.1
Reporter: Steve Loughran
Assignee: Brahma Reddy Battula
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-12153-002.patch, HADOOP-12153.patch


 {{org.apache.hadoop.fs.ByteBufferReadable}} doesn't set any 
 {{@InterfaceAudience}} attributes. Is it intended for public consumption? If 
 so, it should declare it.





[jira] [Commented] (HADOOP-12232) Upgrade Tomcat dependency to 6.0.44.

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14627880#comment-14627880
 ] 

Hudson commented on HADOOP-12232:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #257 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/257/])
HADOOP-12232. Upgrade Tomcat dependency to 6.0.44. Contributed by Chris 
Nauroth. (cnauroth: rev 0a16ee60174b15e3df653bb107cb2d0c2d606330)
* hadoop-project/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


 Upgrade Tomcat dependency to 6.0.44.
 

 Key: HADOOP-12232
 URL: https://issues.apache.org/jira/browse/HADOOP-12232
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 2.7.2

 Attachments: HADOOP-12232.001.patch


 The Hadoop distro currently bundles Tomcat version 6.0.41 by default.  The 
 current Tomcat 6 version is 6.0.44, which includes a few incremental bug 
 fixes.  Let's update our default version so that our users get the latest bug 
 fixes.





[jira] [Updated] (HADOOP-12236) mvn site -Preleasedoc doesn't work behind proxy

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12236:

Status: Patch Available  (was: Open)

 mvn site -Preleasedoc doesn't work behind proxy
 ---

 Key: HADOOP-12236
 URL: https://issues.apache.org/jira/browse/HADOOP-12236
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12236.001.patch


 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
honor http_proxy and https_proxy.





[jira] [Updated] (HADOOP-12236) mvn site -Preleasedoc doesn't work behind proxy

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12236:

Attachment: HADOOP-12236.001.patch

Attaching a first patch to fix the problem.

 mvn site -Preleasedoc doesn't work behind proxy
 ---

 Key: HADOOP-12236
 URL: https://issues.apache.org/jira/browse/HADOOP-12236
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12236.001.patch


 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
honor http_proxy and https_proxy.





[jira] [Assigned] (HADOOP-12235) hadoop-openstack junit & mockito dependencies should be provided

2015-07-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu reassigned HADOOP-12235:
---

Assignee: Ted Yu

 hadoop-openstack junit & mockito dependencies should be provided
 --

 Key: HADOOP-12235
 URL: https://issues.apache.org/jira/browse/HADOOP-12235
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, fs/swift
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Ted Yu
Priority: Minor
 Attachments: HADOOP-12235-v1.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 The scope for JUnit & mockito in hadoop-openstack is "compile", which means 
 it ends up on the downstream classpath unless excluded.
 It should be "provided", which was the original intent.
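
For illustration, the fix amounts to a one-line scope change per dependency in 
hadoop-openstack/pom.xml. A sketch of the shape of the change (artifact IDs are 
illustrative and may not match the attached patch exactly; versions are managed 
in the parent POM):

```xml
<!-- In hadoop-openstack/pom.xml: mark test-only libraries as "provided"
     so they stay off the transitive classpath of downstream consumers. -->
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <scope>provided</scope>   <!-- was the default "compile" scope -->
</dependency>
<dependency>
  <groupId>org.mockito</groupId>
  <artifactId>mockito-all</artifactId>
  <scope>provided</scope>
</dependency>
```

With "provided" scope the jars are available at compile and test time but are 
not propagated to consumers that depend on hadoop-openstack.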





[jira] [Updated] (HADOOP-12235) hadoop-openstack junit & mockito dependencies should be provided

2015-07-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-12235:

Attachment: HADOOP-12235-v1.patch

 hadoop-openstack junit & mockito dependencies should be provided
 --

 Key: HADOOP-12235
 URL: https://issues.apache.org/jira/browse/HADOOP-12235
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, fs/swift
Affects Versions: 2.6.0
Reporter: Steve Loughran
Priority: Minor
 Attachments: HADOOP-12235-v1.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 The scope for JUnit & mockito in hadoop-openstack is "compile", which means 
 it ends up on the downstream classpath unless excluded.
 It should be "provided", which was the original intent.





[jira] [Commented] (HADOOP-12198) patches that hit multiple modules may need to build at root

2015-07-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628097#comment-14628097
 ] 

Allen Wittenauer commented on HADOOP-12198:
---

I'm going to split this in two and fix some of this in HADOOP-12226.  As a 
result of working on unit tests, I've found some edge-cases where UNION is 
still wrong and ant is completely screwed up.  

 patches that hit multiple modules may need to build at root
 ---

 Key: HADOOP-12198
 URL: https://issues.apache.org/jira/browse/HADOOP-12198
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12198.HADOOP-12111.00.patch, 
 HADOOP-12198.HADOOP-12111.02.patch, HADOOP-12198.HADOOP-12111.03.patch, 
 HADOOP-12198.HADOOP-12111.03.patch, HADOOP-12198.HADOOP-12111.04.patch


 Patches that introduce dependencies on other modules (e.g., HADOOP-12180) 
 need to effectively built at root. 
 There is a good chance this type of logic will need to be done on a 
 per-project basis. 





[jira] [Commented] (HADOOP-12232) Upgrade Tomcat dependency to 6.0.44.

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628117#comment-14628117
 ] 

Hudson commented on HADOOP-12232:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2184 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2184/])
HADOOP-12232. Upgrade Tomcat dependency to 6.0.44. Contributed by Chris 
Nauroth. (cnauroth: rev 0a16ee60174b15e3df653bb107cb2d0c2d606330)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-project/pom.xml


 Upgrade Tomcat dependency to 6.0.44.
 

 Key: HADOOP-12232
 URL: https://issues.apache.org/jira/browse/HADOOP-12232
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 2.7.2

 Attachments: HADOOP-12232.001.patch


 The Hadoop distro currently bundles Tomcat version 6.0.41 by default.  The 
 current Tomcat 6 version is 6.0.44, which includes a few incremental bug 
 fixes.  Let's update our default version so that our users get the latest bug 
 fixes.





[jira] [Commented] (HADOOP-12153) ByteBufferReadable doesn't declare @InterfaceAudience and @InterfaceStability

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628111#comment-14628111
 ] 

Hudson commented on HADOOP-12153:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2184 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2184/])
HADOOP-12153. ByteBufferReadable doesn't declare @InterfaceAudience and 
@InterfaceStability. Contributed by Brahma Reddy Battula. (ozawa: rev 
cec1d43db026e66a9e84b5c3e8476dfd33f17ecb)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ByteBufferReadable.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 ByteBufferReadable doesn't declare @InterfaceAudience and @InterfaceStability
 -

 Key: HADOOP-12153
 URL: https://issues.apache.org/jira/browse/HADOOP-12153
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.1
Reporter: Steve Loughran
Assignee: Brahma Reddy Battula
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-12153-002.patch, HADOOP-12153.patch


 {{org.apache.hadoop.fs.ByteBufferReadable}} doesn't set any 
 {{@InterfaceAudience}} attributes. Is it intended for public consumption? If 
 so, it should declare it.





[jira] [Commented] (HADOOP-12232) Upgrade Tomcat dependency to 6.0.44.

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628145#comment-14628145
 ] 

Hudson commented on HADOOP-12232:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #245 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/245/])
HADOOP-12232. Upgrade Tomcat dependency to 6.0.44. Contributed by Chris 
Nauroth. (cnauroth: rev 0a16ee60174b15e3df653bb107cb2d0c2d606330)
* hadoop-project/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


 Upgrade Tomcat dependency to 6.0.44.
 

 Key: HADOOP-12232
 URL: https://issues.apache.org/jira/browse/HADOOP-12232
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 2.7.2

 Attachments: HADOOP-12232.001.patch


 The Hadoop distro currently bundles Tomcat version 6.0.41 by default.  The 
 current Tomcat 6 version is 6.0.44, which includes a few incremental bug 
 fixes.  Let's update our default version so that our users get the latest bug 
 fixes.





[jira] [Commented] (HADOOP-12153) ByteBufferReadable doesn't declare @InterfaceAudience and @InterfaceStability

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628139#comment-14628139
 ] 

Hudson commented on HADOOP-12153:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #245 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/245/])
HADOOP-12153. ByteBufferReadable doesn't declare @InterfaceAudience and 
@InterfaceStability. Contributed by Brahma Reddy Battula. (ozawa: rev 
cec1d43db026e66a9e84b5c3e8476dfd33f17ecb)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ByteBufferReadable.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 ByteBufferReadable doesn't declare @InterfaceAudience and @InterfaceStability
 -

 Key: HADOOP-12153
 URL: https://issues.apache.org/jira/browse/HADOOP-12153
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.1
Reporter: Steve Loughran
Assignee: Brahma Reddy Battula
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-12153-002.patch, HADOOP-12153.patch


 {{org.apache.hadoop.fs.ByteBufferReadable}} doesn't set any 
 {{@InterfaceAudience}} attributes. Is it intended for public consumption? If 
 so, it should declare it.





[jira] [Commented] (HADOOP-12236) mvn site -Preleasedoc doesn't work behind proxy

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628105#comment-14628105
 ] 

Tsuyoshi Ozawa commented on HADOOP-12236:
-

[~aw] it's okay, but could you review it as a workaround for trunk and 
branch-2 before releasing Yetus? It prevents us from running the command behind a 
proxy. I'll also create a JIRA for Yetus.

 mvn site -Preleasedoc doesn't work behind proxy
 ---

 Key: HADOOP-12236
 URL: https://issues.apache.org/jira/browse/HADOOP-12236
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12236.001.patch


 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 honor environment variables like $http_proxy or $https_proxy.





[jira] [Updated] (HADOOP-12236) mvn site -Preleasedoc doesn't work behind proxy for trunk and branch-2 before releasing Yetus

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12236:

Summary: mvn site -Preleasedoc doesn't work behind proxy for trunk and 
branch-2 before releasing Yetus  (was: mvn site -Preleasedoc doesn't work 
behind proxy)

 mvn site -Preleasedoc doesn't work behind proxy for trunk and branch-2 before 
 releasing Yetus
 -

 Key: HADOOP-12236
 URL: https://issues.apache.org/jira/browse/HADOOP-12236
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12236.001.patch


 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 honor environment variables like $http_proxy or $https_proxy.





[jira] [Created] (HADOOP-12237) releasedocmaker.py doesn't work behind a proxy

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-12237:
---

 Summary: releasedocmaker.py doesn't work behind a proxy
 Key: HADOOP-12237
 URL: https://issues.apache.org/jira/browse/HADOOP-12237
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Tsuyoshi Ozawa


HADOOP-12236 for Yetus.

{quote}

releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
honor environment variables like $http_proxy or $https_proxy.
{quote}





[jira] [Assigned] (HADOOP-12237) releasedocmaker.py doesn't work behind a proxy

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa reassigned HADOOP-12237:
---

Assignee: Tsuyoshi Ozawa

 releasedocmaker.py doesn't work behind a proxy
 --

 Key: HADOOP-12237
 URL: https://issues.apache.org/jira/browse/HADOOP-12237
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa

 HADOOP-12236 for Yetus.
 {quote}
 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 honor environment variables like $http_proxy or $https_proxy. 
 {quote}





[jira] [Updated] (HADOOP-12235) hadoop-openstack junit mockito dependencies should be provided

2015-07-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-12235:

Status: Patch Available  (was: Open)

 hadoop-openstack junit & mockito dependencies should be provided
 --

 Key: HADOOP-12235
 URL: https://issues.apache.org/jira/browse/HADOOP-12235
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, fs/swift
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Ted Yu
Priority: Minor
 Attachments: HADOOP-12235-v1.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 The scope for JUnit & mockito in hadoop-openstack is "compile", which means 
 it ends up on the downstream classpath unless excluded.
 It should be "provided", which was the original intent.





[jira] [Commented] (HADOOP-10615) FileInputStream in JenkinsHash#main() is never closed

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628250#comment-14628250
 ] 

Tsuyoshi Ozawa commented on HADOOP-10615:
-

Hi [~airbots], thank you for taking this issue. Could you use a 
try-with-resources statement instead of IOUtils.closeStream?
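
For illustration, the suggestion amounts to letting a try-with-resources block 
close the stream instead of a finally/IOUtils.closeStream cleanup. A minimal, 
self-contained sketch of the pattern (not the actual JenkinsHash code, which 
hashes the bytes it reads):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class TryWithResourcesSketch {
    // Counts the bytes in a stream. The try-with-resources statement
    // closes `in` automatically when the block exits, whether normally
    // or via an exception, so no explicit cleanup code is needed.
    static int countBytes(InputStream source) throws IOException {
        try (InputStream in = source) {
            int count = 0;
            while (in.read() != -1) {
                count++;
            }
            return count;
        } // in.close() runs here implicitly
    }

    public static void main(String[] args) throws IOException {
        System.out.println(countBytes(new ByteArrayInputStream(new byte[]{1, 2, 3})));
        // prints 3
    }
}
```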

 FileInputStream in JenkinsHash#main() is never closed
 -

 Key: HADOOP-10615
 URL: https://issues.apache.org/jira/browse/HADOOP-10615
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Chen He
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-10615-2.patch, HADOOP-10615.patch


 {code}
 FileInputStream in = new FileInputStream(args[0]);
 {code}
 The above FileInputStream is not closed upon exit of main.





[jira] [Commented] (HADOOP-12153) ByteBufferReadable doesn't declare @InterfaceAudience and @InterfaceStability

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628266#comment-14628266
 ] 

Hudson commented on HADOOP-12153:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2203 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2203/])
HADOOP-12153. ByteBufferReadable doesn't declare @InterfaceAudience and 
@InterfaceStability. Contributed by Brahma Reddy Battula. (ozawa: rev 
cec1d43db026e66a9e84b5c3e8476dfd33f17ecb)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ByteBufferReadable.java


 ByteBufferReadable doesn't declare @InterfaceAudience and @InterfaceStability
 -

 Key: HADOOP-12153
 URL: https://issues.apache.org/jira/browse/HADOOP-12153
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.1
Reporter: Steve Loughran
Assignee: Brahma Reddy Battula
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-12153-002.patch, HADOOP-12153.patch


 {{org.apache.hadoop.fs.ByteBufferReadable}} doesn't set any 
 {{@InterfaceAudience}} attributes. Is it intended for public consumption? If 
 so, it should declare it.





[jira] [Commented] (HADOOP-12232) Upgrade Tomcat dependency to 6.0.44.

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628272#comment-14628272
 ] 

Hudson commented on HADOOP-12232:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2203 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2203/])
HADOOP-12232. Upgrade Tomcat dependency to 6.0.44. Contributed by Chris 
Nauroth. (cnauroth: rev 0a16ee60174b15e3df653bb107cb2d0c2d606330)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-project/pom.xml


 Upgrade Tomcat dependency to 6.0.44.
 

 Key: HADOOP-12232
 URL: https://issues.apache.org/jira/browse/HADOOP-12232
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 2.7.2

 Attachments: HADOOP-12232.001.patch


 The Hadoop distro currently bundles Tomcat version 6.0.41 by default.  The 
 current Tomcat 6 version is 6.0.44, which includes a few incremental bug 
 fixes.  Let's update our default version so that our users get the latest bug 
 fixes.





[jira] [Commented] (HADOOP-12153) ByteBufferReadable doesn't declare @InterfaceAudience and @InterfaceStability

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628259#comment-14628259
 ] 

Hudson commented on HADOOP-12153:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #255 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/255/])
HADOOP-12153. ByteBufferReadable doesn't declare @InterfaceAudience and 
@InterfaceStability. Contributed by Brahma Reddy Battula. (ozawa: rev 
cec1d43db026e66a9e84b5c3e8476dfd33f17ecb)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ByteBufferReadable.java


 ByteBufferReadable doesn't declare @InterfaceAudience and @InterfaceStability
 -

 Key: HADOOP-12153
 URL: https://issues.apache.org/jira/browse/HADOOP-12153
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.1
Reporter: Steve Loughran
Assignee: Brahma Reddy Battula
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-12153-002.patch, HADOOP-12153.patch


 {{org.apache.hadoop.fs.ByteBufferReadable}} doesn't set any 
 {{@InterfaceAudience}} attributes. Is it intended for public consumption? If 
 so, it should declare it.





[jira] [Commented] (HADOOP-12232) Upgrade Tomcat dependency to 6.0.44.

2015-07-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628265#comment-14628265
 ] 

Hudson commented on HADOOP-12232:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #255 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/255/])
HADOOP-12232. Upgrade Tomcat dependency to 6.0.44. Contributed by Chris 
Nauroth. (cnauroth: rev 0a16ee60174b15e3df653bb107cb2d0c2d606330)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-project/pom.xml


 Upgrade Tomcat dependency to 6.0.44.
 

 Key: HADOOP-12232
 URL: https://issues.apache.org/jira/browse/HADOOP-12232
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Fix For: 2.7.2

 Attachments: HADOOP-12232.001.patch


 The Hadoop distro currently bundles Tomcat version 6.0.41 by default.  The 
 current Tomcat 6 version is 6.0.44, which includes a few incremental bug 
 fixes.  Let's update our default version so that our users get the latest bug 
 fixes.





[jira] [Commented] (HADOOP-10365) BufferedOutputStream in FileUtil#unpackEntries() should be closed in finally block

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628285#comment-14628285
 ] 

Tsuyoshi Ozawa commented on HADOOP-10365:
-

[~kiranmr] sorry for the delay. Can we remove the explicit close of 
outputStream, since we now use a try-with-resources statement?

{quote}
+  outputStream.close();
{quote}
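
In other words, once the stream is declared in the try-with-resources header, 
the explicit close() (and the flush() before it) are redundant: close() on a 
BufferedOutputStream flushes and is invoked automatically. A minimal sketch of 
the pattern, not the actual FileUtil#unpackEntries code:

```java
import java.io.BufferedOutputStream;
import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;

public class UnpackSketch {
    // Copies a stream to a file. Declaring the stream in the try header
    // means close() runs automatically on exit; BufferedOutputStream.close()
    // also flushes, so neither flush() nor close() appears in the body.
    static void copyToFile(InputStream in, File outputFile) throws IOException {
        try (BufferedOutputStream outputStream =
                 new BufferedOutputStream(new FileOutputStream(outputFile))) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                outputStream.write(buffer, 0, read);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        File out = File.createTempFile("unpack-sketch", ".bin");
        out.deleteOnExit();
        copyToFile(new ByteArrayInputStream("hello".getBytes()), out);
        System.out.println(Files.size(out.toPath())); // prints 5
    }
}
```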

 BufferedOutputStream in FileUtil#unpackEntries() should be closed in finally 
 block
 --

 Key: HADOOP-10365
 URL: https://issues.apache.org/jira/browse/HADOOP-10365
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Ted Yu
Assignee: Kiran Kumar M R
Priority: Minor
  Labels: BB2015-05-RFC
 Attachments: HADOOP-10365.2.patch, HADOOP-10365.3.patch, 
 HADOOP-10365.4.patch, HADOOP-10365.5.patch, HADOOP-10365.patch


 {code}
 BufferedOutputStream outputStream = new BufferedOutputStream(
 new FileOutputStream(outputFile));
 ...
 outputStream.flush();
 outputStream.close();
 {code}
 outputStream should be closed in finally block.





[jira] [Updated] (HADOOP-10365) BufferedOutputStream in FileUtil#unpackEntries() should be closed in finally block

2015-07-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-10365:

Description: 
{code}
BufferedOutputStream outputStream = new BufferedOutputStream(
new FileOutputStream(outputFile));
...
outputStream.flush();
outputStream.close();
{code}

outputStream should be closed in finally block.

  was:
{code}
BufferedOutputStream outputStream = new BufferedOutputStream(
new FileOutputStream(outputFile));
...
outputStream.flush();
outputStream.close();
{code}
outputStream should be closed in finally block.


 BufferedOutputStream in FileUtil#unpackEntries() should be closed in finally 
 block
 --

 Key: HADOOP-10365
 URL: https://issues.apache.org/jira/browse/HADOOP-10365
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Ted Yu
Assignee: Kiran Kumar M R
Priority: Minor
  Labels: BB2015-05-RFC
 Attachments: HADOOP-10365.2.patch, HADOOP-10365.3.patch, 
 HADOOP-10365.4.patch, HADOOP-10365.5.patch, HADOOP-10365.patch


 {code}
 BufferedOutputStream outputStream = new BufferedOutputStream(
 new FileOutputStream(outputFile));
 ...
 outputStream.flush();
 outputStream.close();
 {code}
 outputStream should be closed in finally block.





[jira] [Commented] (HADOOP-12236) mvn site -Preleasedoc doesn't work behind proxy for trunk before merging HADOOP-12111

2015-07-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628215#comment-14628215
 ] 

Allen Wittenauer commented on HADOOP-12236:
---

Yup.  -Preleasedocs is *optional*.  The only thing you'll be missing is the 
changelog that shows up on the web site.  We don't guarantee that every feature 
works everywhere.  


 mvn site -Preleasedoc doesn't work behind proxy for trunk before merging 
 HADOOP-12111
 -

 Key: HADOOP-12236
 URL: https://issues.apache.org/jira/browse/HADOOP-12236
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12236.001.patch


 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 honor environment variables like $http_proxy or $https_proxy.





[jira] [Commented] (HADOOP-12236) mvn site -Preleasedoc doesn't work behind proxy for trunk before merging HADOOP-12111

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628228#comment-14628228
 ] 

Tsuyoshi Ozawa commented on HADOOP-12236:
-

{quote}
-Preleasedocs is optional.
{quote}

One problem I faced is that I couldn't build the website to confirm that the 
documentation is generated correctly. Do you know any workaround to do this 
without -Preleasedocs? From BUILDING.txt:
{quote}
Create a local staging version of the website (in /tmp/hadoop-site)

  $ mvn clean site -Preleasedocs; mvn site:stage 
-DstagingDirectory=/tmp/hadoop-site
{quote}



 mvn site -Preleasedoc doesn't work behind proxy for trunk before merging 
 HADOOP-12111
 -

 Key: HADOOP-12236
 URL: https://issues.apache.org/jira/browse/HADOOP-12236
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12236.001.patch


 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 honor environment variables like $http_proxy or $https_proxy.





[jira] [Commented] (HADOOP-12236) mvn site -Preleasedoc doesn't work behind proxy for trunk before merging HADOOP-12111

2015-07-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628230#comment-14628230
 ] 

Allen Wittenauer commented on HADOOP-12236:
---

bq. mvn clean site; mvn site:stage -DstagingDirectory=/tmp/hadoop-site

... will generate everything but the change log.

 mvn site -Preleasedoc doesn't work behind proxy for trunk before merging 
 HADOOP-12111
 -

 Key: HADOOP-12236
 URL: https://issues.apache.org/jira/browse/HADOOP-12236
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12236.001.patch


 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 care environment varialibes like $http_proxy or $https_proxy.





[jira] [Updated] (HADOOP-12236) mvn site -Preleasedoc doesn't work behind proxy for trunk before merging HADOOP-12111

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12236:

Resolution: Invalid
Status: Resolved  (was: Patch Available)

 mvn site -Preleasedoc doesn't work behind proxy for trunk before merging 
 HADOOP-12111
 -

 Key: HADOOP-12236
 URL: https://issues.apache.org/jira/browse/HADOOP-12236
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12236.001.patch


 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 honor environment variables like $http_proxy or $https_proxy.





[jira] [Commented] (HADOOP-12236) mvn site -Preleasedoc doesn't work behind proxy for trunk before merging HADOOP-12111

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628234#comment-14628234
 ] 

Tsuyoshi Ozawa commented on HADOOP-12236:
-

OIC! Thank you very much, closing this as invalid.

 mvn site -Preleasedoc doesn't work behind proxy for trunk before merging 
 HADOOP-12111
 -

 Key: HADOOP-12236
 URL: https://issues.apache.org/jira/browse/HADOOP-12236
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12236.001.patch


 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 honor environment variables like $http_proxy or $https_proxy.





[jira] [Updated] (HADOOP-12237) releasedocmaker.py doesn't work behind a proxy

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12237:

Attachment: HADOOP-12237.001.patch

Attaching a first patch.

 releasedocmaker.py doesn't work behind a proxy
 --

 Key: HADOOP-12237
 URL: https://issues.apache.org/jira/browse/HADOOP-12237
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12237.001.patch


 HADOOP-12236 for Yetus.
 {quote}
 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 honor environment variables like $http_proxy or $https_proxy. 
 {quote}





[jira] [Commented] (HADOOP-12236) mvn site -Preleasedoc doesn't work behind proxy for trunk before merging HADOOP-12111

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628207#comment-14628207
 ] 

Tsuyoshi Ozawa commented on HADOOP-12236:
-

Hmm, okay. But one problem is that we cannot run mvn site -Preleasedoc behind a 
proxy before merging HADOOP-12111. 

 mvn site -Preleasedoc doesn't work behind proxy for trunk before merging 
 HADOOP-12111
 -

 Key: HADOOP-12236
 URL: https://issues.apache.org/jira/browse/HADOOP-12236
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12236.001.patch


 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 honor environment variables like $http_proxy or $https_proxy.





[jira] [Commented] (HADOOP-12235) hadoop-openstack junit & mockito dependencies should be provided

2015-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628210#comment-14628210
 ] 

Hadoop QA commented on HADOOP-12235:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  14m 44s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 38s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 38s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 19s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | tools/hadoop tests |   0m 14s | Tests passed in 
hadoop-openstack. |
| | |  34m 30s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12745446/HADOOP-12235-v1.patch |
| Optional Tests | javadoc javac unit |
| git revision | trunk / edcaae4 |
| hadoop-openstack test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7283/artifact/patchprocess/testrun_hadoop-openstack.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7283/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf901.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7283/console |


This message was automatically generated.

 hadoop-openstack junit & mockito dependencies should be provided
 --

 Key: HADOOP-12235
 URL: https://issues.apache.org/jira/browse/HADOOP-12235
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, fs/swift
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Ted Yu
Priority: Minor
 Attachments: HADOOP-12235-v1.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 The scope for JUnit & Mockito in hadoop-openstack is compile, which means 
 they end up on the downstream classpath unless excluded.
 It should be provided, which was the original intent.





[jira] [Created] (HADOOP-12238) Passing project option to releasedocmaker.py for running mvn site -Preleasedocs

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-12238:
---

 Summary: Passing project option to releasedocmaker.py for running 
mvn site -Preleasedocs
 Key: HADOOP-12238
 URL: https://issues.apache.org/jira/browse/HADOOP-12238
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa


Currently we cannot run mvn site -Preleasedocs on the branch for HADOOP-12111.
This patch fixes it.





[jira] [Updated] (HADOOP-12238) Passing project option to releasedocmaker.py for running mvn site -Preleasedocs

2015-07-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12238:
--
Issue Type: Bug  (was: Sub-task)
Parent: (was: HADOOP-12111)

 Passing project option to releasedocmaker.py for running mvn site 
 -Preleasedocs
 ---

 Key: HADOOP-12238
 URL: https://issues.apache.org/jira/browse/HADOOP-12238
 Project: Hadoop Common
  Issue Type: Bug
  Components: yetus
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12238.001.patch


 Currently we cannot run mvn site -Preleasedocs on the branch for HADOOP-12111.
 This patch fixes it.





[jira] [Commented] (HADOOP-12238) Passing project option to releasedocmaker.py for running mvn site -Preleasedocs

2015-07-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628218#comment-14628218
 ] 

Allen Wittenauer commented on HADOOP-12238:
---

Also switching this to a full issue, because it's not a *yetus* subtask but an 
issue for Hadoop.

 Passing project option to releasedocmaker.py for running mvn site 
 -Preleasedocs
 ---

 Key: HADOOP-12238
 URL: https://issues.apache.org/jira/browse/HADOOP-12238
 Project: Hadoop Common
  Issue Type: Bug
  Components: yetus
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12238.001.patch


 Currently we cannot run mvn site -Preleasedocs on the branch for HADOOP-12111.
 This patch fixes it.





[jira] [Commented] (HADOOP-12237) releasedocmaker.py doesn't work behind a proxy

2015-07-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628221#comment-14628221
 ] 

Hadoop QA commented on HADOOP-12237:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12745461/HADOOP-12237.001.patch 
|
| Optional Tests |  |
| git revision | trunk / edcaae4 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7284/console |


This message was automatically generated.

 releasedocmaker.py doesn't work behind a proxy
 --

 Key: HADOOP-12237
 URL: https://issues.apache.org/jira/browse/HADOOP-12237
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12237.001.patch


 HADOOP-12236 for Yetus.
 {quote}
 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 honor environment variables like $http_proxy or $https_proxy.
 {quote}





[jira] [Updated] (HADOOP-10615) FileInputStream in JenkinsHash#main() is never closed

2015-07-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-10615:

Description: 
{code}
FileInputStream in = new FileInputStream(args[0]);
{code}

The above FileInputStream is not closed upon exit of main.

  was:
{code}
FileInputStream in = new FileInputStream(args[0]);
{code}
The above FileInputStream is not closed upon exit of main.
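The shape of fix under review here is the standard one: wrap the stream in try-with-resources so it is closed on every exit path, including when an exception is thrown. A self-contained sketch (the class name and file name are illustrative, not the actual JenkinsHash code, and this is not the attached patch):

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class CloseStreamDemo {
    public static void main(String[] args) throws IOException {
        // Create a small input file so the demo is self-contained.
        try (FileOutputStream out = new FileOutputStream("demo.bin")) {
            out.write(new byte[] {1, 2, 3});
        }
        int bytesRead = 0;
        // try-with-resources guarantees in.close() runs on every exit path.
        try (FileInputStream in = new FileInputStream("demo.bin")) {
            while (in.read() != -1) {
                bytesRead++;
            }
        } // in is closed here automatically, even if read() threw.
        System.out.println("read " + bytesRead + " bytes");
    }
}
```

The same result can be had on pre-Java-7 code with a finally block calling IOUtils.closeStream, but try-with-resources is the idiomatic form where the language level allows it.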


 FileInputStream in JenkinsHash#main() is never closed
 -

 Key: HADOOP-10615
 URL: https://issues.apache.org/jira/browse/HADOOP-10615
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Chen He
Priority: Minor
  Labels: BB2015-05-TBR
 Attachments: HADOOP-10615-2.patch, HADOOP-10615.patch


 {code}
 FileInputStream in = new FileInputStream(args[0]);
 {code}
 The above FileInputStream is not closed upon exit of main.





[jira] [Commented] (HADOOP-12236) mvn site -Preleasedoc doesn't work behind proxy for trunk and branch-2 before releasing Yetus

2015-07-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628191#comment-14628191
 ] 

Allen Wittenauer commented on HADOOP-12236:
---

... and by the time trunk is released, yetus will be out.

 mvn site -Preleasedoc doesn't work behind proxy for trunk and branch-2 before 
 releasing Yetus
 -

 Key: HADOOP-12236
 URL: https://issues.apache.org/jira/browse/HADOOP-12236
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12236.001.patch


 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 honor environment variables like $http_proxy or $https_proxy.





[jira] [Commented] (HADOOP-12236) mvn site -Preleasedoc doesn't work behind proxy for trunk and branch-2 before releasing Yetus

2015-07-15 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628188#comment-14628188
 ] 

Allen Wittenauer commented on HADOOP-12236:
---

branch-2 doesn't have releasedocmaker.

 mvn site -Preleasedoc doesn't work behind proxy for trunk and branch-2 before 
 releasing Yetus
 -

 Key: HADOOP-12236
 URL: https://issues.apache.org/jira/browse/HADOOP-12236
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12236.001.patch


 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 honor environment variables like $http_proxy or $https_proxy.





[jira] [Updated] (HADOOP-12237) releasedocmaker.py doesn't work behind a proxy

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12237:

Status: Patch Available  (was: Open)

 releasedocmaker.py doesn't work behind a proxy
 --

 Key: HADOOP-12237
 URL: https://issues.apache.org/jira/browse/HADOOP-12237
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12237.001.patch


 HADOOP-12236 for Yetus.
 {quote}
 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 honor environment variables like $http_proxy or $https_proxy.
 {quote}





[jira] [Updated] (HADOOP-12236) mvn site -Preleasedoc doesn't work behind proxy for trunk before merging HADOOP-12111

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12236:

Summary: mvn site -Preleasedoc doesn't work behind proxy for trunk before 
merging HADOOP-12111  (was: mvn site -Preleasedoc doesn't work behind proxy for 
trunk and branch-2 before releasing Yetus)

 mvn site -Preleasedoc doesn't work behind proxy for trunk before merging 
 HADOOP-12111
 -

 Key: HADOOP-12236
 URL: https://issues.apache.org/jira/browse/HADOOP-12236
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12236.001.patch


 releasedocmaker.py doesn't work behind a proxy because urllib.urlopen doesn't 
 honor environment variables like $http_proxy or $https_proxy.





[jira] [Resolved] (HADOOP-12238) Passing project option to releasedocmaker.py for running mvn site -Preleasedocs

2015-07-15 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-12238.
---
Resolution: Duplicate

There's already a JIRA covering moving trunk's releasedoc support to Yetus.

 Passing project option to releasedocmaker.py for running mvn site 
 -Preleasedocs
 ---

 Key: HADOOP-12238
 URL: https://issues.apache.org/jira/browse/HADOOP-12238
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12238.001.patch


 Currently we cannot run mvn site -Preleasedocs on the branch for HADOOP-12111.
 This patch fixes it.





[jira] [Updated] (HADOOP-12238) Passing project option to releasedocmaker.py for running mvn site -Preleasedocs

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12238:

Attachment: HADOOP-12238.001.patch

 Passing project option to releasedocmaker.py for running mvn site 
 -Preleasedocs
 ---

 Key: HADOOP-12238
 URL: https://issues.apache.org/jira/browse/HADOOP-12238
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12238.001.patch


 Currently we cannot run mvn site -Preleasedocs on the branch for HADOOP-12111.
 This patch fixes it.





[jira] [Commented] (HADOOP-12238) Passing project option to releasedocmaker.py for running mvn site -Preleasedocs

2015-07-15 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14628219#comment-14628219
 ] 

Tsuyoshi Ozawa commented on HADOOP-12238:
-

Thank you for the pointer. Is HADOOP-12137 the one?

 Passing project option to releasedocmaker.py for running mvn site 
 -Preleasedocs
 ---

 Key: HADOOP-12238
 URL: https://issues.apache.org/jira/browse/HADOOP-12238
 Project: Hadoop Common
  Issue Type: Bug
  Components: yetus
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa
 Attachments: HADOOP-12238.001.patch


 Currently we cannot run mvn site -Preleasedocs on the branch for HADOOP-12111.
 This patch fixes it.


