[jira] [Created] (HADOOP-13460) Fix findbugs warnings of hadoop-hdfs-client in branch-2

2016-08-02 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-13460:
--

 Summary: Fix findbugs warnings of hadoop-hdfs-client in branch-2
 Key: HADOOP-13460
 URL: https://issues.apache.org/jira/browse/HADOOP-13460
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Akira Ajisaka


There are 7 warnings.





Re: [DISCUSS] Release numbering semantics with concurrent (>2) releases [Was Setting JIRA fix versions for 3.0.0 releases]

2016-08-02 Thread Andrew Wang
In the absence of further comments, I've pushed this text to a new "Release
Versioning" page on the website. I think svnpubsub automatically builds and
pushes for us now, but not 100% sure.

Anyway, it seems like we can proceed with the 2.8.0 and 3.0.0-alpha1
version updates. I'm going to be on vacation until the 15th, but will
tackle this when I get back. The bulk updates will also be followed by a
wide-distribution email reminder about how to appropriately set fix
versions.

Best,
Andrew

On Thu, Jul 28, 2016 at 4:46 PM, Andrew Wang wrote:

> I've written up the proposal from my initial reply in a GDoc. I found one
> bug in the rules when working through my example again, and also
> incorporated Akira's correction. Thanks all for the discussion so far!
>
>
> https://docs.google.com/document/d/1vlDtpsnSjBPIZiWQjSwgnV0_Z6ZQJ1r91J8G0FduyTg/edit
>
> Ping me if you'd like edit/comment privs, or send comments to this thread.
>
> I'm eager to close on this so we can keep pushing on the 2.8.0 and
> 3.0.0-alpha1 releases. I'd like to post this content somewhere official
> early next week, so if you have additional feedback, please keep it coming.
>
> Best,
> Andrew
>
> On Thu, Jul 28, 2016 at 3:01 PM, Karthik Kambatla wrote:
>
>> Inline.
>>
>>
>>>
 BTW, I have never seen a clear definition for an alpha release. It was
 previously used to mean unstable APIs (2.1-alpha, 2.2-alpha, etc.), but it
 sometimes means unstable in production quality (2.7.0). I think we should
 clearly define it with broad consensus so users won't misunderstand the
 risks here.

>>>
>>> These are the definitions of "alpha" and "beta" used leading up to the
>>> 2.2 GA release, so it's not something new. These are also the normal
>>> industry definitions. Alpha means no API compatibility guarantees, early
>>> software. Beta means API compatible, but still some bugs.
>>>
>>> If anything, we never defined the terms "alpha" and "beta" for 2.x
>>> releases post-2.2 GA. The thinking was that everything after would be
>>> compatible and thus (at the least) never alpha. I think this is why the
>>> website talks about the 2.7.x line as "stable" or "unstable" instead, but
>>> since I think we still guarantee API compatibility between 2.7.0 and 2.7.1,
>>> we could have just called 2.7.0 "beta".
>>>
>>> I think this would be good to have in our compat guidelines or
>>> somewhere. Happy to work with Karthik/Vinod/others on this.
>>>
>>
>> I am not sure if we formally defined the terms "alpha" and "beta" for
>> Hadoop 2, but my understanding of them agrees with the general definitions
>> on the web.
>>
>> Alpha:
>>
>>- Early version for testing - integration with downstream, deployment
>>etc.
>>- Not feature complete
>>- No compatibility guarantees yet
>>
>> Beta:
>>
>>- Feature complete
>>- API compatibility guaranteed
>>- Need clear definition for other kinds of compatibility (wire,
>>client-dependencies, server-dependencies etc.)
>>- Not ready for production deployments
>>
>> GA
>>
>>- Ready for production
>>- All the usual compatibility guarantees apply.
>>
>> If there is general agreement, I can work towards getting this into our
>> documentation.
>>
>>
>>>
 Also, if we take our 3.0.0-alpha release work seriously, we should
 also think about trunk's version number (bump it to 4.0.0-alpha?), or
 soon there could be no room for 3.0-incompatible features/bits.

>>> While we're still in alpha for 3.0.0, there's no need for a separate
>>> 4.0.0 version since there's no guarantee of API compatibility. I plan to
>>> cut a branch-3 for the beta period, at which point we'll upgrade trunk to
>>> 4.0.0-alpha1. This is something we discussed on another mailing list thread.
>>>
>>
>> Branching at beta time seems reasonable.
>>
>> Overall, are there any incompatible changes on trunk that we wouldn't be
>> comfortable shipping in 3.0.0? If yes, do we feel comfortable shipping
>> those bits ever?
>>
>>
>>>
>>> Best,
>>> Andrew
>>>
>>
>>
>


[jira] [Created] (HADOOP-13459) hadoop-azure runs several test cases repeatedly, causing unnecessarily long running time.

2016-08-02 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-13459:
--

 Summary: hadoop-azure runs several test cases repeatedly, causing 
unnecessarily long running time.
 Key: HADOOP-13459
 URL: https://issues.apache.org/jira/browse/HADOOP-13459
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure, test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor


Within hadoop-azure, we have the {{NativeAzureFileSystemBaseTest}} abstract 
class, which defines setup and teardown to handle the Azure storage account and 
also defines multiple test cases.  This class originally was contributed to 
provide a layer of indirection for running the same test cases in live mode or 
mock mode: {{TestNativeAzureFileSystemLive}} and 
{{TestNativeAzureFileSystemMocked}}.  It appears that since then, we created 
multiple new test suites that subclassed {{NativeAzureFileSystemBaseTest}} for 
the benefit of getting the common setup and teardown code, but also with the 
side effect of running the inherited test cases repeatedly.  This is a 
significant factor in the overall execution time of the hadoop-azure tests.
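
For illustration, a minimal JUnit 4 sketch of the inheritance pattern described above (all names here are hypothetical, not the actual hadoop-azure classes): the base class carries both the shared setup and concrete test cases, so every subclass re-runs the inherited tests.

{code}
import static org.junit.Assert.assertNotNull;

import org.junit.Before;
import org.junit.Test;

// Base class mixes shared setup with concrete @Test methods.
public abstract class BaseStorageTest {
  protected String account;

  @Before
  public void setUp() {
    account = createAccount();  // the setup each suite wants to reuse
  }

  protected abstract String createAccount();

  @Test
  public void testAccountExists() {
    assertNotNull(account);  // re-executed by every subclass
  }
}

// A suite written only to reuse setUp() still inherits, and re-runs,
// testAccountExists, multiplying total execution time.
class MockStorageTest extends BaseStorageTest {
  @Override
  protected String createAccount() {
    return "mock-account";
  }
}
{code}

One way out (again, hypothetical) is to move the shared setup and teardown into a helper that suites compose, rather than extending a test-bearing base class.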






[jira] [Reopened] (HADOOP-13434) Add quoting to Shell class

2016-08-02 Thread Arpit Agarwal (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arpit Agarwal reopened HADOOP-13434:


Reopening to attach branch-2.7 patch.

> Add quoting to Shell class
> --
>
> Key: HADOOP-13434
> URL: https://issues.apache.org/jira/browse/HADOOP-13434
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
> Fix For: 2.8.0
>
> Attachments: HADOOP-13434.patch, HADOOP-13434.patch, 
> HADOOP-13434.patch
>
>
> The Shell class makes assumptions that the parameters won't have spaces or 
> other special characters, even when it invokes bash.
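
For illustration, a minimal sketch of one common quoting approach (the helper name is hypothetical, and not necessarily what the committed patch does): wrap each argument in single quotes and escape embedded single quotes before handing the string to bash.

{code}
/**
 * Hypothetical sketch: make an argument safe to embed in a "bash -c"
 * command string by single-quoting it and turning each embedded single
 * quote into the '\'' escape sequence.
 */
static String quoteForBash(String arg) {
  return "'" + arg.replace("'", "'\\''") + "'";
}
{code}

With this, an argument like {{my file.txt}} or one containing {{$HOME}} is passed through literally instead of being split or expanded by the shell.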






[jira] [Created] (HADOOP-13458) LoadBalancingKMSClientProvider#doOp should log IOException stacktrace

2016-08-02 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-13458:


 Summary: LoadBalancingKMSClientProvider#doOp should log 
IOException stacktrace
 Key: HADOOP-13458
 URL: https://issues.apache.org/jira/browse/HADOOP-13458
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms
Reporter: Wei-Chiu Chuang
Priority: Trivial


Sometimes it's relatively hard to comprehend the meaning of the exception 
message without the stack trace. I think we should log the stack trace too.

{code}
LOG.warn("KMS provider at [{}] threw an IOException [{}]!!",
provider.getKMSUrl(), ioe.getMessage());
{code}
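
A minimal sketch of the suggested change, assuming the logger is SLF4J (which prints a full stack trace when the exception is passed as the final argument after the format placeholders):

{code}
LOG.warn("KMS provider at [{}] threw an IOException!!",
    provider.getKMSUrl(), ioe);
{code}

Here the exception no longer fills a placeholder, so SLF4J treats it as a Throwable and appends its stack trace to the log entry.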






[jira] [Created] (HADOOP-13457) Remove hardcoded absolute path for shell executable

2016-08-02 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HADOOP-13457:
--

 Summary: Remove hardcoded absolute path for shell executable
 Key: HADOOP-13457
 URL: https://issues.apache.org/jira/browse/HADOOP-13457
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


Shell.java has a hardcoded path to /bin/bash which is not correct on all 
platforms. 

Pointed out by [~aw] while reviewing HADOOP-13434.
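
A minimal sketch of one possible direction (the method name and override hook are hypothetical, not from the eventual patch): let PATH resolution pick the executable instead of hardcoding an absolute path.

{code}
/**
 * Hypothetical sketch: prefer an explicit override if one is set,
 * otherwise rely on PATH lookup rather than the hardcoded /bin/bash.
 */
private static String bashExecutable() {
  String override = System.getenv("BASH");  // hypothetical override hook
  return (override == null || override.isEmpty()) ? "bash" : override;
}
{code}

ProcessBuilder resolves a bare {{bash}} against the PATH, which also works on platforms where bash lives somewhere other than /bin.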






[jira] [Resolved] (HADOOP-13429) Dispose of unnecessary SASL servers

2016-08-02 Thread Kihwal Lee (JIRA)

 [ https://issues.apache.org/jira/browse/HADOOP-13429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kihwal Lee resolved HADOOP-13429.
-
    Resolution: Fixed
    Fix Version/s: 2.9.0, 3.0.0-alpha2

> Dispose of unnecessary SASL servers
> ---
>
> Key: HADOOP-13429
> URL: https://issues.apache.org/jira/browse/HADOOP-13429
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: ipc
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13429.patch
>
>
> The IPC server retains a per-connection SASL server for the duration of the 
> connection.  This causes many unnecessary objects to be promoted to old gen.  
> The SASL server should be disposed of unless required for subsequent 
> encryption.
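
For illustration, a minimal sketch using only the JDK SASL API (the method and surrounding control flow are hypothetical): after a successful handshake, dispose of the SaslServer unless the negotiated QOP requires it for wrapping.

{code}
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslException;
import javax.security.sasl.SaslServer;

// Hypothetical sketch: release the per-connection SaslServer once
// authentication completes, unless integrity ("auth-int") or
// confidentiality ("auth-conf") wrapping was negotiated and the
// server is still needed to wrap/unwrap subsequent traffic.
static SaslServer disposeIfUnneeded(SaslServer saslServer)
    throws SaslException {
  if (saslServer == null || !saslServer.isComplete()) {
    return saslServer;  // handshake still in progress
  }
  String qop = (String) saslServer.getNegotiatedProperty(Sasl.QOP);
  if ("auth-int".equals(qop) || "auth-conf".equals(qop)) {
    return saslServer;  // still required for encryption/integrity
  }
  saslServer.dispose();
  return null;  // let the connection drop its reference for GC
}
{code}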






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-08-02 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/

[Aug 1, 2016 8:35:59 AM] (aajisaka) HADOOP-13444. Replace org.apache.commons.io.Charsets with
[Aug 1, 2016 10:38:38 AM] (vvasudev) YARN-5444. Fix failing unit tests in
[Aug 1, 2016 3:14:28 PM] (daryn) HDFS-10655. Fix path related byte array conversion bugs. (daryn)
[Aug 2, 2016 5:34:40 AM] (shv) Revert "HDFS-10301. Interleaving processing of storages from repeated
[Aug 2, 2016 6:35:47 AM] (gera) MAPREDUCE-6724. Single shuffle to memory must not exceed




-1 overall


The following subsystems voted -1:
asflicense mvnsite unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests:

   hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits
   hadoop.yarn.logaggregation.TestAggregatedLogFormat
   hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization
   hadoop.yarn.server.TestContainerManagerSecurity
   hadoop.yarn.client.api.impl.TestYarnClient

Timed out junit tests:

   org.apache.hadoop.http.TestHttpServerLifecycle
   org.apache.hadoop.hdfs.TestLeaseRecovery2

   cc:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/artifact/out/diff-compile-cc-root.txt  [4.0K]

   javac:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/artifact/out/diff-compile-javac-root.txt  [172K]

   checkstyle:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/artifact/out/diff-checkstyle-root.txt  [16M]

   mvnsite:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/artifact/out/patch-mvnsite-root.txt  [112K]

   pylint:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/artifact/out/diff-patch-pylint.txt  [16K]

   shellcheck:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/artifact/out/diff-patch-shellcheck.txt  [20K]

   shelldocs:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/artifact/out/diff-patch-shelldocs.txt  [12K]

   whitespace:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/artifact/out/whitespace-eol.txt  [12M]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/artifact/out/whitespace-tabs.txt  [1.3M]

   javadoc:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/artifact/out/diff-javadoc-javadoc-root.txt  [2.3M]

   unit:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt  [116K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt  [144K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.txt  [20K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt  [36K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt  [12K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt  [268K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt  [12K]
       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt  [124K]

   asflicense:

       https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/121/artifact/out/patch-asflicense-problems.txt  [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org




Re: AWS S3AInputStream questions

2016-08-02 Thread Mr rty ff
The message got garbled, so I am trying to send it again.

Hi, I have a few questions about the implementation of the input stream in S3, from S3AInputStream.java.

1)

public synchronized long getPos() throws IOException {
  return (nextReadPos < 0) ? 0 : nextReadPos;
}

Why does it return nextReadPos and not pos? The member definition for pos says:

/**
 * This is the public position; the one set in {@link #seek(long)}
 * and returned in {@link #getPos()}.
 */
private long pos;

2) seekInStream

In the last lines you have:

// close the stream; if read the object will be opened at the new pos
closeStream("seekInStream()", this.requestedStreamLen);
pos = targetPos;

Why do you need this line? Shouldn't pos be updated with the actual skipped value, as you did here:

if (skipped > 0) {
  pos += skipped;
}

Thanks

