[jira] [Created] (HADOOP-17254) Upgrade hbase to 1.2.6.1 on branch-2

2020-09-08 Thread Masatake Iwasaki (Jira)
Masatake Iwasaki created HADOOP-17254:
-

 Summary: Upgrade hbase to 1.2.6.1 on branch-2
 Key: HADOOP-17254
 URL: https://issues.apache.org/jira/browse/HADOOP-17254
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki









[jira] [Created] (HADOOP-17253) Upgrade zookeeper to 3.4.14 on branch-2.10

2020-09-08 Thread Masatake Iwasaki (Jira)
Masatake Iwasaki created HADOOP-17253:
-

 Summary: Upgrade zookeeper to 3.4.14 on branch-2.10
 Key: HADOOP-17253
 URL: https://issues.apache.org/jira/browse/HADOOP-17253
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki


Since the zookeeper and curator versions have different histories on 
branch-2.10 and trunk, I filed this issue to upgrade both zookeeper and curator 
on branch-2.10.






Re: How to manually trigger a PreCommit build for a github PR?

2020-09-08 Thread Dinesh Chitlangia
You could try doing an empty commit.

git commit --allow-empty -m 'trigger new CI check' && git push


Thanks,
Dinesh



On Tue, Sep 8, 2020 at 5:39 PM Mingliang Liu  wrote:

> Hi,
>
> To trigger a PreCommit build without code change, I can make the JIRA
> status "Patch Available" and provide the JIRA number to "Build With
> Parameters" link
> <
> https://ci-hadoop.apache.org/view/Hadoop/job/PreCommit-HADOOP-Build/build?delay=0sec
> >
> .
>
> Not sure how to do that for a PR without a real commit to the PR branch?
>
> Thanks,
>


Re: How to manually trigger a PreCommit build for a github PR?

2020-09-08 Thread Ayush Saxena
Hi Mingliang,
If by running PreCommit without any code change you mean rerunning Jenkins
(it already ran once and you want to run it again without pushing new code),
you can go to that PR build and click on Replay. For example, once logged in,
open the link below:
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2281/2/

You will see a 'Replay' option on the left; clicking it will rerun the
build.

Secondly, if the last build isn't available, then I think creating an empty
commit, as Dinesh also suggested, is the only way.

-Ayush

On Wed, 9 Sep 2020 at 03:43, Mingliang Liu  wrote:

> Thanks Dinesh. This is very helpful.
>
> I will add this to the wiki page
>  if
> this is the suggested way of doing it.
>
>
>
> On Tue, Sep 8, 2020 at 2:54 PM Dinesh Chitlangia  >
> wrote:
>
> > You could try doing an empty commit.
> >
> > git commit --allow-empty -m 'trigger new CI check' && git push
> >
> >
> > Thanks,
> > Dinesh
> >
> >
> >
> > On Tue, Sep 8, 2020 at 5:39 PM Mingliang Liu  wrote:
> >
> >> Hi,
> >>
> >> To trigger a PreCommit build without code change, I can make the JIRA
> >> status "Patch Available" and provide the JIRA number to "Build With
> >> Parameters" link
> >> <
> >>
> https://ci-hadoop.apache.org/view/Hadoop/job/PreCommit-HADOOP-Build/build?delay=0sec
> >> >
> >> .
> >>
> >> Not sure how to do that for a PR without a real commit to the PR branch?
> >>
> >> Thanks,
> >>
> >
>


Re: How to manually trigger a PreCommit build for a github PR?

2020-09-08 Thread Mingliang Liu
Thanks Dinesh. This is very helpful.

I will add this to the wiki page
 if
this is the suggested way of doing it.



On Tue, Sep 8, 2020 at 2:54 PM Dinesh Chitlangia 
wrote:

> You could try doing an empty commit.
>
> git commit --allow-empty -m 'trigger new CI check' && git push
>
>
> Thanks,
> Dinesh
>
>
>
> On Tue, Sep 8, 2020 at 5:39 PM Mingliang Liu  wrote:
>
>> Hi,
>>
>> To trigger a PreCommit build without code change, I can make the JIRA
>> status "Patch Available" and provide the JIRA number to "Build With
>> Parameters" link
>> <
>> https://ci-hadoop.apache.org/view/Hadoop/job/PreCommit-HADOOP-Build/build?delay=0sec
>> >
>> .
>>
>> Not sure how to do that for a PR without a real commit to the PR branch?
>>
>> Thanks,
>>
>


[jira] [Created] (HADOOP-17252) Website to link to latest Hadoop wiki

2020-09-08 Thread Mingliang Liu (Jira)
Mingliang Liu created HADOOP-17252:
--

 Summary: Website to link to latest Hadoop wiki
 Key: HADOOP-17252
 URL: https://issues.apache.org/jira/browse/HADOOP-17252
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mingliang Liu


Currently the website links to the [old wiki|https://wiki.apache.org/hadoop]. 
Shall we update that to the latest one: 
https://cwiki.apache.org/confluence/display/HADOOP2/Home






How to manually trigger a PreCommit build for a github PR?

2020-09-08 Thread Mingliang Liu
Hi,

To trigger a PreCommit build without code change, I can make the JIRA
status "Patch Available" and provide the JIRA number to "Build With
Parameters" link

.

Not sure how to do that for a PR without a real commit to the PR branch?

Thanks,


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-09-08 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/259/

[Sep 7, 2020 8:29:20 AM] (Adam Antal) YARN-9136. getNMResourceInfo NodeManager 
REST API method is not documented
[Sep 7, 2020 9:39:03 AM] (Peter Bacsko) YARN-10411. Create an allowCreate flag 
for MappingRuleAction. Contributed by Gergely Pollak.
[Sep 7, 2020 9:44:09 AM] (Adam Antal) YARN-10332. RESOURCE_UPDATE event was 
repeatedly registered in DECOMMISSIONING state. Contributed by yehuanhuan
[Sep 7, 2020 6:36:13 PM] (noreply) HDFS-15558: 
ViewDistributedFileSystem#recoverLease should call super.recoverLease when 
there are no mounts configured (#2275) Contributed by Uma Maheswara Rao G.




-1 overall


The following subsystems voted -1:
asflicense pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.hdfs.TestFileChecksum 
   hadoop.hdfs.server.balancer.TestBalancerRPCDelay 
   hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR 
   hadoop.hdfs.TestFileChecksumCompositeCrc 
   hadoop.hdfs.TestGetFileChecksum 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.yarn.sls.appmaster.TestAMSimulator 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/259/artifact/out/diff-compile-cc-root.txt
  [48K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/259/artifact/out/diff-compile-javac-root.txt
  [568K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/259/artifact/out/diff-checkstyle-root.txt
  [16M]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/259/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/259/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/259/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/259/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/259/artifact/out/whitespace-eol.txt
  [13M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/259/artifact/out/whitespace-tabs.txt
  [1.9M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/259/artifact/out/xml.txt
  [24K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/259/artifact/out/diff-javadoc-javadoc-root.txt
  [1.3M]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/259/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [428K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/259/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [72K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/259/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-distributedshell.txt
  [16K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/259/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt
  [12K]

   asflicense:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/259/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.12.0   https://yetus.apache.org

--

Re: [DISCUSS] Ozone TLP proposal

2020-09-08 Thread Masatake Iwasaki

Hi Elek,


  2. Following the path of Submarine, any existing Hadoop committers -- who are 
willing to contribute -- can ask to be included in the initial committer list 
without any additional constraints. (Edit the wiki, or send an email to this 
thread or to me.) Thanks to Vinod for suggesting this approach (for Submarine 
at that time).


Since I'm willing to contribute, I added my name to the wiki.

Thanks,
Masatake Iwasaki

On 2020/09/07 21:04, Elek, Marton wrote:


Hi,

The Hadoop community earlier decided to move the Ozone sub-project out to a 
separate Apache Top Level Project (TLP). [1]

For the detailed history and motivation, please check the previous thread ([1]).

The Ozone community discussed and agreed on the initial version of the project 
proposal, and now it's time to discuss it with the full Hadoop community.

The current version is available at the Hadoop wiki:

https://cwiki.apache.org/confluence/display/HADOOP/Ozone+Hadoop+subproject+to+Apache+TLP+proposal


  1. Please read it. You can suggest any modifications or topics to cover (here 
or in the comments).

  2. Following the path of Submarine, any existing Hadoop committers -- who are 
willing to contribute -- can ask to be included in the initial committer list 
without any additional constraints. (Edit the wiki, or send an email to this 
thread or to me.) Thanks to Vinod for suggesting this approach (for Submarine 
at that time).


Next steps:

  * After this discussion thread (in case of consensus), a new VOTE thread will 
be started about the proposal (*-dev@hadoop.a.o).

  * If the VOTE passes, the proposal will be sent to the Apache Board for 
discussion.


Please help to make the proposal better,

Thanks a lot,
Marton


[1]. 
https://lists.apache.org/thread.html/r298eba8abecc210abd952f040b0c4f07eccc62dcdc49429c1b8f4ba9%40%3Chdfs-dev.hadoop.apache.org%3E




Re: [DISCUSS] Hadoop 2.10.1 release

2020-09-08 Thread Masatake Iwasaki

https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20YARN%2C%20MAPREDUCE)%20AND%20%22Target%20Version%2Fs%22%20%3D%202.10.1%20and%20statusCategory%20!%3D%20Done

There are no blocker/critical issues now.
I updated the target version of issues without
recent activity or applicable patches.

Only HADOOP-16918 has the release-blocker label.
I filed HADOOP-17249 and HADOOP-17251 as sub-tasks.

If those cannot be merged in a few days,
I would like to put them off until the next release and
cut a release candidate.

Thanks,
Masatake Iwasaki

On 2020/09/03 21:35, Masatake Iwasaki wrote:

Thanks, Jim Brennan.

I can see 4 Blocker/Critical issues targeted to 2.10.1.

https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20YARN%2C%20MAPREDUCE)%20AND%20%22Target%20Version%2Fs%22%20%3D%202.10.1%20and%20statusCategory%20!%3D%20Done

https://issues.apache.org/jira/browse/YARN-10177
https://issues.apache.org/jira/browse/HDFS-15422
https://issues.apache.org/jira/browse/HDFS-14277
https://issues.apache.org/jira/browse/HDFS-12548

I'm going to look into them and
update 'Target Version/s:' of non-critical issues.

Masatake Iwasaki

On 2020/09/03 4:25, Jim Brennan wrote:

Thanks Masatake Iwasaki!
I am willing to help out with the Hadoop 2.10.1 release.

Jim Brennan

On Tue, Sep 1, 2020 at 2:13 AM Masatake Iwasaki 
wrote:


Thanks, Mingliang Liu.

I volunteer to take the RM role then.
I would appreciate advice from those who have the experience.

Masatake Iwasaki

On 2020/09/01 10:38, Mingliang Liu wrote:

I can see how I can help, but I cannot take the RM role this time.

Thanks,

On Mon, Aug 31, 2020 at 12:15 PM Wei-Chiu Chuang
 wrote:


Hello,

I see that Masatake graciously agreed to volunteer for the Hadoop 2.10.1
release work in the 2.9 branch EOL discussion thread:

https://s.apache.org/hadoop2.9eold


Would anyone else like to contribute as well?

Thanks














[jira] [Created] (HADOOP-17251) Upgrade netty-all to 4.1.50.Final on branch-2.10

2020-09-08 Thread Masatake Iwasaki (Jira)
Masatake Iwasaki created HADOOP-17251:
-

 Summary: Upgrade netty-all to 4.1.50.Final on branch-2.10
 Key: HADOOP-17251
 URL: https://issues.apache.org/jira/browse/HADOOP-17251
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki









Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2020-09-08 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.namenode.TestDecommissioningStatus 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.yarn.client.api.impl.TestAMRMClient 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.tools.TestDistCpSystem 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
  

   jshint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50/artifact/out/diff-patch-jshint.txt
  [208K]

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50/artifact/out/diff-compile-javac-root.txt
  [428K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50/artifact/out/xml.txt
  [4.0K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50/artifact/out/diff-javadoc-javadoc-root.txt
  [20K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [208K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [268K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [104K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/50

[jira] [Created] (HADOOP-17250) ABFS: Allow random read sizes to be of buffer size

2020-09-08 Thread Sneha Vijayarajan (Jira)
Sneha Vijayarajan created HADOOP-17250:
--

 Summary: ABFS: Allow random read sizes to be of buffer size
 Key: HADOOP-17250
 URL: https://issues.apache.org/jira/browse/HADOOP-17250
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.1.4
Reporter: Sneha Vijayarajan
Assignee: Sneha Vijayarajan


The ADLS Gen2/ABFS driver is optimized to read only the requested bytes when 
the read pattern is random.

It was observed in some Spark jobs that although the reads are random, the next 
read doesn't skip far ahead and could have been served by the earlier read if 
that read had fetched a full buffer. As a result the jobs triggered a higher 
count of read calls and ran longer.

When the same jobs were run against Gen1, which always reads a full buffer, they 
fared well.

In this Jira we provide a config control over whether a random read fetches only 
the requested size or a full buffer.
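
If such a control lands, using it might look like the sketch below. The 
property name is a placeholder guess, since the Jira text doesn't name the 
actual key.

{code:java}
import org.apache.hadoop.conf.Configuration;

public final class AbfsRandomReadSetting {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Placeholder key (not confirmed by this Jira): opt random reads into
    // full buffer-size requests, trading extra bytes per call for fewer
    // round trips to the store.
    conf.setBoolean("fs.azure.read.alwaysReadBufferSize", true);
  }
}
{code}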






[jira] [Resolved] (HADOOP-17189) add way for s3a to recognise buckets with "." in name and switch to path access

2020-09-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17189.
-
Fix Version/s: 3.4.0
   Resolution: Won't Fix

WONTFIXing this, as buckets with '.' in their name will soon be forbidden.

> add way for s3a to recognise buckets with "." in name and switch to path 
> access
> ---
>
> Key: HADOOP-17189
> URL: https://issues.apache.org/jira/browse/HADOOP-17189
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
> Fix For: 3.4.0
>
>
> # AWS has, historically, allowed buckets with '.' in their name (along with 
> other non-DNS-valid chars)
> # none of which work with virtual-hostname S3 clients, so you have to enable 
> path-style access
> # which we can't do on a per-bucket basis, as the logic there doesn't support 
> buckets with '.' in the name (think about it...)
> # and we can't blindly say "use path access everywhere", because all buckets 
> created on/after 2020-10-01 won't work that way
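
To make point 3 concrete: per-bucket s3a options use keys of the form 
fs.s3a.bucket.<bucket>.<option>, so a '.' in the bucket name makes the key 
ambiguous. A toy sketch of the ambiguity (not the actual s3a parsing code):

{code:java}
public final class DottedBucketDemo {
  public static void main(String[] args) {
    // Per-bucket key for enabling path-style access on bucket "my.bucket":
    String key = "fs.s3a.bucket.my.bucket.path.style.access";
    String rest = key.substring("fs.s3a.bucket.".length());
    // Is the bucket "my" (option "bucket.path.style.access") or
    // "my.bucket" (option "path.style.access")? With '.' delimiting both
    // sides, a parser cannot tell; splitting at the first '.' picks "my".
    System.out.println(rest.substring(0, rest.indexOf('.'))); // prints "my"
  }
}
{code}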






[jira] [Resolved] (HADOOP-17229) Test failure as failed request body counted in byte received metric - ITestAbfsNetworkStatistics#testAbfsHttpResponseStatistics

2020-09-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17229.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

> Test failure as failed request body counted in byte received metric - 
> ITestAbfsNetworkStatistics#testAbfsHttpResponseStatistics
> ---
>
> Key: HADOOP-17229
> URL: https://issues.apache.org/jira/browse/HADOOP-17229
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, test
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: abfsactive, pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> The bytes_received counter increments for every request response received. 
> [https://github.com/apache/hadoop/blob/d23cc9d85d887f01d72180bdf1af87dfdee15c5a/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java#L251]
> It increments even for failed requests. 
> Observed during testing for HADOOP-17215. A request that failed with 409 
> Conflict contains a response body as below:
> {"error":\{"code":"PathAlreadyExists","message":"The specified path already 
> exists.\nRequestId:c3b2c55c-b01f-0061-7b31-7b6ee300\nTime:2020-08-25T22:44:07.2356054Z"}}
> The 168-byte error body is added to the bytes_received counter. 
> This also breaks the test case testAbfsHttpResponseStatistics. 
> {code:java}
> [ERROR] Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 22.746 s <<< FAILURE! - in 
> org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics[ERROR] Tests run: 2, 
> Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 22.746 s <<< FAILURE! - in 
> org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics[ERROR] 
> testAbfsHttpResponseStatistics(org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics)
>   Time elapsed: 13.183 s  <<< FAILURE!java.lang.AssertionError: Mismatch in 
> bytes_received expected:<143> but was:<311> at 
> org.junit.Assert.fail(Assert.java:88) at 
> org.junit.Assert.failNotEquals(Assert.java:834) at 
> org.junit.Assert.assertEquals(Assert.java:645) at 
> org.apache.hadoop.fs.azurebfs.AbstractAbfsIntegrationTest.assertAbfsStatistics(AbstractAbfsIntegrationTest.java:445)
>  at 
> org.apache.hadoop.fs.azurebfs.ITestAbfsNetworkStatistics.testAbfsHttpResponseStatistics(ITestAbfsNetworkStatistics.java:291)
> {code}
> [~mehakmeetSingh] - is the bytes_received counter increment for failed 
> requests an expected behaviour?
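
The fix presumably guards the statistic on the response status. A minimal, 
self-contained sketch of that idea (hypothetical names throughout, not the 
actual AbfsRestOperation fields):

{code:java}
// Sketch: only successful (2xx) responses add their body length to
// bytes_received, so a 409's 168-byte error body no longer inflates the
// counter from the expected 143 to 311.
final class BytesReceivedGuard {
  private long bytesReceived;

  void onResponse(int statusCode, long bodyLength) {
    if (statusCode >= 200 && statusCode < 300) {
      bytesReceived += bodyLength;
    }
  }

  long getBytesReceived() {
    return bytesReceived;
  }
}
{code}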






[jira] [Resolved] (HADOOP-17158) Test timeout for ITestAbfsInputStreamStatistics#testReadAheadCounters

2020-09-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17158.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

fixed in trunk; happy to cherrypick to branch-3.3 if you are

> Test timeout for ITestAbfsInputStreamStatistics#testReadAheadCounters
> -
>
> Key: HADOOP-17158
> URL: https://issues.apache.org/jira/browse/HADOOP-17158
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Intermittent test timeout for 
> ITestAbfsInputStreamStatistics#testReadAheadCounters happens due to race 
> conditions in readAhead threads.
> Test error:
> {code:java}
> [ERROR] 
> testReadAheadCounters(org.apache.hadoop.fs.azurebfs.ITestAbfsInputStreamStatistics)
>   Time elapsed: 30.723 s  <<< 
> ERROR!org.junit.runners.model.TestTimedOutException: test timed out after 
> 30000 milliseconds    at java.lang.Thread.sleep(Native Method)    at 
> org.apache.hadoop.fs.azurebfs.ITestAbfsInputStreamStatistics.testReadAheadCounters(ITestAbfsInputStreamStatistics.java:346)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)    
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
>    at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)    at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>     at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)    at 
> java.lang.Thread.run(Thread.java:748) {code}
> Possible Reasoning:
> - The readAhead queue doesn't complete, so the counter values are not 
> reached within the 30-second window on some systems.
> - On some machines the condition that the readAheadBytesRead and 
> remoteBytesRead counters reach at least 4KB and 32KB respectively never 
> holds: reads are sometimes served remotely instead of from the readAhead 
> buffer, because threads are still queued to fill it. Either of the two 
> counters can then fall short, leaving the test looping until it times out.
> Possible Fixes:
> - Write a better test (one that passes under all conditions).
> - Maybe a UT instead of an IT?
> Improving the test is preferable, with a UT as the last resort.
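
One way to harden the test, sketched below: poll the counters with a bounded 
wait instead of a fixed sleep, so slow readAhead threads extend the wait rather 
than fail the run. This uses Hadoop's GenericTestUtils.waitFor; the stats 
accessors are hypothetical stand-ins for however the test reads the counters.

{code:java}
import org.apache.hadoop.test.GenericTestUtils;

// Inside the test method; throws TimeoutException if the thresholds from
// the description (4KB readAhead, 32KB remote) are never reached.
GenericTestUtils.waitFor(
    () -> stats.getReadAheadBytesRead() >= 4 * 1024
        && stats.getRemoteBytesRead() >= 32 * 1024,
    500,       // re-check every 500 ms
    30_000);   // give up after 30 s
{code}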


