Re: Two more binding votes required for Apache Hadoop 2.10.1 RC0

2020-09-18 Thread Sunil Govindan
Thanks.
I will verify the release.

Sunil

On Sat, Sep 19, 2020 at 12:26 AM Wei-Chiu Chuang  wrote:

> Masatake is doing a great job rolling out 2.10.1 RC0. Let's give him a
> final push to get it out.
>
> Thanks,
> Wei-Chiu
>


Re: [E] Re: [VOTE] Release Apache Hadoop 2.10.1 (RC0)

2020-09-18 Thread Eric Badger
+1 (non-binding)

- Verified all hashes and checksums
- Built from source on RHEL 7
- Deployed a single node pseudo cluster
- Ran some example jobs

The only minor issue is that the DockerLinuxContainerRuntime won't run in a
secure environment using Kerberos and nscd, because there is no way to
bind-mount the nscd socket without modifying the source code. However,
DockerLinuxContainerRuntime is wholly experimental in 2.10, and anyone using
it should be on 3.x.

Eric

On Fri, Sep 18, 2020 at 3:21 PM Eric Payne 
wrote:

> Masatake,
>
> Thank you for the good work on creating this release!
>
> +1
>
> I downloaded and built the source. I ran a one-node cluster with 6 NMs.
> I manually ran apps in the Capacity Scheduler to test labels and capacity
> assignments.
>
> -Eric
>
>
> On Monday, September 14, 2020, 12:59:17 PM CDT, Masatake Iwasaki <
> iwasak...@oss.nttdata.co.jp> wrote:
>
> Hi folks,
>
> This is the first release candidate for the second release of Apache
> Hadoop 2.10.
> It contains 218 fixes/improvements since 2.10.0 [1].
>
> The RC0 artifacts are at:
>
> http://home.apache.org/~iwasakims/hadoop-2.10.1-RC0/
>
> RC tag is release-2.10.1-RC0:
>
> https://github.com/apache/hadoop/tree/release-2.10.1-RC0
>
> The maven artifacts are hosted here:
>
> https://repository.apache.org/content/repositories/orgapachehadoop-1279/
>
> My public key is available here:
>
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> The vote will run for 5 days, until Saturday, September 19 at 10:00 am PDT.
>
> [1]
> https://issues.apache.org/jira/issues/?jql=project%20in%20(HDFS%2C%20YARN%2C%20HADOOP%2C%20MAPREDUCE)%20AND%20resolution%20%3D%20Fixed%20AND%20fixVersion%20%3D%202.10.1
>
> Thanks,
> Masatake Iwasaki
>
> -
> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>
>
> -
> To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org
>
>


Re: [VOTE] Release Apache Hadoop 2.10.1 (RC0)

2020-09-18 Thread Eric Payne
Masatake,

Thank you for the good work on creating this release!

+1

I downloaded and built the source. I ran a one-node cluster with 6 NMs.
I manually ran apps in the Capacity Scheduler to test labels and capacity 
assignments.

-Eric


On Monday, September 14, 2020, 12:59:17 PM CDT, Masatake Iwasaki 
 wrote: 

Hi folks,

This is the first release candidate for the second release of Apache Hadoop 
2.10.
It contains 218 fixes/improvements since 2.10.0 [1].

The RC0 artifacts are at:
http://home.apache.org/~iwasakims/hadoop-2.10.1-RC0/

RC tag is release-2.10.1-RC0:
https://github.com/apache/hadoop/tree/release-2.10.1-RC0

The maven artifacts are hosted here:
https://repository.apache.org/content/repositories/orgapachehadoop-1279/

My public key is available here:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

The vote will run for 5 days, until Saturday, September 19 at 10:00 am PDT.

[1] 
https://issues.apache.org/jira/issues/?jql=project%20in%20(HDFS%2C%20YARN%2C%20HADOOP%2C%20MAPREDUCE)%20AND%20resolution%20%3D%20Fixed%20AND%20fixVersion%20%3D%202.10.1

Thanks,
Masatake Iwasaki



-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-15588) Arbitrarily low values for `dfs.block.access.token.lifetime` aren't safe and can cause a healthy datanode to be excluded

2020-09-18 Thread sr2020 (Jira)
sr2020 created HDFS-15588:
-

 Summary: Arbitrarily low values for 
`dfs.block.access.token.lifetime` aren't safe and can cause a healthy datanode 
to be excluded
 Key: HDFS-15588
 URL: https://issues.apache.org/jira/browse/HDFS-15588
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs, hdfs-client, security
Reporter: sr2020


*Description*:
Setting `dfs.block.access.token.lifetime` to an arbitrarily low value (like 1) 
makes the lifetime of a block token very short; as a result, healthy 
datanodes can be wrongly excluded by the client due to 
`InvalidBlockTokenException`.

More specifically, in `nextBlockOutputStream`, the client gets the 
`accessToken` from the namenode and uses it to talk to the datanode. The 
lifetime of the `accessToken` can be made very small (like 1 min) by setting 
`dfs.block.access.token.lifetime`. Under some extreme conditions (a VM 
migration, a temporary network issue, or a stop-the-world GC), the `accessToken` 
can expire by the time the client uses it to talk to the datanode. 
If it has expired, `createBlockOutputStream` returns false (and masks the 
`InvalidBlockTokenException`), so the client assumes the datanode is 
unhealthy, marks it as "excluded", and will never read/write on it.


*Proposed solution*:
A simple retry on the same datanode after catching `InvalidBlockTokenException` 
would solve this problem (assuming the extreme conditions are rare). 
Since `dfs.block.access.token.lifetime` currently even accepts values 
like 0, we could also prevent users from setting it to a small value 
(e.g., enforce a minimum of 5 minutes for this parameter).

We have submitted a patch that retries after catching `InvalidBlockTokenException` 
in `nextBlockOutputStream`. We can also provide a patch enforcing a larger 
minimum value for `dfs.block.access.token.lifetime` if that is a better way to 
handle this.
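The proposed retry can be sketched as follows. This is a minimal illustration, not Hadoop's actual client code: `TokenExpiredException`, `withOneRetry`, and the simulated action are illustrative stand-ins for `InvalidBlockTokenException` and the real `nextBlockOutputStream` logic.

```java
import java.util.concurrent.Callable;

public class RetryOnExpiredToken {

    // Stand-in for Hadoop's InvalidBlockTokenException (illustrative only).
    static class TokenExpiredException extends Exception {
    }

    // Runs the action; if the first attempt fails because the token
    // expired, refreshes the token and retries once on the same datanode
    // instead of marking it as excluded.
    static <T> T withOneRetry(Callable<T> action, Runnable refreshToken)
            throws Exception {
        try {
            return action.call();
        } catch (TokenExpiredException e) {
            // Token likely expired mid-flight (GC pause, VM migration,
            // network hiccup): fetch a fresh one and retry the same node.
            refreshToken.run();
            return action.call();
        }
    }

    public static void main(String[] args) throws Exception {
        // Simulate: the first attempt fails with an expired token, the
        // retry succeeds after the token is refreshed.
        final boolean[] refreshed = {false};
        String result = withOneRetry(() -> {
            if (!refreshed[0]) {
                throw new TokenExpiredException();
            }
            return "block stream opened";
        }, () -> refreshed[0] = true);
        System.out.println(result); // prints "block stream opened"
    }
}
```

Retrying only once keeps the behavior bounded: if the second attempt also fails, the datanode is treated as unhealthy as before.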




--
This message was sent by Atlassian Jira
(v8.3.4#803005)




Two more binding votes required for Apache Hadoop 2.10.1 RC0

2020-09-18 Thread Wei-Chiu Chuang
Masatake is doing a great job rolling out 2.10.1 RC0. Let's give him a
final push to get it out.

Thanks,
Wei-Chiu


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-09-18 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/269/

[Sep 17, 2020 5:43:00 AM] (noreply) HDFS-15578: Fix the rename issues with 
fallback fs enabled (#2305). Contributed by Uma Maheswara Rao G.
[Sep 17, 2020 9:20:08 AM] (noreply) HDFS-15568. namenode start failed to start 
when dfs.namenode.max.snapshot.limit set. (#2296)
[Sep 17, 2020 1:11:42 PM] (Stephen O'Donnell) HDFS-15415. Reduce locking in 
Datanode DirectoryScanner. Contributed by Stephen O'Donnell
[Sep 17, 2020 5:39:19 PM] (noreply) HADOOP-17208. 
LoadBalanceKMSClientProvider#deleteKey should invalidateCache via all 
KMSClientProvider instances. (#2259)
[Sep 17, 2020 5:57:19 PM] (Szilard Nemeth) YARN-9333. 
TestFairSchedulerPreemption.testRelaxLocalityPreemptionWithNoLessAMInRemainingNodes
 fails intermittently. Contributed by Peter Bacsko




-1 overall


The following subsystems voted -1:
asflicense pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

Failed junit tests :

   hadoop.crypto.key.kms.server.TestKMS 
   hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized 
   hadoop.hdfs.server.namenode.ha.TestPipelinesFailover 
   hadoop.hdfs.TestFileChecksum 
   hadoop.hdfs.TestFileChecksumCompositeCrc 
   hadoop.hdfs.TestGetFileChecksum 
   hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier 
   hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes 
   hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped 
   hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks 
   hadoop.hdfs.TestSnapshotCommands 
   hadoop.fs.contract.router.web.TestRouterWebHDFSContractMkdir 
   hadoop.fs.contract.router.TestRouterHDFSContractRootDirectorySecure 
   hadoop.hdfs.server.federation.router.TestRouterMultiRack 
   hadoop.fs.contract.router.web.TestRouterWebHDFSContractAppend 
   hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination 
   hadoop.fs.contract.router.web.TestRouterWebHDFSContractConcat 
   hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate 
   hadoop.hdfs.server.federation.router.TestRouterAllResolver 
   hadoop.fs.contract.router.web.TestRouterWebHDFSContractRename 
   hadoop.hdfs.server.federation.router.TestDisableNameservices 
   hadoop.hdfs.server.federation.router.TestRouterRpc 
   hadoop.fs.contract.router.web.TestRouterWebHDFSContractDelete 
   hadoop.fs.contract.router.web.TestRouterWebHDFSContractOpen 
   hadoop.hdfs.server.federation.router.TestSafeMode 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.fs.contract.router.web.TestRouterWebHDFSContractSeek 
   hadoop.fs.contract.router.TestRouterHDFSContractDelete 
   hadoop.fs.contract.router.TestRouterHDFSContractAppend 
   hadoop.yarn.server.nodemanager.TestNodeStatusUpdater 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
  

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/269/artifact/out/diff-compile-cc-root.txt
  [48K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/269/artifact/out/diff-compile-javac-root.txt
  [568K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/269/artifact/out/diff-checkstyle-root.txt
  [16M]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/269/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/269/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/269/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   

[jira] [Resolved] (HDFS-15587) Hadoop Client version 3.2.1 vulnerability

2020-09-18 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-15587.
---
Resolution: Invalid

> Hadoop Client version 3.2.1 vulnerability
> -
>
> Key: HDFS-15587
> URL: https://issues.apache.org/jira/browse/HDFS-15587
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: Laszlo Czol
>Priority: Minor
>
>  I'm having a problem with hadoop-client version 3.2.1 in my dependency 
> tree. It contains a vulnerable jar: org.apache.hadoop : 
> hadoop-mapreduce-client-core : 3.2.1. The vulnerability is 
> CVE-2017-3166: basically, _if a file in an encryption zone with access 
> permissions that make it world readable is localized via YARN's localization 
> mechanism, that file will be stored in a world-readable location and can be 
> shared freely with any application that requests to localize that file_. The 
> problem is that updating to hadoop-client version 3.3.0 does not remove the 
> vulnerability, and I would rather not downgrade to version 2.8.1, 
> which is the next non-vulnerable version.
> Do you have any roadmap or plan for this?






Re: [VOTE] Release Apache Hadoop 2.10.1 (RC0)

2020-09-18 Thread Masatake Iwasaki

+1(binding) from myself.

* launched a pseudo-distributed cluster using the binary tarball on CentOS 7.
* ran some example mapreduce jobs.

* built the source tarball on CentOS 8 with `-Pdist -Pnative`.
* launched a 3-node cluster with NN-HA and RM-HA enabled using docker.
* ran some example mapreduce jobs.

* built RPMs from the source tarball using the master branch of Bigtop on CentOS 7.
* deployed a 3-node cluster using the docker provisioner of Bigtop.
* ran smoke-tests of hdfs, yarn, mapreduce and kms.

* built RPMs of hbase-1.5.0 and hive-2.3.6 using the master branch of Bigtop on 
CentOS 7.
  
Thanks,

Masatake Iwasaki

On 2020/09/15 2:59, Masatake Iwasaki wrote:

Hi folks,

This is the first release candidate for the second release of Apache Hadoop 
2.10.
It contains 218 fixes/improvements since 2.10.0 [1].

The RC0 artifacts are at:
http://home.apache.org/~iwasakims/hadoop-2.10.1-RC0/

RC tag is release-2.10.1-RC0:
https://github.com/apache/hadoop/tree/release-2.10.1-RC0

The maven artifacts are hosted here:
https://repository.apache.org/content/repositories/orgapachehadoop-1279/

My public key is available here:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS

The vote will run for 5 days, until Saturday, September 19 at 10:00 am PDT.

[1] 
https://issues.apache.org/jira/issues/?jql=project%20in%20(HDFS%2C%20YARN%2C%20HADOOP%2C%20MAPREDUCE)%20AND%20resolution%20%3D%20Fixed%20AND%20fixVersion%20%3D%202.10.1

Thanks,
Masatake Iwasaki




[jira] [Created] (HDFS-15587) Hadoop Client version 3.2.1 vulnerability

2020-09-18 Thread Laszlo Czol (Jira)
Laszlo Czol created HDFS-15587:
--

 Summary: Hadoop Client version 3.2.1 vulnerability
 Key: HDFS-15587
 URL: https://issues.apache.org/jira/browse/HDFS-15587
 Project: Hadoop HDFS
  Issue Type: Wish
Reporter: Laszlo Czol


 I'm having a problem with hadoop-client version 3.2.1 in my dependency tree. 
It contains a vulnerable jar: org.apache.hadoop : hadoop-mapreduce-client-core : 
3.2.1. The vulnerability is CVE-2017-3166: basically, _if a file in 
an encryption zone with access permissions that make it world readable is 
localized via YARN's localization mechanism, that file will be stored in a 
world-readable location and can be shared freely with any application that 
requests to localize that file_. The problem is that updating to hadoop-client 
version 3.3.0 does not remove the vulnerability, and I would rather not 
downgrade to version 2.8.1, which is the next non-vulnerable version.
Do you have any roadmap or plan for this?






Apache Hadoop qbt Report: branch-2.10+JDK7 on Linux/x86_64

2020-09-18 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/60/

No changes




-1 overall


The following subsystems voted -1:
asflicense hadolint jshint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   hadoop-build-tools/src/main/resources/checkstyle/checkstyle.xml 
   hadoop-build-tools/src/main/resources/checkstyle/suppressions.xml 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-tools/hadoop-azure/src/config/checkstyle.xml 
   hadoop-tools/hadoop-resourceestimator/src/config/checkstyle.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

Failed junit tests :

   hadoop.security.token.delegation.web.TestWebDelegationToken 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   
hadoop.hdfs.server.blockmanagement.TestReplicationPolicyWithUpgradeDomain 
   hadoop.hdfs.TestMultipleNNPortQOP 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.hdfs.server.federation.router.TestRouterQuota 
   hadoop.hdfs.server.federation.router.TestRouterNamenodeHeartbeat 
   hadoop.hdfs.server.federation.resolver.order.TestLocalResolver 
   hadoop.hdfs.server.federation.resolver.TestMultipleDestinationResolver 
   hadoop.yarn.server.resourcemanager.TestClientRMService 
   hadoop.mapreduce.jobhistory.TestHistoryViewerPrinter 
   hadoop.fs.azure.TestClientThrottlingAnalyzer 
   hadoop.resourceestimator.solver.impl.TestLpSolver 
   hadoop.resourceestimator.service.TestResourceEstimatorService 
  

   jshint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/60/artifact/out/diff-patch-jshint.txt
  [208K]

   cc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/60/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/60/artifact/out/diff-compile-javac-root.txt
  [436K]

   checkstyle:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/60/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/60/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/60/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/60/artifact/out/diff-patch-pylint.txt
  [60K]

   shellcheck:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/60/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/60/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/60/artifact/out/whitespace-eol.txt
  [12M]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/60/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/60/artifact/out/xml.txt
  [4.0K]

   javadoc:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/60/artifact/out/diff-javadoc-javadoc-root.txt
  [20K]

   unit:

   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/60/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [188K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/60/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [280K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/60/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs_src_contrib_bkjournal.txt
  [12K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/60/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
  [36K]
   
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/60/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [120K]
   

[jira] [Resolved] (HDFS-15585) ViewDFS#getDelegationToken should not throw UnsupportedOperationException.

2020-09-18 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-15585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HDFS-15585.
-
Fix Version/s: 3.4.0
   3.3.1
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to trunk and branch-3.3

Thanx [~umamaheswararao] for the contribution!!!

> ViewDFS#getDelegationToken should not throw UnsupportedOperationException.
> --
>
> Key: HDFS-15585
> URL: https://issues.apache.org/jira/browse/HDFS-15585
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.4.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When starting Hive in a secure environment, it throws 
> UnsupportedOperationException from ViewDFS.
> at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:736) 
> ~[hive-service-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   at 
> org.apache.hive.service.server.HiveServer2.startHiveServer2(HiveServer2.java:1077)
>  ~[hive-service-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   ... 9 more
> Caused by: java.lang.UnsupportedOperationException
>   at 
> org.apache.hadoop.hdfs.ViewDistributedFileSystem.getDelegationToken(ViewDistributedFileSystem.java:1042)
>  ~[hadoop-hdfs-client-3.1.1.7.2.3.0-54.jar:?]
>   at 
> org.apache.hadoop.security.token.DelegationTokenIssuer.collectDelegationTokens(DelegationTokenIssuer.java:95)
>  ~[hadoop-common-3.1.1.7.2.3.0-54.jar:?]
>   at 
> org.apache.hadoop.security.token.DelegationTokenIssuer.addDelegationTokens(DelegationTokenIssuer.java:76)
>  ~[hadoop-common-3.1.1.7.2.3.0-54.jar:?]
>   at 
> org.apache.tez.common.security.TokenCache.obtainTokensForFileSystemsInternal(TokenCache.java:140)
>  ~[tez-api-0.9.1.7.2.3.0-54.jar:0.9.1.7.2.3.0-54]
>   at 
> org.apache.tez.common.security.TokenCache.obtainTokensForFileSystemsInternal(TokenCache.java:101)
>  ~[tez-api-0.9.1.7.2.3.0-54.jar:0.9.1.7.2.3.0-54]
>   at 
> org.apache.tez.common.security.TokenCache.obtainTokensForFileSystems(TokenCache.java:77)
>  ~[tez-api-0.9.1.7.2.3.0-54.jar:0.9.1.7.2.3.0-54]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.createLlapCredentials(TezSessionState.java:443)
>  ~[hive-exec-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.openInternal(TezSessionState.java:354)
>  ~[hive-exec-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:313)
>  ~[hive-exec-3.1.3000.7.2.3.0-54.jar:3.1.3000.7.2.3.0-54]






[jira] [Created] (HDFS-15586) HBase NodeLabel support

2020-09-18 Thread jianghua zhu (Jira)
jianghua zhu created HDFS-15586:
---

 Summary: HBase NodeLabel support
 Key: HDFS-15586
 URL: https://issues.apache.org/jira/browse/HDFS-15586
 Project: Hadoop HDFS
  Issue Type: New Feature
Reporter: jianghua zhu


We can add a new feature similar to YARN's NodeLabel.
The main purpose is to classify nodes by their resources (cpu, 
memory, etc.) in the cluster so that access to HBase can be more 
efficient.

 


