Re: [NOTICE] Removal of protobuf classes from Hadoop Token's public APIs' signature

2020-04-28 Thread Wei-Chiu Chuang
I'm sorry for coming to this late. I missed this message. It should have
been a DISCUSS thread rather than NOTICE.

Looks like this is inevitable. But we should make the downstream developers
aware & make the update easier. As long as it is stated clearly how to
update the code to support Hadoop 3.3, I am okay with that.

Here's what I suggest:
(1) Label the jira as incompatible (I just updated the jira) and update the
release note to tell app developers how to update their code.
(2) Declare ProtobufHelper a public API (HADOOP-17019).


Tez doesn't use the removed Token API, but there's code that breaks with
the relocated protobuf class. The ProtobufHelper API will make this
transition much easier.

Other downstream projects that break with the relocated protobuf include Ozone
and HBase, but neither of them uses the removed Token API.
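
To make the migration concrete for downstream code, the change would look roughly
like the sketch below. The exact ProtobufHelper conversion method names are my
assumption here, not a confirmed API, so check the HADOOP-17019 / HADOOP-16621
patches for what actually ships:

{code:java}
import org.apache.hadoop.ipc.ProtobufHelper;
import org.apache.hadoop.security.proto.SecurityProtos.TokenProto;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

// Minimal migration sketch, assuming ProtobufHelper exposes token conversion helpers
// (tokenFromProto/protoFromToken are illustrative names, not a confirmed API).
public final class TokenMigrationSketch {
  private TokenMigrationSketch() {}

  static Token<TokenIdentifier> fromProto(TokenProto tokenProto) {
    // Before (Hadoop <= 3.2): new Token<>(tokenProto)  -- the removed constructor
    return ProtobufHelper.tokenFromProto(tokenProto);
  }

  static TokenProto toProto(Token<? extends TokenIdentifier> token) {
    // Before (Hadoop <= 3.2): token.toTokenProto()     -- the removed method
    return ProtobufHelper.protoFromToken(token);
  }
}
{code}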


On Wed, Jan 8, 2020 at 4:40 AM Vinayakumar B wrote:

> Hi All,
>
> This mail is to notify you about the removal of the following public APIs
> from Hadoop Common.
>
>  ClassName: org.apache.hadoop.security.token.Token
>  APIs:
>  public Token(TokenProto tokenPB);
>  public TokenProto toTokenProto();
>
> Reason: These APIs have generated protobuf classes in their signatures.
> Due to the protobuf upgrade in trunk (soon to be the 3.3.0 release), these
> APIs break downstream builds even though downstreams don't call them (merely
> loading the Token class is enough to fail). Downstreams still reference the
> older protobuf version (2.5.0), hence their builds break.
>
> These APIs were added for an internal purpose (HADOOP-12563): to
> support serializing tokens using protobuf in UGI Credentials.
> The same purpose can be achieved using helper classes without introducing
> protobuf classes in API signatures.
>
> Token.java is marked as Evolving, so I believe its APIs can be changed
> whenever absolutely necessary.
>
> Jira https://issues.apache.org/jira/browse/HADOOP-16621 has been
> filed to fix the downstream build failure.
>
> Since these APIs were added for an internal purpose, the easy approach is to
> remove them and use the helper classes. Otherwise, as mentioned in
> HADOOP-16621, the workaround would add unnecessary code to maintain.
>
> If anyone happens to be using these APIs outside the Hadoop project, please
> reply to this mail immediately.
>
> If there is no objection by next week, I will go ahead with the removal of
> the above APIs in HADOOP-16621.
>
> -Vinay
>


[jira] [Created] (YARN-10251) Show extended resources on legacy RM UI.

2020-04-28 Thread Eric Payne (Jira)
Eric Payne created YARN-10251:
-

 Summary: Show extended resources on legacy RM UI.
 Key: YARN-10251
 URL: https://issues.apache.org/jira/browse/YARN-10251
 Project: Hadoop YARN
  Issue Type: Improvement
Reporter: Eric Payne
Assignee: Eric Payne
 Attachments: Legacy RM UI With Not All Resources Shown.png, Updated 
Legacy RM UI With All Resources Shown.png








[jira] [Created] (YARN-10250) Container Relaunch - find: File system loop detected

2020-04-28 Thread Matthew Sharp (Jira)
Matthew Sharp created YARN-10250:


 Summary: Container Relaunch - find: File system loop detected
 Key: YARN-10250
 URL: https://issues.apache.org/jira/browse/YARN-10250
 Project: Hadoop YARN
  Issue Type: Bug
Affects Versions: 3.2.0
Reporter: Matthew Sharp


The Hive LLAP YARN service tries to relaunch after a container failure, and when it 
retries on the same node we see it fail with:
{code:java}
find: File system loop detected; ‘./lib/llap-27Apr2020.tar.gz’ is part of the 
same file system loop as ‘./lib’. {code}
 

YARN-8667 attempted to clean up the prior symlinks before relaunching, but in 
this case the symlink still exists, because the relaunch recreates the symlinks 
right before writing to directory.info for logging.

 

The following line appears to be the culprit:  
[https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java#L1346]
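
To reproduce the symptom outside the NodeManager, the standalone sketch below (my 
own illustration, not the ContainerLaunch code) recreates the same shape of loop, 
a link under ./lib that resolves back to ./lib itself, and shows the cycle being 
flagged as soon as links are followed, which appears to be what the find -L 
invocation behind directory.info runs into:

{code:java}
import java.io.IOException;
import java.nio.file.FileSystemLoopException;
import java.nio.file.FileVisitOption;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.util.EnumSet;

public class SymlinkLoopSketch {
  public static void main(String[] args) throws IOException {
    // Recreate the reported layout: a symlink under ./lib pointing back at ./lib itself.
    Path workDir = Files.createTempDirectory("container-work");
    Path lib = Files.createDirectory(workDir.resolve("lib"));
    Files.createSymbolicLink(lib.resolve("llap-archive.tar.gz"), lib);

    // Walking the tree while following links detects the cycle, just as
    // `find -L` reports "File system loop detected" for the same layout.
    Files.walkFileTree(workDir, EnumSet.of(FileVisitOption.FOLLOW_LINKS), Integer.MAX_VALUE,
        new SimpleFileVisitor<Path>() {
          @Override
          public FileVisitResult visitFileFailed(Path file, IOException exc) {
            if (exc instanceof FileSystemLoopException) {
              System.out.println("File system loop detected at " + file);
            }
            return FileVisitResult.CONTINUE;
          }
        });
  }
}
{code}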

 

 

 

 






[jira] [Created] (YARN-10249) Various ResourceManager tests are failing on branch-3.2

2020-04-28 Thread Benjamin Teke (Jira)
Benjamin Teke created YARN-10249:


 Summary: Various ResourceManager tests are failing on branch-3.2
 Key: YARN-10249
 URL: https://issues.apache.org/jira/browse/YARN-10249
 Project: Hadoop YARN
  Issue Type: Bug
  Components: resourcemanager
Affects Versions: 3.2.0
Reporter: Benjamin Teke
Assignee: Benjamin Teke


Various tests are failing on branch-3.2. Some examples can be found in: 
YARN-10003, YARN-10002, YARN-10237.






[jira] [Created] (YARN-10248) When allowed-gpu-devices is configured, excluded GPUs are still visible to containers

2020-04-28 Thread zhao yufei (Jira)
zhao yufei created YARN-10248:
-

 Summary: When allowed-gpu-devices is configured, excluded GPUs are still visible to containers
 Key: YARN-10248
 URL: https://issues.apache.org/jira/browse/YARN-10248
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.2.1
Reporter: zhao yufei









Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-04-28 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/669/

[Apr 27, 2020 9:14:42 PM] (jhung) YARN-8382. cgroup file leak in NM. 
Contributed by Hu Ziqian.
[Apr 28, 2020 12:14:21 AM] (ericp) MAPREDUCE-7277. IndexCache totalMemoryUsed 
differs from cache contents.




-1 overall


The following subsystems voted -1:
asflicense compile findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at 
MiniKdc.java:org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value 
of called method Dereferenced at MiniKdc.java:[line 515] 

FindBugs :

   module:hadoop-common-project/hadoop-auth 
   
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
 HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator At MultiSchemeAuthenticationHandler.java:of keySet iterator 
instead of entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 
192] 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue At CipherSuite.java:unknownValue At 
CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue At 
CryptoProtocolVersion.java:unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method Dereferenced at 
FileUtil.java:org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to 
return value of called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path,
 File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:[line 383] 
   Useless condition:lazyPersist == true at this point At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value At DoubleWritable.java: At 
DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value At DoubleWritable.java:int) 
incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value At FloatWritable.java: At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value At FloatWritable.java:int) 
incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return 
value of called method Dereferenced at 
IOUtils.java:org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) 
due to return value of called method Dereferenced at IOUtils.java:[line 389] 
   Possible bad parsing of shift operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At 
Utils.java:operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 
398] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory)
 unconditionally sets the field mmfImpl At DefaultMetricsFactory.java:mmfImpl 
At DefaultMetricsFactory.java:[line 49] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) 
unconditionally sets the field miniClusterMode At 
DefaultMetricsSystem.java:miniClusterMode At DefaultMetricsSystem.java:[line 
92] 
   Useless object stored in variable seqOs of method 
org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.a

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-04-28 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1483/

[Apr 27, 2020 6:26:11 AM] (snemeth) YARN-10194. YARN RMWebServices 
/scheduler-conf/validate leaks ZK
[Apr 27, 2020 8:20:47 AM] (github) HDFS-15298 Fix the findbugs warnings 
introduced in HDFS-15217 (#1979)
[Apr 27, 2020 1:19:15 PM] (pjoseph) YARN-10156. Destroy Jersey Client in 
TimelineConnector.
[Apr 27, 2020 1:43:51 PM] (github) HDFS-1820. FTPFileSystem attempts to close 
the outputstream even when it
[Apr 27, 2020 7:10:00 PM] (ericp) MAPREDUCE-7277. IndexCache totalMemoryUsed 
differs from cache contents.
[Apr 27, 2020 8:35:36 PM] (aajisaka) YARN-9848. Revert YARN-4946. Contributed 
by Steven Rand.
[Apr 27, 2020 9:17:14 PM] (aajisaka) HDFS-15286. Concat on a same file deleting 
the file. Contributed by




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common
 
   org.apache.hadoop.yarn.server.webapp.WebServiceClient.sslFactory should 
be package protected At WebServiceClient.java: At WebServiceClient.java:[line 
42] 

Failed junit tests :

   hadoop.registry.server.dns.TestRegistryDNS 
   hadoop.hdfs.server.namenode.ha.TestConfiguredFailoverProxyProvider 
   hadoop.hdfs.TestByteBufferPread 
   hadoop.TestRefreshCallQueue 
   hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication 
   hadoop.metrics2.sink.TestRollingFileSystemSinkWithSecureHdfs 
   hadoop.hdfs.server.namenode.ha.TestStandbyIsHot 
   hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithInProgressTailing 
   hadoop.hdfs.server.namenode.TestNamenodeStorageDirectives 
   hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA 
   hadoop.hdfs.server.namenode.ha.TestDNFencing 
   hadoop.security.TestPermission 
   hadoop.hdfs.TestDFSStripedOutputStreamWithFailure 
   hadoop.security.TestPermissionSymlinks 
   hadoop.tools.TestJMXGet 
   hadoop.hdfs.TestLeaseRecovery 
   hadoop.security.TestRefreshUserMappings 
   hadoop.hdfs.server.namenode.ha.TestBootstrapStandby 
   hadoop.hdfs.nfs.nfs3.TestRpcProgramNfs3 
   
hadoop.hdfs.server.federation.router.TestRouterRPCMultipleDestinationMountTableResolver
 
   hadoop.hdfs.server.federation.router.TestRouterRpc 
   hadoop.hdfs.server.federation.router.TestRouterAdminCLI 
   hadoop.hdfs.server.federation.router.TestDisableNameservices 
   hadoop.hdfs.server.federation.store.driver.TestStateStoreFileSystem 
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderServer 
   hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesReservation 
   hadoop.yarn.webapp.TestRMWithCSRFFilter 
   hadoop.yarn.service.TestCleanupAfterKill 
   hadoop.streaming.TestStreamReduceNone 
   hadoop.tools.TestDistCpViewFs 
   hadoop.tools.dynamometer.workloadgenerator.TestWorkloadGenerator 
   hadoop.tools.dynamometer.TestDynamometerInfra 
   hadoop.tools.dynamometer.TestDynamometerInfra 
   hadoop.tools.dynamometer.workloadgenerator.TestWorkloadGenerator 
   hadoop.mapred.gridmix.TestSleepJob 
   hadoop.mapred.gridmix.TestGridmixSubmission 
   hadoop.mapred.gridmix.TestLoadJob 
   hadoop.fs.s3a.commit.staging.TestStagingPartitionedFileListing 
   hadoop.fs.s3a.commit.staging.TestDirectoryCommitterScale 
   hadoop.fs.s3a.commit.TestTasks 
   hadoop.fs.s3a.commit.staging.TestStagingCommitter 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1483/artifact/out/diff-compile-cc-root.txt
  [36K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1483/artifact/out/diff-compile-ja

[jira] [Created] (YARN-10247) Application priority queue ACLs are not respected

2020-04-28 Thread Sunil G (Jira)
Sunil G created YARN-10247:
--

 Summary: Application priority queue ACLs are not respected
 Key: YARN-10247
 URL: https://issues.apache.org/jira/browse/YARN-10247
 Project: Hadoop YARN
  Issue Type: Task
  Components: capacity scheduler
Reporter: Sunil G
Assignee: Sunil G


This is a regression from the queue path jira.

App priority ACLs are not working correctly.
{code:java}
yarn.scheduler.capacity.root.B.acl_application_max_priority=[user=john 
group=users max_priority=4]
{code}
max_priority enforcement is not working. For user john, the maximum supported 
priority is 4. However, I can submit an application with priority 6 as this user.
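
To make the expected behavior concrete, here is a minimal sketch (my own 
illustration, not the CapacityScheduler code) of the bound check the ACL above is 
supposed to impose at submission time:

{code:java}
// Sketch only: hard-codes the ACL from the example above
// (user=john, max_priority=4) instead of parsing capacity-scheduler.xml.
public final class PriorityAclSketch {
  private static final String ACL_USER = "john";
  private static final int ACL_MAX_PRIORITY = 4;

  static int checkPriority(String user, int requestedPriority) {
    if (ACL_USER.equals(user) && requestedPriority > ACL_MAX_PRIORITY) {
      // Expected behavior: reject (or clamp) the submission instead of accepting it.
      throw new IllegalArgumentException("Priority " + requestedPriority
          + " exceeds max_priority " + ACL_MAX_PRIORITY + " allowed for user " + user);
    }
    return requestedPriority;
  }

  public static void main(String[] args) {
    System.out.println(checkPriority("john", 3)); // within the ACL, accepted
    try {
      checkPriority("john", 6);                   // should be rejected; the bug lets it through
    } catch (IllegalArgumentException expected) {
      System.out.println(expected.getMessage());
    }
  }
}
{code}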


