[jira] [Created] (YARN-10239) Capacity is zero for auto created leaf queues after leaf-queue-template.capacity has been updated

2020-04-19 Thread Akhil PB (Jira)
Akhil PB created YARN-10239:
---

 Summary: Capacity is zero for auto created leaf queues after 
leaf-queue-template.capacity has been updated
 Key: YARN-10239
 URL: https://issues.apache.org/jira/browse/YARN-10239
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Akhil PB


In the scheduler response, the capacity of an auto-created leaf queue becomes 
zero after the leaf-queue-template.capacity of its managed parent queue has 
been updated.
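
For context, a managed parent queue's leaf-queue template is configured along 
the lines of the capacity-scheduler.xml fragment below; the symptom appears 
after the template capacity value is changed and the configuration refreshed. 
The queue name "managed" and the value 10 are illustrative, not taken from the 
report:

```xml
<configuration>
  <!-- Hypothetical parent queue with auto-creation of leaf queues enabled -->
  <property>
    <name>yarn.scheduler.capacity.root.managed.auto-create-child-queue.enabled</name>
    <value>true</value>
  </property>
  <!-- Template capacity inherited by auto-created leaf queues; updating this
       value is what preceded the zero-capacity symptom described above -->
  <property>
    <name>yarn.scheduler.capacity.root.managed.leaf-queue-template.capacity</name>
    <value>10</value>
  </property>
</configuration>
```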


cc: [~sunilg] [~wangda] [~prabhujoseph]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org



Re: [Hadoop-3.3 Release update]- branch-3.3 has created

2020-04-19 Thread Akira Ajisaka
Hi Brahma,

Thank you for preparing the release.
Could you cut branch-3.3.0? I would like to backport some fixes for 3.3.1
and not for 3.3.0.

Thanks and regards,
Akira

On Fri, Apr 17, 2020 at 11:11 AM Brahma Reddy Battula 
wrote:

> Hi All,
>
> We are down to two blocker issues now (YARN-10194 and YARN-9848), both of
> which are in Patch Available state. Hopefully we can get the RC out soon.
>
> Thanks to @Prabhu Joseph, @masakate, @akira, @Wei-Chiu Chuang, and others
> for helping resolve the blockers.
>
>
>
> On Tue, Apr 14, 2020 at 10:49 PM Brahma Reddy Battula 
> wrote:
>
>>
>> @Prabhu Joseph 
>> >>> Have committed the YARN blocker YARN-10219 to trunk and cherry-picked
>> to branch-3.3. Right now, there are two blocker Jiras - YARN-10233 and
>> HADOOP-16982
>> which I will help to review and commit. Thanks.
>>
>> Looks like you committed YARN-10219. Noted YARN-10233 and HADOOP-16982 as
>> blockers. (We have shipped many releases without YARN-10233 fixed, so it's
>> not newly introduced.) Thanks.
>>
>> @Vinod Kumar Vavilapalli  ,@adam Antal,
>>
>> I noted YARN-9848 as a blocker as you mentioned above.
>>
>> @All,
>>
>> Currently following four blockers are pending for 3.3.0 RC.
>>
>> HADOOP-16963,YARN-10233,HADOOP-16982 and YARN-9848.
>>
>>
>>
>> On Tue, Apr 14, 2020 at 8:11 PM Vinod Kumar Vavilapalli <
>> vino...@apache.org> wrote:
>>
>>> Looks like a really bad bug to me.
>>>
>>> +1 for revert and +1 for making that a 3.3.0 blocker. I think we should
>>> also revert it in a 3.2 maintenance release.
>>>
>>> Thanks
>>> +Vinod
>>>
>>> > On Apr 14, 2020, at 5:03 PM, Adam Antal 
>>> wrote:
>>> >
>>> > Hi everyone,
>>> >
>>> > Sorry for coming a bit late with this, but there's also one jira that
>>> > can have a potential impact on clusters, and we should talk about it.
>>> >
>>> > Steven Rand found this problem earlier and commented on
>>> > https://issues.apache.org/jira/browse/YARN-4946.
>>> > The bug has impact on the RM state store: the RM does not delete apps
>>> - see
>>> > more details in his comment here:
>>> >
>>> https://issues.apache.org/jira/browse/YARN-4946?focusedCommentId=16898599&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16898599
>>> > .
>>> > (FYI: he also created https://issues.apache.org/jira/browse/YARN-9848
>>> > with the revert task.)
>>> >
>>> > It might not be an actual blocker, but since there wasn't any consensus
>>> > about a follow-up action, I thought we should decide how to proceed
>>> > before release 3.3.0.
>>> >
>>> > Regards,
>>> > Adam
>>> >
>>> > On Tue, Apr 14, 2020 at 9:35 AM Prabhu Joseph <
>>> prabhujose.ga...@gmail.com>
>>> > wrote:
>>> >
>>> >> Thanks Brahma for the update.
>>> >>
>>> >> Have committed the YARN blocker YARN-10219 to trunk and cherry-picked
>>> to
>>> >> branch-3.3. Right now, there are two blocker Jiras - YARN-10233 and
>>> >> HADOOP-16982
>>> >> which I will help to review and commit. Thanks.
>>> >>
>>> >> [image: Screen Shot 2020-04-14 at 1.01.51 PM.png]
>>> >>
>>> >> project in (YARN, HADOOP, MAPREDUCE, HDFS) AND priority in (Blocker,
>>> >> Critical) AND resolution = Unresolved AND "Target Version/s" = 3.3.0
>>> ORDER
>>> >> BY priority DESC
>>> >>
>>> >>
>>> >> On Sun, Apr 12, 2020 at 12:19 AM Brahma Reddy Battula <
>>> bra...@apache.org>
>>> >> wrote:
>>> >>
>>> >>> *Pending for 3.3.0 Release:*
>>> >>>
>>> >>> One blocker (HADOOP-16963) awaits confirmation, and the following
>>> >>> jiras are open because they need to be merged to other branches (I am
>>> >>> tracking these; ideally they can be closed, with separate jiras raised
>>> >>> to track the remaining merges).
>>> >>>
>>> >>>
>>> >>> [Pasted Jira search results, 4 issues; query:
>>> >>> https://issues.apache.org/jira/issues/?jql=project%20in%20(%22Hadoop%20HDFS%22)%20AND%20resolution%20%3D%20Unresolved%20AND%20(cf%5B12310320%5D%20%3D%203.3.0%20OR%20fixVersion%20%3D%203.3.0)%20ORDER%20BY%20priority%20DESC
>>> >>> ]
>>> >>> HDFS-14353 (sub-task of HDFS-8031): Erasure Coding: metrics
>>> >>> xmitsInProgress become to negative. Assignee/Reporter: maobaolong.
>>> >>> Major, REOPENED, Unresolved. Created 11/Mar/19, updated 11/Apr/20.
>>> >>> HDFS-14788

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-04-19 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1474/

[Apr 18, 2020 7:57:27 AM] (snemeth) YARN-10189. Code cleanup in 
LeveldbRMStateStore. Contributed by Benjamin
[Apr 18, 2020 8:13:37 AM] (snemeth) YARN-9996. Code cleanup in 
QueueAdminConfigurationMutationACLPolicy.
[Apr 18, 2020 2:12:20 PM] (surendralilhore) MAPREDUCE-7199. HsJobsBlock reuse 
JobACLsManager for checkAccess.
[Apr 18, 2020 2:37:21 PM] (surendralilhore) HDFS-15218. RBF: 
MountTableRefresherService failed to refresh other
[Apr 18, 2020 5:43:44 PM] (github) HADOOP-16944. Use Yetus 0.12.0 in GitHub PR 
(#1917)
[Apr 18, 2020 8:52:07 PM] (github) HDFS-15217 Add more information to longest 
write/read lock held log




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Possible null pointer dereference of effectiveDirective in 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.addCacheDirective(CacheDirectiveInfo,
 EnumSet, boolean) Dereferenced at FSNamesystem.java:[line 7444] 
   Possible null pointer dereference of ret in 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(String, String, 
boolean) Dereferenced at FSNamesystem.java:[line 3213] 
   Possible null pointer dereference of res in 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.renameTo(String, String, 
boolean, Options$Rename[]) Dereferenced at FSNamesystem.java:[line 3248] 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common
 
   org.apache.hadoop.yarn.server.webapp.WebServiceClient.sslFactory should 
be package protected At WebServiceClient.java:[line 42] 

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream; obligation to 
clean up the resource created at CosNativeFileSystemStore.java:[line 252] is 
not discharged 
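
The "reliance on default encoding" findings above flag the general pattern 
sketched below (a generic illustration, not the actual CosN code): 
`new String(byte[])` decodes with the platform default charset, so results can 
vary across machines; the fix is to name the charset explicitly.

```java
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    // Flagged pattern: new String(byte[]) uses the JVM's default charset,
    // so the decoded text depends on platform/locale settings.
    static String decodeWithDefault(byte[] bytes) {
        return new String(bytes);
    }

    // Fix: state the charset explicitly so behavior is deterministic.
    static String decodeUtf8(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] data = "caf\u00e9".getBytes(StandardCharsets.UTF_8);
        // decodeUtf8(data) always yields "café"; decodeWithDefault(data)
        // depends on the JVM's file.encoding and may differ.
        System.out.println(decodeUtf8(data).equals("caf\u00e9")); // prints "true"
    }
}
```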


Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-04-19 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/660/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-minikdc 
   Possible null pointer dereference in 
org.apache.hadoop.minikdc.MiniKdc.delete(File) due to return value of called 
method Dereferenced at MiniKdc.java:[line 515] 
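
"Null pointer dereference due to return value of called method" usually means 
code iterates or indexes a result that can legitimately be null. A common 
instance of the pattern (a hypothetical sketch, not the MiniKdc code itself) 
is File.listFiles(), which returns null for unreadable or non-directory paths:

```java
import java.io.File;

public class NullReturnDemo {
    // Flagged pattern: listFiles() can return null (non-directory or I/O
    // error), so the enhanced for loop may throw NullPointerException.
    static int countUnsafe(File dir) {
        int n = 0;
        for (File f : dir.listFiles()) { // possible null dereference
            n++;
        }
        return n;
    }

    // Guarded version: check the return value before using it.
    static int countSafe(File dir) {
        File[] files = dir.listFiles();
        return (files == null) ? 0 : files.length;
    }

    public static void main(String[] args) {
        // Nonexistent path: listFiles() returns null, guard yields 0.
        System.out.println(countSafe(new File("no-such-dir"))); // prints "0"
    }
}
```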

FindBugs :

   module:hadoop-common-project/hadoop-auth 
   
org.apache.hadoop.security.authentication.server.MultiSchemeAuthenticationHandler.authenticate(HttpServletRequest,
 HttpServletResponse) makes inefficient use of keySet iterator instead of 
entrySet iterator At MultiSchemeAuthenticationHandler.java:[line 192] 
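
The keySet finding above refers to a well-known pattern: iterating keySet() 
and calling get(key) inside the loop does a second hash lookup per key, while 
entrySet() yields key and value together. A generic sketch with hypothetical 
names (not the handler code):

```java
import java.util.HashMap;
import java.util.Map;

public class MapIterationDemo {
    // Flagged pattern: one extra hash lookup per key via get().
    static int sumViaKeySet(Map<String, Integer> counts) {
        int sum = 0;
        for (String key : counts.keySet()) {
            sum += counts.get(key); // redundant lookup
        }
        return sum;
    }

    // Preferred: each entry already carries both key and value.
    static int sumViaEntrySet(Map<String, Integer> counts) {
        int sum = 0;
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            sum += e.getValue();
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put("a", 1);
        m.put("b", 2);
        System.out.println(sumViaKeySet(m) == sumViaEntrySet(m)); // prints "true"
    }
}
```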

FindBugs :

   module:hadoop-common-project/hadoop-common 
   org.apache.hadoop.crypto.CipherSuite.setUnknownValue(int) 
unconditionally sets the field unknownValue At CipherSuite.java:[line 44] 
   org.apache.hadoop.crypto.CryptoProtocolVersion.setUnknownValue(int) 
unconditionally sets the field unknownValue At CryptoProtocolVersion.java:[line 67] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.FileUtil.fullyDeleteOnExit(File) due to return value of 
called method Dereferenced at FileUtil.java:[line 118] 
   Possible null pointer dereference in 
org.apache.hadoop.fs.RawLocalFileSystem.handleEmptyDstDirectoryOnWindows(Path, 
File, Path, File) due to return value of called method Dereferenced at 
RawLocalFileSystem.java:[line 383] 
   Useless condition: lazyPersist == true at this point At 
CommandWithDestination.java:[line 502] 
   org.apache.hadoop.io.DoubleWritable.compareTo(DoubleWritable) 
incorrectly handles double value At DoubleWritable.java:[line 78] 
   org.apache.hadoop.io.DoubleWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles double value At DoubleWritable.java:[line 97] 
   org.apache.hadoop.io.FloatWritable.compareTo(FloatWritable) incorrectly 
handles float value At FloatWritable.java:[line 71] 
   org.apache.hadoop.io.FloatWritable$Comparator.compare(byte[], int, int, 
byte[], int, int) incorrectly handles float value At FloatWritable.java:[line 89] 
   Possible null pointer dereference in 
org.apache.hadoop.io.IOUtils.listDirectory(File, FilenameFilter) due to return 
value of called method Dereferenced at IOUtils.java:[line 389] 
   Possible bad parsing of shift operation in 
org.apache.hadoop.io.file.tfile.Utils$Version.hashCode() At Utils.java:[line 398] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsFactory.setInstance(MutableMetricsFactory)
 unconditionally sets the field mmfImpl At DefaultMetricsFactory.java:[line 49] 
   
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.setMiniClusterMode(boolean) 
unconditionally sets the field miniClusterMode At 
DefaultMetricsSystem.java:[line 92] 
   Useless object stored in variable seqOs of method 
org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.addOrUpdateToken(AbstractDelegationTokenIdentifier,
 AbstractDelegationTokenSecretManager$DelegationTokenInformation, boolean) At 
ZKDelegationTokenSecretManager.java: [entry truncated in the original message]
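
The DoubleWritable/FloatWritable findings above concern comparisons that 
mishandle NaN and signed zero. A generic sketch of the pattern (not the 
actual Writable code): operator-based comparisons silently treat NaN as equal 
to everything, whereas Double.compare defines a total order.

```java
public class FpCompareDemo {
    // Buggy pattern FindBugs flags: relational operators return false when
    // either operand is NaN, so NaN compares "equal" to every value.
    static int compareBroken(double a, double b) {
        return (a < b) ? -1 : ((a > b) ? 1 : 0);
    }

    // Correct: Double.compare defines a total order, placing NaN above all
    // other values and -0.0 below +0.0.
    static int compareTotal(double a, double b) {
        return Double.compare(a, b);
    }

    public static void main(String[] args) {
        System.out.println(compareBroken(Double.NaN, 1.0));     // prints "0"
        System.out.println(compareTotal(Double.NaN, 1.0) > 0);  // prints "true"
    }
}
```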

Re: Moving to Yetus 0.12.0

2020-04-19 Thread Akira Ajisaka
Updated the Precommit-(HADOOP|HDFS|YARN|MAPREDUCE)-Build jobs. The daily qbt
jobs are kept as-is. Now I'm testing the qbt jobs with Yetus 0.12.0 on the new
CI servers:
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java11-linux-x86 (actually
the test runs on Java 8; I'll rename it)

-Akira

On Sun, Apr 19, 2020 at 3:41 AM Akira Ajisaka  wrote:

> Hi folks,
>
> I updated Jenkinsfile to use Yetus 0.12.0 in GitHub PR.
> https://issues.apache.org/jira/browse/HADOOP-16944
>
> In addition, I'm updating the configs in the Jenkins jobs to use Yetus
> 0.12.0 in ASF JIRAs. I updated the settings of
> https://builds.apache.org/view/H-L/view/Hadoop/job/PreCommit-HADOOP-Build
> 
>  and
> testing in https://issues.apache.org/jira/browse/HADOOP-17000.
>
> If there is something wrong, please let me know and please feel free to
> revert the config.
>
> Regards,
> Akira
>