[jira] [Resolved] (YARN-10142) Distributed shell: add support for localization visibility

2020-02-18 Thread Peter Bacsko (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Bacsko resolved YARN-10142.
-
Resolution: Duplicate

> Distributed shell: add support for localization visibility
> --
>
> Key: YARN-10142
> URL: https://issues.apache.org/jira/browse/YARN-10142
> Project: Hadoop YARN
>  Issue Type: Improvement
>  Components: distributed-shell
>Reporter: Peter Bacsko
>Assignee: Peter Bacsko
>Priority: Major
>
> The localization visibility is currently hard-coded in DistributedShell:
> {noformat}
> FileStatus scFileStatus = fs.getFileStatus(dst);
> LocalResource scRsrc =
>     LocalResource.newInstance(
>         URL.fromURI(dst.toUri()),
>         LocalResourceType.FILE, LocalResourceVisibility.APPLICATION,
>         scFileStatus.getLen(), scFileStatus.getModificationTime());
> localResources.put(fileDstPath, scRsrc);
> {noformat}
> However, it would sometimes be useful to be able to change this to
> PRIVATE or PUBLIC, e.g. for testing purposes.
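>
> A minimal sketch of what the improvement could look like, assuming a hypothetical
> {{-localization_visibility}} client option (not part of the current code) whose value is
> mapped onto the {{LocalResourceVisibility}} enum; {{cliParser}} below stands for the
> client's existing command-line parser:
> {code:java}
> // Hypothetical option; defaults to today's behaviour when not given.
> String visibilityStr = cliParser.getOptionValue(
>     "localization_visibility", "APPLICATION");
> LocalResourceVisibility visibility =
>     LocalResourceVisibility.valueOf(visibilityStr.toUpperCase());
>
> FileStatus scFileStatus = fs.getFileStatus(dst);
> LocalResource scRsrc = LocalResource.newInstance(
>     URL.fromURI(dst.toUri()),
>     LocalResourceType.FILE, visibility,
>     scFileStatus.getLen(), scFileStatus.getModificationTime());
> localResources.put(fileDstPath, scRsrc);
> {code}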





Re: [DISCUSS] EOL Hadoop branch-2.8

2020-02-18 Thread Akira Ajisaka
Thanks Wei-Chiu for starting the discussion,

+1 for the EoL.

-Akira

On Tue, Feb 18, 2020 at 4:59 PM Ayush Saxena  wrote:

> Thanx Wei-Chiu for initiating this
> +1 for marking 2.8 EOL
>
> -Ayush
>
> > On 17-Feb-2020, at 11:14 PM, Wei-Chiu Chuang  wrote:
> >
> > The last Hadoop 2.8.x release, 2.8.5, was GA on September 15th 2018.
> >
> > It's been 17 months since the release and the community by and large has
> > moved up to 2.9/2.10/3.x.
> >
> > With Hadoop 3.3.0 over the horizon, is it time to start the EOL
> discussion
> > and reduce the number of active branches?
>


[jira] [Created] (YARN-10152) Fix findbugs warnings in hadoop-yarn-applications-mawo-core module

2020-02-18 Thread Akira Ajisaka (Jira)
Akira Ajisaka created YARN-10152:


 Summary: Fix findbugs warnings in 
hadoop-yarn-applications-mawo-core module
 Key: YARN-10152
 URL: https://issues.apache.org/jira/browse/YARN-10152
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Akira Ajisaka


{noformat}
    FindBugs :

       
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
       Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346]
       Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114]
       
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] {noformat}
Detail: 
[https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-applications-mawo_hadoop-yarn-applications-mawo-core-warnings.html]
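
The usual fixes are sketched below (hypothetical, not the actual patch): a null-safe, 
type-checked {{equals()}} for {{WorkerId}}, with the field comparison left as a 
placeholder; for {{TaskStatus}}, either override {{clone()}} via {{super.clone()}} or 
stop declaring {{Cloneable}}.
{code:java}
// Sketch of a null-safe, type-checked equals() for WorkerId (the field
// comparison is a placeholder for the class's real identifying fields).
@Override
public boolean equals(Object obj) {
  if (this == obj) {
    return true;
  }
  if (!(obj instanceof WorkerId)) {   // instanceof also rejects null
    return false;
  }
  WorkerId other = (WorkerId) obj;
  return this.toString().equals(other.toString());   // placeholder comparison
}
{code}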






[jira] [Resolved] (YARN-6506) Fix the code vulnerability of org.apache.hadoop.yarn.sls.SLSRunner.simulateInfoMap

2020-02-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-6506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved YARN-6506.
-
Resolution: Cannot Reproduce

Now there are no findbugs warnings in the module. Closing.

> Fix the code vulnerability of 
> org.apache.hadoop.yarn.sls.SLSRunner.simulateInfoMap
> --
>
> Key: YARN-6506
> URL: https://issues.apache.org/jira/browse/YARN-6506
> Project: Hadoop YARN
>  Issue Type: Sub-task
>  Components: scheduler-load-simulator
>Reporter: Yufei Gu
>Priority: Major
>
> It is reported by findbugs in YARN-6423.
> MS_MUTABLE_COLLECTION: Field is a mutable collection
> A mutable collection instance is assigned to a final static field, thus can 
> be changed by malicious code or by accident from another package. Consider 
> wrapping this field into Collections.unmodifiableSet/List/Map/etc. to avoid 
> this vulnerability.
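>
> The usual fix, sketched with assumed generic types (not the actual SLSRunner code),
> is to keep the mutable map private and expose only an unmodifiable view:
> {code:java}
> import java.util.Collections;
> import java.util.HashMap;
> import java.util.Map;
>
> // Keep the mutable collection private ...
> private static final Map<String, Object> simulateInfoMap = new HashMap<>();
>
> // ... and expose only a read-only view to other packages.
> public static Map<String, Object> getSimulateInfoMap() {
>   return Collections.unmodifiableMap(simulateInfoMap);
> }
> {code}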






[jira] [Resolved] (YARN-9981) Fix findbugs warning in timelineservice

2020-02-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-9981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved YARN-9981.
-
Target Version/s:   (was: 3.3.0)
  Resolution: Cannot Reproduce

Now there are no warnings in the module. Closing.

> Fix findbugs warning in timelineservice
> ---
>
> Key: YARN-9981
> URL: https://issues.apache.org/jira/browse/YARN-9981
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: timelineservice
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Priority: Minor
>
> Findbugs is complaining about this:
> {noformat}
> module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
>Boxed value is unboxed and then immediately reboxed in 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
>  byte[], byte[], KeyConverter, ValueConverter, boolean) At 
> ColumnRWHelper.java:then immediately reboxed in 
> org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
>  byte[], byte[], KeyConverter, ValueConverter, boolean) At 
> ColumnRWHelper.java:[line 335]
> {noformat}
> Let's fix it!
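>
> For reference, the pattern the warning is about and its usual fix look roughly like
> this (a generic illustration, not the actual ColumnRWHelper code):
> {code:java}
> import java.util.Map;
>
> // Flagged pattern: the boxed Long is unboxed into a primitive and then
> // immediately re-boxed by the map insertion.
> static void flagged(Map<Long, Object> results, Object key, Object value) {
>   long ts = (Long) key;     // unboxes ...
>   results.put(ts, value);   // ... and immediately re-boxes
> }
>
> // Fix: keep the boxed value, so there is no unbox/re-box round trip.
> static void fixed(Map<Long, Object> results, Object key, Object value) {
>   Long ts = (Long) key;
>   results.put(ts, value);
> }
> {code}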






[jira] [Resolved] (YARN-7576) Findbug warning for Resource exposing internal representation

2020-02-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved YARN-7576.
-
  Assignee: (was: Wangda Tan)
Resolution: Duplicate

Yes, we can close this.

> Findbug warning for Resource exposing internal representation
> -
>
> Key: YARN-7576
> URL: https://issues.apache.org/jira/browse/YARN-7576
> Project: Hadoop YARN
>  Issue Type: Bug
>  Components: api
>Affects Versions: 3.0.0
>Reporter: Jason Darrell Lowe
>Priority: Major
> Attachments: YARN-7576.001.patch
>
>
> Precommit builds are complaining about a findbugs warning:
> {noformat}
> EI  org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
> internal representation by returning Resource.resources
>   
> Bug type EI_EXPOSE_REP (click for details)
> In class org.apache.hadoop.yarn.api.records.Resource
> In method org.apache.hadoop.yarn.api.records.Resource.getResources()
> Field org.apache.hadoop.yarn.api.records.Resource.resources
> At Resource.java:[line 213]
> Returning a reference to a mutable object value stored in one of the object's 
> fields exposes the internal representation of the object.  If instances are 
> accessed by untrusted code, and unchecked changes to the mutable object would 
> compromise security or other important properties, you will need to do 
> something different. Returning a new copy of the object is better approach in 
> many situations.
> {noformat}
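>
> The two standard remedies are a findbugs exclusion entry (if exposing the array is
> intentional for performance) or a defensive copy, roughly like the sketch below
> (assumed field and types, not necessarily what the attached patch does):
> {code:java}
> import java.util.Arrays;
>
> // Defensive-copy variant: return a copy instead of the internal array.
> // Assumes the class stores a ResourceInformation[] field named "resources".
> public ResourceInformation[] getResources() {
>   return Arrays.copyOf(resources, resources.length);
> }
> {code}
> Whether copying is acceptable depends on how hot this getter is; an exclusion
> entry avoids the per-call cost.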






[jira] [Created] (YARN-10151) Disable Capacity Scheduler's move app between queue functionality

2020-02-18 Thread Wangda Tan (Jira)
Wangda Tan created YARN-10151:
-

 Summary: Disable Capacity Scheduler's move app between queue 
functionality
 Key: YARN-10151
 URL: https://issues.apache.org/jira/browse/YARN-10151
 Project: Hadoop YARN
  Issue Type: Bug
Reporter: Wangda Tan


Saw this happen in many clusters: the Capacity Scheduler cannot work correctly 
with the move-app-between-queues feature. It causes weird JMX issues, resource 
accounting issues, etc. In many cases it leaves the RM completely hung with 
negative available resources, and nothing can be allocated after that. We should 
turn off the CapacityScheduler's move-app-between-queues feature. (see: 
{{org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler#moveApplication}}
 )
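
As a rough illustration only — a guard at the start of the move logic, using a 
hypothetical configuration key that does not exist today and a simplified signature:
{code:java}
// Hypothetical switch; this property name is made up for illustration.
public static final String APP_MOVE_ENABLED =
    "yarn.scheduler.capacity.application-move.enabled";

// Simplified sketch of a guard that moveApplication() could call first.
static void checkMoveEnabled(Configuration conf) throws YarnException {
  if (!conf.getBoolean(APP_MOVE_ENABLED, false)) {
    throw new YarnException(
        "Moving applications between queues is disabled in CapacityScheduler");
  }
}
{code}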






[jira] [Created] (YARN-10150) Incorrect link at http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ReservationSystem.html

2020-02-18 Thread Srinivasa Meka (Jira)
Srinivasa Meka created YARN-10150:
-

 Summary: Incorrect link at 
http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ReservationSystem.html
 Key: YARN-10150
 URL: https://issues.apache.org/jira/browse/YARN-10150
 Project: Hadoop YARN
  Issue Type: Task
  Components: documentation
Affects Versions: 3.2.1
Reporter: Srinivasa Meka


Go to 
[http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ReservationSystem.html]

 

In the navigation breadcrumb displayed at the top of the page 
("[Apache|http://www.apache.org/] > [Hadoop|http://hadoop.apache.org/] > 
[Apache Hadoop 
YARN|http://hadoop.apache.org/docs/current/hadoop-yarn/index.html] > [Apache 
Hadoop 
3.2.1|http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/index.html]
 > Reservation System"), click one level up ("Apache Hadoop 3.2.1"). We get 
this error:
h1. Not Found

The requested URL was not found on this server.

 

Also, there is no navigation link in the left frame (under the "Yarn" section), 
so the user cannot tell how this page fits into the overall architecture and 
documentation flow. 

 






[jira] [Created] (YARN-10149) container-executor exits with 139 when the permissions of yarn log directory is improper

2020-02-18 Thread Tarun Parimi (Jira)
Tarun Parimi created YARN-10149:
---

 Summary: container-executor exits with 139 when the permissions of 
yarn log directory is improper
 Key: YARN-10149
 URL: https://issues.apache.org/jira/browse/YARN-10149
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Affects Versions: 3.1.0
Reporter: Tarun Parimi
Assignee: Tarun Parimi


container-executor fails with a segmentation fault and exit code 139 when the 
permissions of the YARN log directory are not set properly.

When running container-executor manually, we get the message below.

{code:java}
Error checking file stats for /hadoop/yarn/log Permission denied -1
{code}

But the exit code is 139, which corresponds to a segmentation fault. This is 
misleading, especially since the "Permission denied" message is not printed in 
the application logs or the NM logs.
 






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2020-02-18 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/

[Feb 17, 2020 6:55:10 AM] (github) HDFS-15173. RBF: Delete repeated 
configuration
[Feb 17, 2020 7:13:33 PM] (ayushsaxena) HADOOP-13666. Supporting rack exclusion 
in countNumOfAvailableNodes in
[Feb 17, 2020 10:06:34 PM] (stevel) HADOOP-15961. S3A committers: make sure 
there's regular progress()
[Feb 17, 2020 10:14:39 PM] (github) HADOOP-16759. FileSystem Javadocs to list 
what breaks on API changes




-1 overall


The following subsystems voted -1:
asflicense findbugs pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/fair-scheduler-invalid.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/resources/yarn-site-with-invalid-allocation-file-ref.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

FindBugs :

   module:hadoop-cloud-storage-project/hadoop-cos 
   Redundant nullcheck of dir, which is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:is known to be non-null in 
org.apache.hadoop.fs.cosn.BufferPool.createDir(String) Redundant null check at 
BufferPool.java:[line 66] 
   org.apache.hadoop.fs.cosn.CosNInputStream$ReadBuffer.getBuffer() may 
expose internal representation by returning CosNInputStream$ReadBuffer.buffer 
At CosNInputStream.java:by returning CosNInputStream$ReadBuffer.buffer At 
CosNInputStream.java:[line 87] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, File, 
byte[]):in org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFile(String, 
File, byte[]): new String(byte[]) At CosNativeFileSystemStore.java:[line 199] 
   Found reliance on default encoding in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long):in 
org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.storeFileWithRetry(String, 
InputStream, byte[], long): new String(byte[]) At 
CosNativeFileSystemStore.java:[line 178] 
   org.apache.hadoop.fs.cosn.CosNativeFileSystemStore.uploadPart(File, 
String, String, int) may fail to clean up java.io.InputStream Obligation to 
clean up resource created at CosNativeFileSystemStore.java:fail to clean up 
java.io.InputStream Obligation to clean up resource created at 
CosNativeFileSystemStore.java:[line 252] is not discharged 

Failed junit tests :

   hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped 
   hadoop.hdfs.server.federation.router.TestRouterFaultTolerant 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-compile-cc-root.txt
  [8.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-compile-javac-root.txt
  [428K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/diff-checkstyle-root.txt
  [16M]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1414/artifact/out/pathlen.txt
  [12K]

   pylint:

   

Apache Hadoop qbt Report: branch2.10+JDK7 on Linux/x86

2020-02-18 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/

[Feb 17, 2020 9:06:00 PM] (kihwal) Revert "HDFS-11156. Add new op 
GETFILEBLOCKLOCATIONS to WebHDFS REST
[Feb 17, 2020 9:49:48 PM] (kihwal) HDFS-12459. Fix revert: Add new op 
GETFILEBLOCKLOCATIONS to WebHDFS REST




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 

Failed junit tests :

   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem 
   hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem 
   hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-compile-cc-root-jdk1.8.0_242.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-compile-javac-root-jdk1.8.0_242.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-patch-shellcheck.txt
  [56K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/whitespace-tabs.txt
  [1.3M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_242.txt
  [1.1M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [232K]
   
https://builds.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86/600/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
  [32K]
   

[jira] [Created] (YARN-10148) Add Unit test for queue ACL for both FS and CS

2020-02-18 Thread Kinga Marton (Jira)
Kinga Marton created YARN-10148:
---

 Summary: Add Unit test for queue ACL for both FS and CS
 Key: YARN-10148
 URL: https://issues.apache.org/jira/browse/YARN-10148
 Project: Hadoop YARN
  Issue Type: Improvement
  Components: scheduler
Reporter: Kinga Marton
Assignee: Kinga Marton


Add some unit tests covering the queue ACL evaluation for both FS and CS.






[jira] [Resolved] (YARN-10117) FS-CS converter: adjust queue ACL to have the same output with CS as for FS has

2020-02-18 Thread Kinga Marton (Jira)


 [ 
https://issues.apache.org/jira/browse/YARN-10117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kinga Marton resolved YARN-10117.
-
Resolution: Not A Problem

> FS-CS converter: adjust queue ACL to have the same output with CS as for FS 
> has
> ---
>
> Key: YARN-10117
> URL: https://issues.apache.org/jira/browse/YARN-10117
> Project: Hadoop YARN
>  Issue Type: Sub-task
>Reporter: Kinga Marton
>Priority: Major
>
> Both FS and CS seem to check the ACLs recursively: from the leaf via the 
> parent(s) up to the root (inclusive). However, there are some differences in 
> how they are evaluated, which causes the two schedulers to produce different 
> results. Some examples:
> ||Tested scenario||FS output||CS output||
> |Root - “ ”
>     C - *
>         C1 - *|Denied by root ACL|OK|
> |Root - “ ”
>     C - “ ”
>         C1 - “ ”|Denied by root ACL|OK|
> |Root - “ ”
>     C - *
>         C1 - “ ”|Denied by root ACL|OK|
> |Root - “ ”
>     C - “ ”
>         C1 - *|Denied by root ACL|OK|
> Note: I have set the same value for both the submit-applications and 
> administer-queue ACLs.
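>
> For context, both schedulers conceptually walk the queue hierarchy from the leaf up
> to the root; the sketch below (hypothetical {{Queue}} accessors, not actual FS/CS code)
> only illustrates the walk. The per-level combination rule, plus which additional ACLs
> (e.g. the admin ACL) are consulted, is exactly where the two schedulers diverge and
> what produces the table above:
> {code:java}
> // Hypothetical leaf-to-root ACL walk; "Queue", getParent() and getAcl()
> // are assumed accessors returning an AccessControlList per ACL type.
> static boolean hasAccess(Queue queue, UserGroupInformation user,
>     QueueACL aclType, boolean everyLevelMustAllow) {
>   for (Queue q = queue; q != null; q = q.getParent()) {
>     boolean allowedHere = q.getAcl(aclType).isUserAllowed(user);
>     if (everyLevelMustAllow && !allowedHere) {
>       return false;             // "all ancestors must allow" combination
>     }
>     if (!everyLevelMustAllow && allowedHere) {
>       return true;              // "any ancestor may allow" combination
>     }
>   }
>   return everyLevelMustAllow;
> }
> {code}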






[jira] [Created] (YARN-10147) FPGA plugin can't find the localized aocx file

2020-02-18 Thread Peter Bacsko (Jira)
Peter Bacsko created YARN-10147:
---

 Summary: FPGA plugin can't find the localized aocx file
 Key: YARN-10147
 URL: https://issues.apache.org/jira/browse/YARN-10147
 Project: Hadoop YARN
  Issue Type: Bug
  Components: nodemanager
Reporter: Peter Bacsko
Assignee: Peter Bacsko


There is a bug in the code of the FPGA plugin that is supposed to find the 
localized "aocx" file:

{noformat}
...
if (localizedResources != null) {
  Optional<Path> aocxPath = localizedResources
  .keySet()
  .stream()
  .filter(path -> matchesIpid(path, id))
  .findFirst();

  if (aocxPath.isPresent()) {
ipFilePath = aocxPath.get().toUri().toString();
LOG.debug("Found: " + ipFilePath);
  }
} else {
  LOG.warn("Localized resource is null!");
}

return ipFilePath;
  }

  private boolean matchesIpid(Path p, String id) {
return p.getName().toLowerCase().equals(id.toLowerCase())
&& p.getName().endsWith(".aocx");
  }
{noformat}

The method {{matchesIpid()}} works incorrectly: the {{id}} argument is the 
expected file name, but without the extension. Therefore the {{equals()}} 
comparison is always false.
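
A possible fix, as a sketch (not necessarily the committed patch), is to append the 
extension to the expected id before comparing, case-insensitively:

{code:java}
// Sketch of a corrected matcher: the id comes without the extension, so
// compare it against the file name with ".aocx" appended.
private boolean matchesIpid(Path p, String id) {
  String fileName = p.getName().toLowerCase();
  String expected = id.toLowerCase() + ".aocx";
  return fileName.equals(expected);
}
{code}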






[jira] [Created] (YARN-10145) Error in creating hbase tables

2020-02-18 Thread yehuanhuan (Jira)
yehuanhuan created YARN-10145:
-

 Summary: Error in creating hbase tables
 Key: YARN-10145
 URL: https://issues.apache.org/jira/browse/YARN-10145
 Project: Hadoop YARN
  Issue Type: Bug
  Components: ATSv2
Affects Versions: 3.2.1
Reporter: yehuanhuan


When using the command (bin/hadoop 
org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator 
-create) to create the timeline service schema, I get the following error.
{panel:title=ERROR}
2020-02-18 17:16:53,694 ERROR storage.TimelineSchemaCreator: Error in creating 
hbase tables: 
org.apache.hadoop.hbase.DoNotRetryIOException: java.lang.IllegalAccessError: 
tried to access method shaded.com.google.common.base.Stopwatch.<init>()V from 
class org.apache.hadoop.hbase.zookeeper.MetaTableLocator
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:239)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
at 
org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
at 
org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
at 
org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
at 
org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:797)
at 
org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
at 
org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at 
org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:406)
at 
org.apache.hadoop.yarn.server.timelineservice.storage.entity.EntityTableRW.createTable(EntityTableRW.java:87)
at 
org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator.createAllTables(TimelineSchemaCreator.java:308)
at 
org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator.createAllSchemas(TimelineSchemaCreator.java:278)
at 
org.apache.hadoop.yarn.server.timelineservice.storage.TimelineSchemaCreator.main(TimelineSchemaCreator.java:147)
Caused by: java.lang.IllegalAccessError: tried to access method 
shaded.com.google.common.base.Stopwatch.<init>()V from class 
org.apache.hadoop.hbase.zookeeper.MetaTableLocator
at 
org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:604)
at 
org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:588)
at 
org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:561)
at 
org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1211)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1178)
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:305)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
{panel}








Re: [DISCUSS] EOL Hadoop branch-2.8

2020-02-18 Thread Ayush Saxena
Thanx Wei-Chiu for initiating this
+1 for marking 2.8 EOL

-Ayush

> On 17-Feb-2020, at 11:14 PM, Wei-Chiu Chuang  wrote:
> 
> The last Hadoop 2.8.x release, 2.8.5, was GA on September 15th 2018.
> 
> It's been 17 months since the release and the community by and large has
> moved up to 2.9/2.10/3.x.
> 
> With Hadoop 3.3.0 over the horizon, is it time to start the EOL discussion
> and reduce the number of active branches?
