Any thoughts making Submarine a separate Apache project?

2019-07-10 Thread Xun Liu
Hi all,

This is Xun Liu, a contributor to the Submarine project, which runs deep
learning workloads alongside big data workloads on Hadoop clusters.

A number of integrations of Submarine with other projects, such as Apache
Zeppelin, TonY, and Azkaban, are finished or in progress. The next step for
Submarine is to integrate with more projects such as Apache Arrow, Redis,
and MLflow; to handle end-to-end machine learning use cases such as model
serving, notebook management, and advanced training optimizations (auto
parameter tuning, memory cache optimizations for large training datasets,
etc.); and to run on other platforms such as Kubernetes, or natively in the
cloud. LinkedIn also wants to donate the TonY project to Apache so that we
can put Submarine and TonY together in the same codebase (page #30:
https://www.slideshare.net/xkrogen/hadoop-meetup-jan-2019-tony-tensorflow-on-yarn-and-beyond#30
).

This expands the scope of the original Submarine project in exciting new
ways. Toward that end, would it make sense to create a separate Submarine
project at Apache? This could speed adoption of Submarine and allow
Submarine to grow into a full-blown machine learning platform.

There will be lots of technical details to work out, but any initial
thoughts on this?

Best Regards,
Xun Liu


Re: new committer: Gabor Bota

2019-07-10 Thread Aaron Fabbri
Congrats Gabor! Have really enjoyed working with you, looking forward to
more good stuff

On Thu, Jul 4, 2019 at 4:31 AM Gabor Bota wrote:

> Thank you!
>
> On Thu, Jul 4, 2019 at 8:50 AM Szilard Nemeth wrote:
>
> > Congrats, Gabor!
> >
> > On Tue, Jul 2, 2019, 01:36 Sean Mackrory wrote:
> >
> > > The Project Management Committee (PMC) for Apache Hadoop
> > > has invited Gabor Bota to become a committer and we are pleased
> > > to announce that he has accepted.
> > >
> > > Gabor has been working on the S3A file-system, especially on
> > > the robustness and completeness of S3Guard to help deal with
> > > inconsistency in object storage. I'm excited to see his work
> > > with the community continue!
> > >
> > > Being a committer enables easier contribution to the
> > > project since there is no need to go via the patch
> > > submission process. This should enable better productivity.
> > >
> >
>


Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-07-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/378/

[Jul 9, 2019 3:54:37 PM] (stack) Backport HDFS-3246,HDFS-14111 ByteBuffer pread 
interface to branch-2.9
[Jul 9, 2019 3:57:57 PM] (stack) Revert "Backport HDFS-3246,HDFS-14111 
ByteBuffer pread interface to
[Jul 9, 2019 3:58:16 PM] (stack) HDFS-14483 Backport HDFS-3246,HDFS-14111 
ByteBuffer pread interface to




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/empty-configuration.xml
 
   hadoop-tools/hadoop-azure/src/config/checkstyle-suppressions.xml 
   hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/public/crossdomain.xml 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/src/main/webapp/public/crossdomain.xml
 

FindBugs :

   module:hadoop-common-project/hadoop-common 
   Class org.apache.hadoop.fs.GlobalStorageStatistics defines non-transient 
non-serializable instance field map In GlobalStorageStatistics.java:instance 
field map In GlobalStorageStatistics.java 
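(For context: a minimal Java sketch of the pattern this FindBugs warning
(SE_BAD_FIELD) flags. The class below is illustrative only, not the actual
Hadoop source.)

    import java.io.Serializable;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // The field's declared type (Map) is not Serializable, so FindBugs cannot
    // prove the runtime value is; serializing an instance may throw
    // NotSerializableException. Marking the field transient and rebuilding it
    // after deserialization is one conventional fix.
    public class StorageStatisticsSketch implements Serializable {
      private static final long serialVersionUID = 1L;
      private final Map<String, Long> map = new ConcurrentHashMap<>(); // flagged shape
    }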

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase/hadoop-yarn-server-timelineservice-hbase-client
 
   Boxed value is unboxed and then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:then immediately reboxed in 
org.apache.hadoop.yarn.server.timelineservice.storage.common.ColumnRWHelper.readResultsWithTimestamps(Result,
 byte[], byte[], KeyConverter, ValueConverter, boolean) At 
ColumnRWHelper.java:[line 335] 
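(For context: a minimal sketch of the box/unbox/rebox pattern
(BX_UNBOXING_IMMEDIATELY_REBOXED) this warning refers to. Illustrative only,
not the timeline-service source.)

    import java.util.Map;

    class ReboxSketch {
      static Long read(Map<String, Long> results, String key) {
        // Flagged shape: get() returns a boxed Long, longValue() unboxes it,
        // and assigning back to a Long immediately reboxes it.
        Long flagged = results.get(key).longValue();

        // Fix: keep the boxed value as-is (or use a primitive long local if a
        // primitive is genuinely needed).
        Long fixed = results.get(key);
        return fixed;
      }
    }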

Failed junit tests :

   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys 
   hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.registry.secure.TestSecureLogins 
   hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/378/artifact/out/diff-compile-cc-root-jdk1.7.0_95.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/378/artifact/out/diff-compile-javac-root-jdk1.7.0_95.txt
  [328K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/378/artifact/out/diff-compile-cc-root-jdk1.8.0_212.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/378/artifact/out/diff-compile-javac-root-jdk1.8.0_212.txt
  [308K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/378/artifact/out/diff-checkstyle-root.txt
  [16M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/378/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/378/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/378/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/378/artifact/out/diff-patch-shellcheck.txt
  [72K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/378/artifact/out/diff-patch-shelldocs.txt
  [8.0K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/378/artifact/out/whitespace-eol.txt
  [12M]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/378/artifact/out/whitespace-tabs.txt
  [1.2M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/378/artifact/out/xml.txt
  [12K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/378/artifact/out/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/378/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-client-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/378/artifact/out/diff-javadoc-javadoc-root-jdk1.7.0_95.txt
  [16K]
   
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/378/artifact/out/diff-javadoc-javadoc-root-jdk1.8.0_2

Re: Any thoughts making Submarine a separate Apache project?

2019-07-10 Thread Wanqiang Ji
+1. This is a fantastic recommendation. I can see the community growing fast
and collaborating well; Submarine can be an independent project now. Thanks
to all contributors.

FYI,
Wanqiang Ji

On Wed, Jul 10, 2019 at 3:34 PM Xun Liu wrote:

> Hi all,
>
> This is Xun Liu, a contributor to the Submarine project, which runs deep
> learning workloads alongside big data workloads on Hadoop clusters.
>
> A number of integrations of Submarine with other projects, such as Apache
> Zeppelin, TonY, and Azkaban, are finished or in progress. The next step for
> Submarine is to integrate with more projects such as Apache Arrow, Redis,
> and MLflow; to handle end-to-end machine learning use cases such as model
> serving, notebook management, and advanced training optimizations (auto
> parameter tuning, memory cache optimizations for large training datasets,
> etc.); and to run on other platforms such as Kubernetes, or natively in the
> cloud. LinkedIn also wants to donate the TonY project to Apache so that we
> can put Submarine and TonY together in the same codebase (page #30:
>
> https://www.slideshare.net/xkrogen/hadoop-meetup-jan-2019-tony-tensorflow-on-yarn-and-beyond#30
> ).
>
> This expands the scope of the original Submarine project in exciting new
> ways. Toward that end, would it make sense to create a separate Submarine
> project at Apache? This could speed adoption of Submarine and allow
> Submarine to grow into a full-blown machine learning platform.
>
> There will be lots of technical details to work out, but any initial
> thoughts on this?
>
> Best Regards,
> Xun Liu
>


Re: new committer: Gabor Bota

2019-07-10 Thread Antal Mihalyi
Awesome news, congratulations, Gabor!

Anti

On Wed, Jul 10, 2019 at 12:04 PM Aaron Fabbri wrote:

> Congrats Gabor! Have really enjoyed working with you, looking forward to
> more good stuff
>
> On Thu, Jul 4, 2019 at 4:31 AM Gabor Bota wrote:
>
> > Thank you!
> >
> > On Thu, Jul 4, 2019 at 8:50 AM Szilard Nemeth wrote:
> >
> > > Congrats, Gabor!
> > >
> > > On Tue, Jul 2, 2019, 01:36 Sean Mackrory wrote:
> > >
> > > > The Project Management Committee (PMC) for Apache Hadoop
> > > > has invited Gabor Bota to become a committer and we are pleased
> > > > to announce that he has accepted.
> > > >
> > > > Gabor has been working on the S3A file-system, especially on
> > > > the robustness and completeness of S3Guard to help deal with
> > > > inconsistency in object storage. I'm excited to see his work
> > > > with the community continue!
> > > >
> > > > Being a committer enables easier contribution to the
> > > > project since there is no need to go via the patch
> > > > submission process. This should enable better productivity.
> > > >
> > >
> >
>


-- 
*Antal Mihályi* | Engineering Manager
e. amiha...@cloudera.com
cloudera.com


Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-07-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1193/

[Jul 9, 2019 3:12:55 AM] (msingh) HDDS-1750. Add block allocation metrics for 
pipelines in SCM.
[Jul 9, 2019 3:24:12 AM] (aengineer) HDDS-1550. MiniOzoneCluster is not 
shutting down all the threads during
[Jul 9, 2019 4:06:50 AM] (arp7) HDDS-1705. Recon: Add estimatedTotalCount to 
the response of containers
[Jul 9, 2019 11:22:00 AM] (elek) HDDS-1717. MR Job fails as 
OMFailoverProxyProvider has dependency
[Jul 9, 2019 3:21:16 PM] (elek) HDDS-1742. Merge ozone-perf and ozonetrace 
example clusters
[Jul 9, 2019 5:47:50 PM] (msingh) HDDS-1718. Increase Ratis Leader election 
timeout default. Contributed
[Jul 9, 2019 9:43:55 PM] (xyao) HDDS-1586. Allow Ozone RPC client to read with 
topology awareness.




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore
 
   Unread field:TimelineEventSubDoc.java:[line 56] 
   Unread field:TimelineMetricSubDoc.java:[line 44] 
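(For context: a minimal sketch of the unread-field pattern (URF_UNREAD_FIELD)
behind these two warnings. Illustrative only, not the document-store source.)

    // The field is written in the constructor but never read afterwards, so
    // it is dead weight; either use it (e.g. in a getter or equals()) or
    // remove it.
    class TimelineEventSubDocSketch {
      private boolean valid;  // set below, never read -> flagged

      TimelineEventSubDocSketch(boolean valid) {
        this.valid = valid;
      }
    }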

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 
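(For context: minimal Java sketches of the idioms these three warnings ask
for. Class names echo the warnings; the bodies are illustrative, not the
actual MaWo source.)

    // CN_IDIOM: implementing Cloneable without overriding clone() gives
    // callers no usable copy operation. The conventional override:
    class TaskStatusSketch implements Cloneable {
      @Override
      public TaskStatusSketch clone() {
        try {
          return (TaskStatusSketch) super.clone();
        } catch (CloneNotSupportedException e) {
          throw new AssertionError(e);  // unreachable: we implement Cloneable
        }
      }
    }

    // A null-safe, type-safe equals(): instanceof rejects both null and
    // foreign types before any cast, addressing the other two warnings.
    class WorkerIdSketch {
      private final String hostname;

      WorkerIdSketch(String hostname) {
        this.hostname = hostname;
      }

      @Override
      public boolean equals(Object obj) {
        if (this == obj) {
          return true;
        }
        if (!(obj instanceof WorkerIdSketch)) {  // false for null too
          return false;
        }
        return hostname.equals(((WorkerIdSketch) obj).hostname);
      }

      @Override
      public int hashCode() {
        return hostname.hashCode();
      }
    }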

FindBugs :

   module:hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-infra 
   org.apache.hadoop.tools.dynamometer.Client.addFileToZipRecursively(File, 
File, ZipOutputStream) may fail to clean up java.io.InputStream on checked 
exception Obligation to clean up resource created at Client.java:to clean up 
java.io.InputStream on checked exception Obligation to clean up resource 
created at Client.java:[line 863] is not discharged 
   Exceptional return value of java.io.File.mkdirs() ignored in 
org.apache.hadoop.tools.dynamometer.DynoInfraUtils.fetchHadoopTarball(File, 
String, Configuration, Logger) At DynoInfraUtils.java:ignored in 
org.apache.hadoop.tools.dynamometer.DynoInfraUtils.fetchHadoopTarball(File, 
String, Configuration, Logger) At DynoInfraUtils.java:[line 142] 
   Found reliance on default encoding in 
org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.run(String[]):in 
org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.run(String[]): new 
java.io.InputStreamReader(InputStream) At SimulatedDataNodes.java:[line 149] 
   org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.run(String[]) 
invokes System.exit(...), which shuts down the entire virtual machine At 
SimulatedDataNodes.java:down the entire virtual machine At 
SimulatedDataNodes.java:[line 123] 
   org.apache.hadoop.tools.dynamometer.SimulatedDataNodes.run(String[]) may 
fail to close stream At SimulatedDataNodes.java:stream At 
SimulatedDataNodes.java:[line 149] 
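(For context: a hedged sketch of the resource-handling fixes these warnings
usually call for. Method and class names are illustrative, not the
Dynamometer source.)

    import java.io.File;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;

    class ResourceHandlingSketch {
      // try-with-resources closes the stream even when a checked exception is
      // thrown mid-read, discharging the cleanup obligation from the first
      // and last warnings.
      static long countBytes(File file) throws IOException {
        long total = 0;
        try (InputStream in = new FileInputStream(file)) {
          byte[] buf = new byte[8192];
          int n;
          while ((n = in.read(buf)) != -1) {
            total += n;
          }
        }
        return total;
      }

      // File.mkdirs() signals failure via its boolean return, not an
      // exception, so an ignored return silently continues without the
      // directory.
      static void ensureDir(File dir) throws IOException {
        if (!dir.mkdirs() && !dir.isDirectory()) {
          throw new IOException("Could not create directory " + dir);
        }
      }

      // An explicit charset avoids the platform-dependent default encoding;
      // similarly, a Tool-style run(String[]) should return a nonzero exit
      // code rather than call System.exit().
      static InputStreamReader openUtf8(InputStream in) {
        return new InputStreamReader(in, StandardCharsets.UTF_8);
      }
    }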

FindBugs :

   module:hadoop-tools/hadoop-dynamometer/hadoop-dynamometer-blockgen 
   Self assignment of field BlockInfo.replication in new 
org.apache.hadoop.tools.dynamometer.blockgenerator.BlockInfo(BlockInfo) At 
BlockInfo.java:in new 
org.apache.hadoop.tools.dynamometer.blockgenerator.BlockInfo(BlockInfo) At 
BlockInfo.java:[line 78] 
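(For context: a minimal sketch of the self-assignment bug pattern
(SA_FIELD_SELF_ASSIGNMENT). Field name from the warning; illustrative only.)

    // Inside a copy constructor, an unqualified "replication" refers to this
    // object's own field, so "this.replication = replication" copies nothing.
    class BlockInfoSketch {
      private short replication;

      BlockInfoSketch(BlockInfoSketch other) {
        // Buggy form:  this.replication = replication;   (self-assignment)
        this.replication = other.replication;          // correct copy
      }
    }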

Failed junit tests :

   hadoop.hdfs.server.datanode.TestBPOfferService 
   hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes 
   hadoop.hdfs.TestMultipleNNPortQOP 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup 
   hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken 
   hadoop.ozone.freon.TestRandomKeyGenerator 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1193/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1193/artifact/out/diff-compile-javac-root.txt
  [336K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1193/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apach

[jira] [Created] (HADOOP-16420) S3A returns 400 "bad request" on a single path within an S3 bucket

2019-07-10 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-16420:
---

 Summary: S3A returns 400 "bad request" on a single path within an 
S3 bucket
 Key: HADOOP-16420
 URL: https://issues.apache.org/jira/browse/HADOOP-16420
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.0
Reporter: Steve Loughran


Filing this as "who knows?"; it surfaced during testing. Notably, the previous 
testing was playing with SSE-C, if that makes a difference: it could be that 
there's a marker entry encrypted with SSE-C that is now being rejected by a 
different run.

Somehow, with one set of credentials I can work with all paths in a directory 
except reading the dir marker /fork-0001/; try that and a 400 Bad Request 
comes back. The AWS console views the path as an empty dir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16420) S3A returns 400 "bad request" on a single path within an S3 bucket

2019-07-10 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16420.
-
Resolution: Cannot Reproduce

> S3A returns 400 "bad request" on a single path within an S3 bucket
> --
>
> Key: HADOOP-16420
> URL: https://issues.apache.org/jira/browse/HADOOP-16420
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: out.txt
>
>
> Filing this as "who knows?"; it surfaced during testing. Notably, the 
> previous testing was playing with SSE-C, if that makes a difference: it could 
> be that there's a marker entry encrypted with SSE-C that is now being 
> rejected by a different run.
> Somehow, with one set of credentials I can work with all paths in a 
> directory except reading the dir marker /fork-0001/; try that and a 400 Bad 
> Request comes back. The AWS console views the path as an empty dir.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16393) S3Guard init command uses global settings, not those of target bucket

2019-07-10 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16393.
-
   Resolution: Fixed
 Assignee: Steve Loughran
Fix Version/s: 3.3.0

> S3Guard init command uses global settings, not those of target bucket
> -
>
> Key: HADOOP-16393
> URL: https://issues.apache.org/jira/browse/HADOOP-16393
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> If you call {{s3guard init s3a://name/}}, the custom bucket options under 
> fs.s3a.bucket.name are not picked up; the global values are used instead.
> Fix: take the bucket name, use it to evaluate the per-bucket properties, and 
> patch the configuration used for the init command.
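(For context: a rough sketch of the kind of per-bucket property propagation
the fix describes. The method name and structure are illustrative; Hadoop's
actual patch may differ.)

    import org.apache.hadoop.conf.Configuration;

    class BucketConfigSketch {
      // Copy every fs.s3a.bucket.<name>.* override onto the corresponding
      // fs.s3a.* key before running the init command, so the command sees the
      // target bucket's settings rather than the globals.
      static Configuration patchForBucket(Configuration conf, String bucket) {
        Configuration patched = new Configuration(conf);
        String prefix = "fs.s3a.bucket." + bucket + ".";
        for (java.util.Map.Entry<String, String> e : conf) {
          if (e.getKey().startsWith(prefix)) {
            patched.set("fs.s3a." + e.getKey().substring(prefix.length()),
                e.getValue());
          }
        }
        return patched;
      }
    }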



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: Incorrect NOTICE files in TLP releases

2019-07-10 Thread Akira Ajisaka
Hi Vinod,

This issue is now tracked by https://issues.apache.org/jira/browse/HADOOP-15958

Thanks,
Akira

On Fri, Jul 5, 2019 at 1:29 PM Vinod Kumar Vavilapalli
 wrote:
>
> A bit of an old email, but I want to make sure this isn't missed.
>
> Has anyone looked into this concern?
>
> Ref https://issues.apache.org/jira/browse/ROL-2138 
> .
>
> Thanks
> +Vinod
>
> > Begin forwarded message:
> >
> > From: sebb 
> > Subject: Incorrect NOTICE files in TLP releases
> > Date: June 21, 2019 at 2:34:17 AM GMT+5:30
> > To: "bo...@apache.org Board" 
> > Reply-To: bo...@apache.org
> >
> > To whom it may concern:
> >
> > I had occasion to download the source for Roller, and happened to look
> > at the NOTICE file.
> > It does not conform to ASF policies, so I raised ROL-2138.
> >
> > One of the replies asked how to manage different NOTICE and LICENSE
> > files for binary and source releases, and pointed out that Hadoop and
> > Karaf don't appear to have multiple copies of the files.
> >
> > So I had a look at Hadoop and Karaf; their NOTICE files are also
> > non-standard, and it looks like Kafka does not have a NOTICE file in
> > the source bundle.
> >
> > I suspect these are not the only projects with non-conformant NOTICE files.
> > The LICENSE files are also likely to be incorrect (I have not checked).
> >
> > Sebb.
>

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-07-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1194/

[Jul 10, 2019 2:19:36 AM] (msingh) HDDS-1603. Handle Ratis Append Failure in 
Container State Machine.
[Jul 10, 2019 2:53:34 AM] (yqlin) HDFS-14632. Reduce useless 
#getNumLiveDataNodes call in SafeModeMonitor.
[Jul 10, 2019 11:15:55 AM] (msingh) HDDS-1748. Error message for 3 way commit 
failure is not verbose.
[Jul 10, 2019 11:22:51 AM] (elek) HDDS-1764. Fix hidden errors in acceptance 
tests
[Jul 10, 2019 1:31:28 PM] (elek) HDDS-1525. Mapreduce failure when using Hadoop 
2.7.5
[Jul 10, 2019 4:43:58 PM] (arp7) HDDS-1778. Fix existing blockade tests. (#1068)
[Jul 10, 2019 4:59:11 PM] (xkrogen) HDFS-14622. [Dynamometer] Update XML 
FsImage parsing logic to ignore
[Jul 10, 2019 6:03:58 PM] (aengineer) HDDS-1611. Evaluate ACL on volume bucket 
key and prefix to authorize
[Jul 10, 2019 6:11:52 PM] (inigoiri) HDFS-12703. Exceptions are fatal to 
decommissioning monitor. Contributed
[Jul 10, 2019 6:28:18 PM] (aengineer) HDDS-1611.[Addendum] Evaluate ACL on 
volume bucket key and prefix to
[Jul 10, 2019 7:57:02 PM] (stevel) HADOOP-16393. S3Guard init command uses 
global settings, not those of
[Jul 10, 2019 9:15:33 PM] (eyang) YARN-9660. Update support documentation for 
Docker on YARN.

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-07-10 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/379/

[Jul 10, 2019 2:58:59 AM] (yqlin) HDFS-14632. Reduce useless 
#getNumLiveDataNodes call in SafeModeMonitor.

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org