Re: [VOTE] Release Hadoop-3.1.3-RC0

2019-09-17 Thread Rohith Sharma K S
+1 (binding)
- verified sha512 for all the artifacts
- built from source and installed a pseudo cluster. Ran sample MR jobs and
distributed shell.

-Rohith Sharma K S


Re: Compile 64bit native Hadoop 3.2.0 binaries for AIX 7.1

2019-09-17 Thread Matt Foley
Hi Candy,
It’s possible that you’ve already tried the below, but one can’t tell from your 
message, so here goes:

1. Presumably you’ve read 
https://hadoop.apache.org/docs/r3.2.0/hadoop-project-dist/hadoop-common/NativeLibraries.html
Admittedly it doesn’t answer your question, but it does point out several 
important prerequisites that must be met.

2. This isn’t entirely in my area of expertise, but it looks like a key file is 
https://github.com/apache/hadoop/blob/branch-3.2/hadoop-common-project/hadoop-common/HadoopCommon.cmake
It sets up CMake configurations shared by all Native components.  If you search 
for “64” and “32” within that file, you’ll see example setups for several 
different architectures.  I would infer that you need to create corresponding 
declarations for AIX that set 64-bit mode.

3. The most direct answer to your question, for GCC-based toolchains, is that
“-m64” should be used for both the compiler and linker flags, but you’ll have to
look up the proper flags for the compiler/linker you’re actually using, as well
as take care of any other details needed by your toolchain.  If your toolchain
is GCC-based, then the Linux setup in HadoopCommon.cmake is likely to be a good
example.
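
To make that concrete, here is a rough sketch of the kind of AIX branch one
might add to HadoopCommon.cmake. This is illustrative only, not the upstream
file; note that GCC on AIX spells the 64-bit option “-maix64” rather than
“-m64”, and IBM XL uses “-q64”, so check your toolchain’s documentation:

  # Hypothetical AIX section, mirroring the existing per-platform setups.
  if(CMAKE_SYSTEM_NAME STREQUAL "AIX")
      # Compile all native objects in 64-bit mode.
      set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -maix64")
      set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -maix64")
      # Tell the AIX linker to produce/accept 64-bit (XCOFF64) objects.
      set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -maix64 -Wl,-b64")
  endif()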

Hope this helps,
—Matt


On Sep 17, 2019, at 7:19 PM, Candy Ho  wrote:

Dear all,

I have mostly implemented the various 'fixes' and peculiarities of the
Unix-based AIX systems required for Hadoop, and now I'm stuck on the following
compile error:

Linking C shared library target/usr/local/lib/libhadoop.a
ld: 0711-736 ERROR: Input file /usr/java8_64/jre/lib/ppc64/j9vm/libjvm.so
XCOFF64 object files are not allowed in 32-bit mode
collect2: error: ld returned 8 exit status
make: 1254-004 The error code from the last command is 1

I'm using 64-bit Java 8.

My main question is: how do I force a 64-bit build?

Thanks
Candy Ho


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 3.2.1 - RC0

2019-09-17 Thread zhankun tang
+1 (non-binding).
Installed and verified it by running several Spark jobs and DS jobs.

BR,
Zhankun

On Wed, 18 Sep 2019 at 08:05, Naganarasimha Garla <
naganarasimha...@apache.org> wrote:

> Verified the source and the binary tar and the sha512 checksums
> Installed and verified basic Hadoop operations (ran a few MR tasks)
>
> +1.
>
> Thanks,
> + Naga
>
> On Wed, Sep 18, 2019 at 1:32 AM Anil Sadineni 
> wrote:
>
> > +1 (non-binding)
> >
> > On Tue, Sep 17, 2019 at 9:55 AM Santosh Marella 
> wrote:
> >
> > > +1 (non-binding)
> > >
> > > On Wed, Sep 11, 2019 at 12:26 AM Rohith Sharma K S <
> > > rohithsharm...@apache.org> wrote:
> > >
> > > > Hi folks,
> > > >
> > > > I have put together a release candidate (RC0) for Apache Hadoop
> 3.2.1.
> > > >
> > > > The RC is available at:
> > > > http://home.apache.org/~rohithsharmaks/hadoop-3.2.1-RC0/
> > > >
> > > > The RC tag in git is release-3.2.1-RC0:
> > > > https://github.com/apache/hadoop/tree/release-3.2.1-RC0
> > > >
> > > >
> > > > The maven artifacts are staged at
> > > >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1226/
> > > >
> > > > You can find my public key at:
> > > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > > >
> > > > This vote will run for 7 days (5 weekdays), ending on 18th Sept at
> 11:59
> > > pm
> > > > PST.
> > > >
> > > > I have done testing with a pseudo cluster and distributed shell job.
> My
> > > +1
> > > > to start.
> > > >
> > > > Thanks & Regards
> > > > Rohith Sharma K S
> > > >
> > >
> >
> >
> > --
> > Thanks & Regards,
> > Anil Sadineni
> > Solutions Architect, Optlin Inc
> > Ph: 571-438-1974 | www.optlin.com
> >
>


[jira] [Created] (HADOOP-16584) S3A Test failures when S3Guard is not enabled

2019-09-17 Thread Siddharth Seth (Jira)
Siddharth Seth created HADOOP-16584:
---

 Summary: S3A Test failures when S3Guard is not enabled
 Key: HADOOP-16584
 URL: https://issues.apache.org/jira/browse/HADOOP-16584
 Project: Hadoop Common
  Issue Type: Task
  Components: fs/s3
 Environment: S
Reporter: Siddharth Seth


There are several S3A test failures when S3Guard is not enabled.
All of these tests pass once the tests are configured to use S3Guard.

{code}
ITestS3GuardTtl#testListingFilteredExpiredItems
[INFO] Running org.apache.hadoop.fs.s3a.ITestS3GuardTtl
[ERROR] Tests run: 10, Failures: 2, Errors: 0, Skipped: 4, Time elapsed: 
102.988 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3GuardTtl
[ERROR] 
testListingFilteredExpiredItems[0](org.apache.hadoop.fs.s3a.ITestS3GuardTtl)  
Time elapsed: 14.675 s  <<< FAILURE!
java.lang.AssertionError:
[Metastrore directory listing of 
s3a://sseth-dev-in/fork-0002/test/testListingFilteredExpiredItems]
Expecting actual not to be null
  at 
org.apache.hadoop.fs.s3a.ITestS3GuardTtl.getDirListingMetadata(ITestS3GuardTtl.java:367)
  at 
org.apache.hadoop.fs.s3a.ITestS3GuardTtl.testListingFilteredExpiredItems(ITestS3GuardTtl.java:335)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
  at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
  at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
  at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
  at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
  at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
  at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.lang.Thread.run(Thread.java:748)

[ERROR] 
testListingFilteredExpiredItems[1](org.apache.hadoop.fs.s3a.ITestS3GuardTtl)  
Time elapsed: 44.463 s  <<< FAILURE!
java.lang.AssertionError:
[Metastrore directory listing of 
s3a://sseth-dev-in/fork-0002/test/testListingFilteredExpiredItems]
Expecting actual not to be null
  at 
org.apache.hadoop.fs.s3a.ITestS3GuardTtl.getDirListingMetadata(ITestS3GuardTtl.java:367)
  at 
org.apache.hadoop.fs.s3a.ITestS3GuardTtl.testListingFilteredExpiredItems(ITestS3GuardTtl.java:335)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
  at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
  at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
  at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
  at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
  at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
  at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.lang.Thread.run(Thread.java:748)
{code}

Related to no metastore being used. The test failure happens in teardown with
an NPE, since setup did not complete. This one is likely a simple fix with
some null checks in the teardown method.
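
A sketch of the shape of such a fix (hypothetical code, not the committed
patch; the point is simply that teardown must tolerate an incomplete setup):

{code}
@Override
public void teardown() throws Exception {
  // Setup may have failed before the filesystem or its metadata store was
  // created, so null-check everything before cleaning up.
  if (getFileSystem() != null && getFileSystem().getMetadataStore() != null) {
    // ... per-test cleanup of the metadata store goes here ...
  }
  super.teardown();
}
{code}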
ITestAuthoritativePath (6 failures, all with the same pattern):
{code}
  [ERROR] Tests run: 6, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 8.142 
s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestAuthoritativePath
[ERROR] testPrefixVsDirectory(org.apache.hadoop.fs.s3a.ITestAuthoritativePath)  
Time elapsed: 6.821 s  <<< ERROR!
org.junit.AssumptionViolatedException: FS needs to have a metadatastore.
  at org.junit.Assume.assumeTrue(Assume.java:59)
  at 
org.apache.hadoop.fs.s3a.ITestAuthoritativePath.setup(ITestAuthoritativePath.java:63)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at 

[jira] [Created] (HADOOP-16583) Minor fixes to S3 testing instructions

2019-09-17 Thread Siddharth Seth (Jira)
Siddharth Seth created HADOOP-16583:
---

 Summary: Minor fixes to S3 testing instructions
 Key: HADOOP-16583
 URL: https://issues.apache.org/jira/browse/HADOOP-16583
 Project: Hadoop Common
  Issue Type: Task
  Components: fs/s3
Reporter: Siddharth Seth
Assignee: Siddharth Seth


testing.md has some instructions which no longer work and needs an update.

Specifically: how to enable S3Guard, and how to switch between DynamoDB and
the local metadata store.
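
For reference, the store is selected via {{fs.s3a.metadatastore.impl}} (class
names below are from the 3.2-era S3A code and should be double-checked against
the updated testing.md; there are also {{-Ds3guard}} / {{-Ddynamo}} Maven test
profiles in the same area):

{code}
<property>
  <name>fs.s3a.metadatastore.impl</name>
  <!-- DynamoDB-backed store: -->
  <value>org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore</value>
  <!-- or the in-memory local store:
       org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore -->
</property>
{code}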



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Compile 64bit native Hadoop 3.2.0 binaries for AIX 7.1

2019-09-17 Thread Candy Ho
Dear all,

I have mostly implemented the various 'fixes' and peculiarities of the
Unix-based AIX systems required for Hadoop, and now I'm stuck on the following
compile error:

Linking C shared library target/usr/local/lib/libhadoop.a
ld: 0711-736 ERROR: Input file /usr/java8_64/jre/lib/ppc64/j9vm/libjvm.so
XCOFF64 object files are not allowed in 32-bit mode
collect2: error: ld returned 8 exit status
make: 1254-004 The error code from the last command is 1

I'm using 64-bit Java 8.

My main question is: how do I force a 64-bit build?

Thanks
Candy Ho


Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-09-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/448/

[Sep 17, 2019 4:34:54 PM] (weichiu) HDFS-14771. Backport HDFS-14617 to branch-2 
(Improve fsimage load time

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

Re: [VOTE] Release Apache Hadoop 3.2.1 - RC0

2019-09-17 Thread Naganarasimha Garla
Verified the source and the binary tar and the sha512 checksums
Installed and verified basic Hadoop operations (ran a few MR tasks)

+1.

Thanks,
+ Naga

On Wed, Sep 18, 2019 at 1:32 AM Anil Sadineni  wrote:

> +1 (non-binding)
>
> On Tue, Sep 17, 2019 at 9:55 AM Santosh Marella  wrote:
>
> > +1 (non-binding)
> >
> > On Wed, Sep 11, 2019 at 12:26 AM Rohith Sharma K S <
> > rohithsharm...@apache.org> wrote:
> >
> > > Hi folks,
> > >
> > > I have put together a release candidate (RC0) for Apache Hadoop 3.2.1.
> > >
> > > The RC is available at:
> > > http://home.apache.org/~rohithsharmaks/hadoop-3.2.1-RC0/
> > >
> > > The RC tag in git is release-3.2.1-RC0:
> > > https://github.com/apache/hadoop/tree/release-3.2.1-RC0
> > >
> > >
> > > The maven artifacts are staged at
> > >
> https://repository.apache.org/content/repositories/orgapachehadoop-1226/
> > >
> > > You can find my public key at:
> > > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> > >
> > > This vote will run for 7 days (5 weekdays), ending on 18th Sept at 11:59
> > pm
> > > PST.
> > >
> > > I have done testing with a pseudo cluster and distributed shell job. My
> > +1
> > > to start.
> > >
> > > Thanks & Regards
> > > Rohith Sharma K S
> > >
> >
>
>
> --
> Thanks & Regards,
> Anil Sadineni
> Solutions Architect, Optlin Inc
> Ph: 571-438-1974 | www.optlin.com
>


Re: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk source tree

2019-09-17 Thread Weiwei Yang
+1 (binding)

Thanks
Weiwei

On Wed, Sep 18, 2019 at 6:35 AM Wangda Tan  wrote:

> +1 (binding).
>
> From my experience with the Submarine project, I think moving to a separate
> repo helps.
>
> - Wangda
>
> On Tue, Sep 17, 2019 at 11:41 AM Subru Krishnan  wrote:
>
> > +1 (binding).
> >
> > IIUC, there will not be an Ozone module in trunk anymore as that was my
> > only concern from the original discussion thread? IMHO, this should be
> the
> > default approach for new modules.
> >
> > On Tue, Sep 17, 2019 at 9:58 AM Salvatore LaMendola (BLOOMBERG/ 731 LEX)
> <
> > slamendo...@bloomberg.net> wrote:
> >
> > > +1
> > >
> > > From: e...@apache.org At: 09/17/19 05:48:32
> > > To: hdfs-...@hadoop.apache.org, mapreduce-...@hadoop.apache.org,
> > > common-dev@hadoop.apache.org, yarn-...@hadoop.apache.org
> > > Subject: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk
> > > source tree
> > >
> > > TLDR; I propose to move Ozone-related code out of Hadoop trunk and
> > > store it in a separate *Hadoop* git repository, apache/hadoop-ozone.git.
> > >
> > > When Ozone was adopted as a new Hadoop subproject it was proposed[1] to
> > > be part of the source tree but with a separate release cadence, mainly
> > > because it had hadoop-trunk/SNAPSHOT as a compile-time dependency.
> > >
> > > During the last Ozone releases this dependency was removed to provide
> > > more stable releases. Instead of using the latest trunk/SNAPSHOT build
> > > from Hadoop, Ozone uses the latest stable Hadoop (3.2.0 as of now).
> > >
> > > As we no longer have a strict dependency between Hadoop trunk SNAPSHOT
> > > and Ozone trunk, I propose to separate the two code bases from each
> > > other by creating a new Hadoop git repository (apache/hadoop-ozone.git).
> > >
> > > Moving Ozone to a separate git repository means:
> > >
> > >   * It would be easier to contribute to and understand the build (as of
> > > now we always need `-f pom.ozone.xml` as a Maven parameter)
> > >   * It would be possible to adjust the build process without breaking
> > > Hadoop/Ozone builds.
> > >   * It would be possible to use different Readme/.asf.yaml/github
> > > templates for Hadoop Ozone and core Hadoop. (For example, the current
> > > github template [2] has a link to the contribution guideline [3]. Ozone
> > > has an extended version [4] of this guideline with additional
> > > information.)
> > >   * Testing would be safer, as it won't be possible to change core
> > > Hadoop and Hadoop Ozone in the same patch.
> > >   * It would be easier to cut branches for Hadoop releases (based on
> > > the original consensus, Ozone should be removed from all the release
> > > branches after creating release branches from trunk)
> > >
> > > What do you think?
> > >
> > > Thanks,
> > > Marton
> > >
> > > [1]:
> > > https://lists.apache.org/thread.html/c85e5263dcc0ca1d13cbbe3bcfb53236784a39111b8c353f60582eb4@%3Chdfs-dev.hadoop.apache.org%3E
> > > [2]:
> > > https://github.com/apache/hadoop/blob/trunk/.github/pull_request_template.md
> > > [3]: https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
> > > [4]:
> > > https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone
> > >
> > > -
> > > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> > >
> > >
> > >
> >
>


Re: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk source tree

2019-09-17 Thread Wangda Tan
+1 (binding).

From my experience with the Submarine project, I think moving to a separate
repo helps.

- Wangda

On Tue, Sep 17, 2019 at 11:41 AM Subru Krishnan  wrote:

> +1 (binding).
>
> IIUC, there will not be an Ozone module in trunk anymore as that was my
> only concern from the original discussion thread? IMHO, this should be the
> default approach for new modules.
>
> On Tue, Sep 17, 2019 at 9:58 AM Salvatore LaMendola (BLOOMBERG/ 731 LEX) <
> slamendo...@bloomberg.net> wrote:
>
> > +1
> >
> > From: e...@apache.org At: 09/17/19 05:48:32
> > To: hdfs-...@hadoop.apache.org, mapreduce-...@hadoop.apache.org,
> > common-dev@hadoop.apache.org, yarn-...@hadoop.apache.org
> > Subject: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk
> > source tree
> >
> > TLDR; I propose to move Ozone-related code out of Hadoop trunk and
> > store it in a separate *Hadoop* git repository, apache/hadoop-ozone.git.
> >
> > When Ozone was adopted as a new Hadoop subproject it was proposed[1] to
> > be part of the source tree but with a separate release cadence, mainly
> > because it had hadoop-trunk/SNAPSHOT as a compile-time dependency.
> >
> > During the last Ozone releases this dependency was removed to provide
> > more stable releases. Instead of using the latest trunk/SNAPSHOT build
> > from Hadoop, Ozone uses the latest stable Hadoop (3.2.0 as of now).
> >
> > As we no longer have a strict dependency between Hadoop trunk SNAPSHOT
> > and Ozone trunk, I propose to separate the two code bases from each
> > other by creating a new Hadoop git repository (apache/hadoop-ozone.git).
> >
> > Moving Ozone to a separate git repository means:
> >
> >   * It would be easier to contribute to and understand the build (as of
> > now we always need `-f pom.ozone.xml` as a Maven parameter)
> >   * It would be possible to adjust the build process without breaking
> > Hadoop/Ozone builds.
> >   * It would be possible to use different Readme/.asf.yaml/github
> > templates for Hadoop Ozone and core Hadoop. (For example, the current
> > github template [2] has a link to the contribution guideline [3]. Ozone
> > has an extended version [4] of this guideline with additional
> > information.)
> >   * Testing would be safer, as it won't be possible to change core
> > Hadoop and Hadoop Ozone in the same patch.
> >   * It would be easier to cut branches for Hadoop releases (based on
> > the original consensus, Ozone should be removed from all the release
> > branches after creating release branches from trunk)
> >
> > What do you think?
> >
> > Thanks,
> > Marton
> >
> > [1]:
> > https://lists.apache.org/thread.html/c85e5263dcc0ca1d13cbbe3bcfb53236784a39111b8c353f60582eb4@%3Chdfs-dev.hadoop.apache.org%3E
> > [2]:
> > https://github.com/apache/hadoop/blob/trunk/.github/pull_request_template.md
> > [3]: https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
> > [4]:
> > https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone
> >
> > -
> > To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> > For additional commands, e-mail: common-dev-h...@hadoop.apache.org
> >
> >
> >
>


[jira] [Created] (HADOOP-16582) LocalFileSystem's mkdirs() does not work as expected under viewfs.

2019-09-17 Thread Kihwal Lee (Jira)
Kihwal Lee created HADOOP-16582:
---

 Summary: LocalFileSystem's mkdirs() does not work as expected 
under viewfs.
 Key: HADOOP-16582
 URL: https://issues.apache.org/jira/browse/HADOOP-16582
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee


When {{mkdirs(Path)}} is called against {{LocalFileSystem}}, the implementation
in {{RawLocalFileSystem}} is called and the directory permission is determined
by the umask.  However, if it is under {{ViewFileSystem}}, the default
implementation in {{FileSystem}} is called instead, and this causes an explicit
{{chmod()}} to 0777.

The {{mkdirs(Path)}} method needs to be overridden in
- ViewFileSystem, to avoid calling the default implementation
- ChRootedFileSystem, for proper resolution of the viewfs mount table
- FilterFileSystem, to avoid calling the default implementation

Only then will the same method in the target ({{LocalFileSystem}} in this case)
be called.  HDFS does not suffer from the same flaw, since it applies the umask
in all cases regardless of which version of {{mkdirs()}} was called.
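
As an illustration, the {{FilterFileSystem}} part could be as small as the
following sketch (hypothetical code, not the committed patch):

{code}
// Forward the one-argument mkdirs(Path) to the wrapped filesystem instead of
// inheriting FileSystem's default, which creates the directory and then
// explicitly chmods it with the full 0777 permission.
@Override
public boolean mkdirs(Path f) throws IOException {
  return fs.mkdirs(f);  // 'fs' is the wrapped FileSystem in FilterFileSystem
}
{code}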




--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16581) ValueQueue does not trigger an async refill when number of values falls below watermark

2019-09-17 Thread Yuval Degani (Jira)
Yuval Degani created HADOOP-16581:
-

 Summary: ValueQueue does not trigger an async refill when number 
of values falls below watermark
 Key: HADOOP-16581
 URL: https://issues.apache.org/jira/browse/HADOOP-16581
 Project: Hadoop Common
  Issue Type: Bug
  Components: common, kms
Affects Versions: 3.2.0, 2.7.4
Reporter: Yuval Degani
Assignee: Yuval Degani
 Fix For: 3.2.1


The ValueQueue facility was designed to cache EDEKs for KMS KeyProviders so 
that EDEKs could be served quickly, while the cache is replenished in a 
background thread.

The asynchronous refill is currently triggered only when a key queue becomes
empty, rather than when it falls below the configured watermark.

This is a relatively minor fix in the main code; however, most of the tests
require changes, as they verify the previous, unintended behavior.
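
The intended trigger is roughly the following (a sketch with hypothetical
names, not the actual {{ValueQueue}} fields):

{code}
// Refill asynchronously as soon as the cache drops below the low watermark,
// not only once it has been fully drained.
if (keyQueue.size() < (int) (lowWatermark * numValues)) {
  submitRefillTask(keyName, keyQueue);  // hypothetical async-refill helper
}
{code}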



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk source tree

2019-09-17 Thread Subru Krishnan
+1 (binding).

IIUC, there will not be an Ozone module in trunk anymore as that was my
only concern from the original discussion thread? IMHO, this should be the
default approach for new modules.

On Tue, Sep 17, 2019 at 9:58 AM Salvatore LaMendola (BLOOMBERG/ 731 LEX) <
slamendo...@bloomberg.net> wrote:

> +1
>
> From: e...@apache.org At: 09/17/19 05:48:32
> To: hdfs-...@hadoop.apache.org, mapreduce-...@hadoop.apache.org,
> common-dev@hadoop.apache.org, yarn-...@hadoop.apache.org
> Subject: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk
> source tree
>
> TLDR; I propose to move Ozone-related code out of Hadoop trunk and
> store it in a separate *Hadoop* git repository, apache/hadoop-ozone.git.
>
> When Ozone was adopted as a new Hadoop subproject it was proposed[1] to
> be part of the source tree but with a separate release cadence, mainly
> because it had hadoop-trunk/SNAPSHOT as a compile-time dependency.
>
> During the last Ozone releases this dependency was removed to provide
> more stable releases. Instead of using the latest trunk/SNAPSHOT build
> from Hadoop, Ozone uses the latest stable Hadoop (3.2.0 as of now).
>
> As we no longer have a strict dependency between Hadoop trunk SNAPSHOT
> and Ozone trunk, I propose to separate the two code bases from each
> other by creating a new Hadoop git repository (apache/hadoop-ozone.git).
>
> Moving Ozone to a separate git repository means:
>
>   * It would be easier to contribute to and understand the build (as of
> now we always need `-f pom.ozone.xml` as a Maven parameter)
>   * It would be possible to adjust the build process without breaking
> Hadoop/Ozone builds.
>   * It would be possible to use different Readme/.asf.yaml/github
> templates for Hadoop Ozone and core Hadoop. (For example, the current
> github template [2] has a link to the contribution guideline [3]. Ozone
> has an extended version [4] of this guideline with additional
> information.)
>   * Testing would be safer, as it won't be possible to change core
> Hadoop and Hadoop Ozone in the same patch.
>   * It would be easier to cut branches for Hadoop releases (based on
> the original consensus, Ozone should be removed from all the release
> branches after creating release branches from trunk)
>
> What do you think?
>
> Thanks,
> Marton
>
> [1]:
> https://lists.apache.org/thread.html/c85e5263dcc0ca1d13cbbe3bcfb53236784a39111b8c353f60582eb4@%3Chdfs-dev.hadoop.apache.org%3E
> [2]:
> https://github.com/apache/hadoop/blob/trunk/.github/pull_request_template.md
> [3]: https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
> [4]:
> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>
>


Re: [VOTE] Release Apache Hadoop 3.2.1 - RC0

2019-09-17 Thread Anil Sadineni
+1 (non-binding)

On Tue, Sep 17, 2019 at 9:55 AM Santosh Marella  wrote:

> +1 (non-binding)
>
> On Wed, Sep 11, 2019 at 12:26 AM Rohith Sharma K S <
> rohithsharm...@apache.org> wrote:
>
> > Hi folks,
> >
> > I have put together a release candidate (RC0) for Apache Hadoop 3.2.1.
> >
> > The RC is available at:
> > http://home.apache.org/~rohithsharmaks/hadoop-3.2.1-RC0/
> >
> > The RC tag in git is release-3.2.1-RC0:
> > https://github.com/apache/hadoop/tree/release-3.2.1-RC0
> >
> >
> > The maven artifacts are staged at
> > https://repository.apache.org/content/repositories/orgapachehadoop-1226/
> >
> > You can find my public key at:
> > https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> > This vote will run for 7 days (5 weekdays), ending on 18th Sept at 11:59
> pm
> > PST.
> >
> > I have done testing with a pseudo cluster and distributed shell job. My
> +1
> > to start.
> >
> > Thanks & Regards
> > Rohith Sharma K S
> >
>


-- 
Thanks & Regards,
Anil Sadineni
Solutions Architect, Optlin Inc
Ph: 571-438-1974 | www.optlin.com


Re: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk source tree

2019-09-17 Thread Dinesh Chitlangia
+1

-Dinesh




On Tue, Sep 17, 2019 at 12:58 PM Salvatore LaMendola (BLOOMBERG/ 731 LEX) <
slamendo...@bloomberg.net> wrote:

> +1
>
> From: e...@apache.org At: 09/17/19 05:48:32
> To: hdfs-...@hadoop.apache.org, mapreduce-...@hadoop.apache.org,
> common-dev@hadoop.apache.org, yarn-...@hadoop.apache.org
> Subject: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk
> source tree
>
> TLDR; I propose to move Ozone-related code out of Hadoop trunk and
> store it in a separate *Hadoop* git repository, apache/hadoop-ozone.git.
>
> When Ozone was adopted as a new Hadoop subproject it was proposed[1] to
> be part of the source tree but with a separate release cadence, mainly
> because it had hadoop-trunk/SNAPSHOT as a compile-time dependency.
>
> During the last Ozone releases this dependency was removed to provide
> more stable releases. Instead of using the latest trunk/SNAPSHOT build
> from Hadoop, Ozone uses the latest stable Hadoop (3.2.0 as of now).
>
> As we no longer have a strict dependency between Hadoop trunk SNAPSHOT
> and Ozone trunk, I propose to separate the two code bases from each
> other by creating a new Hadoop git repository (apache/hadoop-ozone.git).
>
> Moving Ozone to a separate git repository means:
>
>   * It would be easier to contribute to and understand the build (as of
> now we always need `-f pom.ozone.xml` as a Maven parameter)
>   * It would be possible to adjust the build process without breaking
> Hadoop/Ozone builds.
>   * It would be possible to use different Readme/.asf.yaml/github
> templates for Hadoop Ozone and core Hadoop. (For example, the current
> github template [2] has a link to the contribution guideline [3]. Ozone
> has an extended version [4] of this guideline with additional
> information.)
>   * Testing would be safer, as it won't be possible to change core
> Hadoop and Hadoop Ozone in the same patch.
>   * It would be easier to cut branches for Hadoop releases (based on
> the original consensus, Ozone should be removed from all the release
> branches after creating release branches from trunk)
>
> What do you think?
>
> Thanks,
> Marton
>
> [1]:
> https://lists.apache.org/thread.html/c85e5263dcc0ca1d13cbbe3bcfb53236784a39111b8c353f60582eb4@%3Chdfs-dev.hadoop.apache.org%3E
> [2]:
> https://github.com/apache/hadoop/blob/trunk/.github/pull_request_template.md
> [3]: https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
> [4]:
> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone
>
> -
> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>
>
>


Re: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk source tree

2019-09-17 Thread Salvatore LaMendola (BLOOMBERG/ 731 LEX)
+1

From: e...@apache.org At: 09/17/19 05:48:32
To: hdfs-...@hadoop.apache.org, mapreduce-...@hadoop.apache.org,
common-dev@hadoop.apache.org, yarn-...@hadoop.apache.org
Subject: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk source tree


TLDR; I propose to move Ozone-related code out of Hadoop trunk and
store it in a separate *Hadoop* git repository, apache/hadoop-ozone.git.


When Ozone was adopted as a new Hadoop subproject it was proposed[1] to
be part of the source tree but with a separate release cadence, mainly
because it had hadoop-trunk/SNAPSHOT as a compile-time dependency.

During the last Ozone releases this dependency was removed to provide
more stable releases. Instead of using the latest trunk/SNAPSHOT build
from Hadoop, Ozone uses the latest stable Hadoop (3.2.0 as of now).

As we no longer have a strict dependency between Hadoop trunk SNAPSHOT and
Ozone trunk, I propose to separate the two code bases from each other by
creating a new Hadoop git repository (apache/hadoop-ozone.git).

Moving Ozone to a separate git repository means:

  * It would be easier to contribute to and understand the build (as of now
we always need `-f pom.ozone.xml` as a Maven parameter)
  * It would be possible to adjust the build process without breaking
Hadoop/Ozone builds.
  * It would be possible to use different Readme/.asf.yaml/github
templates for Hadoop Ozone and core Hadoop. (For example, the current
github template [2] has a link to the contribution guideline [3]. Ozone
has an extended version [4] of this guideline with additional
information.)
  * Testing would be safer, as it won't be possible to change core
Hadoop and Hadoop Ozone in the same patch.
  * It would be easier to cut branches for Hadoop releases (based on the
original consensus, Ozone should be removed from all the release
branches after creating release branches from trunk)


What do you think?

Thanks,
Marton

[1]:
https://lists.apache.org/thread.html/c85e5263dcc0ca1d13cbbe3bcfb53236784a39111b8c353f60582eb4@%3Chdfs-dev.hadoop.apache.org%3E
[2]:
https://github.com/apache/hadoop/blob/trunk/.github/pull_request_template.md
[3]: https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[4]:
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org




Re: [VOTE] Release Apache Hadoop 3.2.1 - RC0

2019-09-17 Thread Santosh Marella
+1 (non-binding)

On Wed, Sep 11, 2019 at 12:26 AM Rohith Sharma K S <
rohithsharm...@apache.org> wrote:

> Hi folks,
>
> I have put together a release candidate (RC0) for Apache Hadoop 3.2.1.
>
> The RC is available at:
> http://home.apache.org/~rohithsharmaks/hadoop-3.2.1-RC0/
>
> The RC tag in git is release-3.2.1-RC0:
> https://github.com/apache/hadoop/tree/release-3.2.1-RC0
>
>
> The maven artifacts are staged at
> https://repository.apache.org/content/repositories/orgapachehadoop-1226/
>
> You can find my public key at:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> This vote will run for 7 days (5 weekdays), ending on 18th Sept at 11:59 pm
> PST.
>
> I have done testing with a pseudo cluster and distributed shell job. My +1
> to start.
>
> Thanks & Regards
> Rohith Sharma K S
>


Re: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk source tree

2019-09-17 Thread Anu Engineer
+1
—Anu

> On Sep 17, 2019, at 2:49 AM, Elek, Marton  wrote:
> 
> 
> 
> TLDR; I propose to move Ozone-related code out of Hadoop trunk and store it
> in a separate *Hadoop* git repository, apache/hadoop-ozone.git.
>
> When Ozone was adopted as a new Hadoop subproject it was proposed[1] to be
> part of the source tree but with a separate release cadence, mainly because
> it had hadoop-trunk/SNAPSHOT as a compile-time dependency.
>
> During the last Ozone releases this dependency was removed to provide more
> stable releases. Instead of using the latest trunk/SNAPSHOT build from
> Hadoop, Ozone uses the latest stable Hadoop (3.2.0 as of now).
>
> As we no longer have a strict dependency between Hadoop trunk SNAPSHOT and
> Ozone trunk, I propose to separate the two code bases from each other by
> creating a new Hadoop git repository (apache/hadoop-ozone.git).
>
> Moving Ozone to a separate git repository means:
>
> * It would be easier to contribute to and understand the build (as of now we
> always need `-f pom.ozone.xml` as a Maven parameter)
> * It would be possible to adjust the build process without breaking
> Hadoop/Ozone builds.
> * It would be possible to use different Readme/.asf.yaml/github templates
> for Hadoop Ozone and core Hadoop. (For example, the current github template
> [2] has a link to the contribution guideline [3]. Ozone has an extended
> version [4] of this guideline with additional information.)
> * Testing would be safer, as it won't be possible to change core Hadoop and
> Hadoop Ozone in the same patch.
> * It would be easier to cut branches for Hadoop releases (based on the
> original consensus, Ozone should be removed from all the release branches
> after creating release branches from trunk)
> 
> 
> What do you think?
> 
> Thanks,
> Marton
> 
> [1]: 
> https://lists.apache.org/thread.html/c85e5263dcc0ca1d13cbbe3bcfb53236784a39111b8c353f60582eb4@%3Chdfs-dev.hadoop.apache.org%3E
> [2]: 
> https://github.com/apache/hadoop/blob/trunk/.github/pull_request_template.md
> [3]: https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
> [4]: 
> https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone
> 
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> 

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2019-09-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1262/

[Sep 16, 2019 12:18:01 PM] (elek) HDDS-2044.Remove 'ozone' from the recon 
module names.
[Sep 16, 2019 12:55:06 PM] (elek) HDDS-2096. Ozone ACL document missing AddAcl 
API
[Sep 16, 2019 1:22:11 PM] (elek) HDDS-2109. Refactor scm.container.client config
[Sep 16, 2019 1:41:17 PM] (elek) HDDS-2124. Random next links
[Sep 16, 2019 2:58:10 PM] (elek) HDDS-2078. Get/Renew DelegationToken NPE after 
HDDS-1909
[Sep 16, 2019 7:17:33 PM] (aengineer) HDDS-2030. Generate simplifed reports by 
the dev-support/checks/*.sh
[Sep 16, 2019 7:57:41 PM] (xyao) HDDS-1879.  Support multiple excluded scopes 
when choosing datanodes in
[Sep 16, 2019 7:58:16 PM] (hanishakoneru) HDDS-2107. Datanodes should retry 
forever to connect to SCM in an
[Sep 16, 2019 10:48:09 PM] (aengineer) HDDS-2111. XSS fragments can be injected 
to the S3g landing page




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

XML :

   Parsing Error(s): 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-excerpt.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-output-missing-tags2.xml
 
   
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/resources/nvidia-smi-sample-output.xml
 

FindBugs :

   
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-mawo/hadoop-yarn-applications-mawo-core
 
   Class org.apache.hadoop.applications.mawo.server.common.TaskStatus 
implements Cloneable but does not define or use clone method At 
TaskStatus.java:does not define or use clone method At TaskStatus.java:[lines 
39-346] 
   Equals method for 
org.apache.hadoop.applications.mawo.server.worker.WorkerId assumes the argument 
is of type WorkerId At WorkerId.java:the argument is of type WorkerId At 
WorkerId.java:[line 114] 
   
org.apache.hadoop.applications.mawo.server.worker.WorkerId.equals(Object) does 
not check for null argument At WorkerId.java:null argument At 
WorkerId.java:[lines 114-115] 

Failed CTEST tests :

   test_test_libhdfs_ops_hdfs_static 
   test_test_libhdfs_threaded_hdfs_static 
   test_test_libhdfs_zerocopy_hdfs_static 
   test_test_native_mini_dfs 
   test_libhdfs_threaded_hdfspp_test_shim_static 
   test_hdfspp_mini_dfs_smoke_hdfspp_test_shim_static 
   libhdfs_mini_stress_valgrind_hdfspp_test_static 
   memcheck_libhdfs_mini_stress_valgrind_hdfspp_test_static 
   test_libhdfs_mini_stress_hdfspp_test_shim_static 
   test_hdfs_ext_hdfspp_test_shim_static 

Failed junit tests :

   hadoop.hdfs.server.federation.security.TestRouterHttpDelegationToken 
   hadoop.hdfs.server.federation.router.TestRouterFaultTolerant 
   hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup 
   hadoop.fs.adl.live.TestAdlSdkConfiguration 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1262/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1262/artifact/out/diff-compile-javac-root.txt
  [332K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1262/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1262/artifact/out/diff-patch-hadolint.txt
  [8.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1262/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1262/artifact/out/diff-patch-pylint.txt
  [220K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1262/artifact/out/diff-patch-shellcheck.txt
  [24K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1262/artifact/out/diff-patch-shelldocs.txt
  [44K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1262/artifact/out/whitespace-eol.txt
  [9.6M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1262/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/1262/artifact/out/xml.txt
  [16K]

   findbugs:

   

[jira] [Created] (HADOOP-16580) Disable retry of FailoverOnNetworkExceptionRetry in case of AccessControlException

2019-09-17 Thread Adam Antal (Jira)
Adam Antal created HADOOP-16580:
---

 Summary: Disable retry of FailoverOnNetworkExceptionRetry in case 
of AccessControlException
 Key: HADOOP-16580
 URL: https://issues.apache.org/jira/browse/HADOOP-16580
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Affects Versions: 3.3.0
Reporter: Adam Antal
Assignee: Adam Antal


HADOOP-14982 handled the case where a SaslException is thrown. The issue still
persists, since the exception that is thrown is an *AccessControlException*,
because the user has no Kerberos credentials.

My suggestion is that we should add this case as well to 
{{FailoverOnNetworkExceptionRetry}}.
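
A sketch of the suggested check inside the policy's {{shouldRetry()}}
(illustrative only, not the committed patch):

{code}
// Treat a missing-credentials failure as fatal instead of retriable:
// failing over and retrying cannot fix an authorization problem.
if (e instanceof AccessControlException ||
    e.getCause() instanceof AccessControlException) {
  return RetryAction.FAIL;
}
{code}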



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16579) Upgrade to Apache Curator 4.2.0 in Hadoop

2019-09-17 Thread Mate Szalay-Beko (Jira)
Mate Szalay-Beko created HADOOP-16579:
-

 Summary: Upgrade to Apache Curator 4.2.0 in Hadoop
 Key: HADOOP-16579
 URL: https://issues.apache.org/jira/browse/HADOOP-16579
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Mate Szalay-Beko


Currently in Hadoop we are using [ZooKeeper version 
3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90].
 ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains 
many new features (including SSL related improvements which can be very 
important for production use; see [the release 
notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]).

Apache Curator is a high level ZooKeeper client library, that makes it easier 
to use the low level ZooKeeper API. Currently [in Hadoop we are using Curator 
2.13.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L91]
 and [in Ozone we use Curator 
2.12.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/pom.ozone.xml#L146].

Curator 2.x supports only the ZooKeeper 3.4.x releases, while Curator 3.x is
compatible only with the new ZooKeeper 3.5.x releases. Fortunately, the latest
Curator 4.x versions are compatible with both ZooKeeper 3.4.x and 3.5.x (see
[the relevant Curator
page|https://curator.apache.org/zk-compatibility.html]). Many Apache projects
have already migrated to Curator 4 (like HBase, Phoenix, Druid, etc.); other
components are doing it right now (e.g. Hive).

*The aims of this task are* to:
 - change Curator version in Hadoop to the latest stable 4.x version (currently 
4.2.0)
 - also make sure we don't have multiple ZooKeeper versions on the classpath,
to avoid runtime problems (it is
[recommended|https://curator.apache.org/zk-compatibility.html] to exclude the
ZooKeeper that comes with Curator, so that there will be only a single
ZooKeeper version used in Hadoop; see the sketch below)
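
The exclusion would look roughly like this (a sketch of the pattern only; the
exact coordinates belong in hadoop-project/pom.xml):

{code}
<dependency>
  <groupId>org.apache.curator</groupId>
  <artifactId>curator-framework</artifactId>
  <version>4.2.0</version>
  <exclusions>
    <!-- Use the single ZooKeeper version managed by Hadoop itself. -->
    <exclusion>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{code}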

In this ticket we still don't want to change the default ZooKeeper version in
Hadoop; we only want to make it possible for the community to build / use
Hadoop with the new ZooKeeper (e.g. if they need to secure the ZooKeeper
communication with SSL, which is supported only in the new ZooKeeper version).
Upgrading to Curator 4.x should keep Hadoop compatible with both ZooKeeper 3.4
and 3.5.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16371) Option to disable GCM for SSL connections when running on Java 8

2019-09-17 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16371.
-
Fix Version/s: 3.3.0
   Resolution: Fixed

> Option to disable GCM for SSL connections when running on Java 8
> 
>
> Key: HADOOP-16371
> URL: https://issues.apache.org/jira/browse/HADOOP-16371
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Fix For: 3.3.0
>
>
> This was the original objective of HADOOP-16050. HADOOP-16050 was changed to 
> mimic HADOOP-15669 and added (or attempted to add) support for 
> Wildfly-OpenSSL in S3A.
> Due to the number of issues seen with S3A + WildFly OpenSSL (see
> HADOOP-16346), HADOOP-16050 was reverted.
> As shown in the description of HADOOP-16050, and the analysis done in 
> HADOOP-15669, GCM has major performance issues when running on Java 8. 
> Removing it from the list of available ciphers can drastically improve 
> performance, perhaps not as much as using OpenSSL, but still a considerable 
> amount.
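
For background, the generic JSSE idea is simply to drop the GCM suites from
the enabled cipher list. A self-contained illustration (the actual change is
a Hadoop-side configuration option, not this code):

{code}
import java.util.Arrays;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public final class NoGcmExample {
  /** Return an SSLEngine with all GCM cipher suites removed. */
  public static SSLEngine withoutGcm(SSLContext ctx) {
    SSLEngine engine = ctx.createSSLEngine();
    String[] filtered = Arrays.stream(engine.getEnabledCipherSuites())
        .filter(suite -> !suite.contains("_GCM_"))
        .toArray(String[]::new);
    engine.setEnabledCipherSuites(filtered);
    return engine;
  }
}
{code}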



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk source tree

2019-09-17 Thread Elek, Marton




TLDR; I propose to move Ozone-related code out of Hadoop trunk and
store it in a separate *Hadoop* git repository, apache/hadoop-ozone.git.


When Ozone was adopted as a new Hadoop subproject it was proposed[1] to
be part of the source tree but with a separate release cadence, mainly
because it had hadoop-trunk/SNAPSHOT as a compile-time dependency.

During the last Ozone releases this dependency was removed to provide
more stable releases. Instead of using the latest trunk/SNAPSHOT build
from Hadoop, Ozone uses the latest stable Hadoop (3.2.0 as of now).

As we no longer have a strict dependency between Hadoop trunk SNAPSHOT and
Ozone trunk, I propose to separate the two code bases from each other by
creating a new Hadoop git repository (apache/hadoop-ozone.git).

Moving Ozone to a separate git repository means:

 * It would be easier to contribute to and understand the build (as of now
we always need `-f pom.ozone.xml` as a Maven parameter)
 * It would be possible to adjust the build process without breaking
Hadoop/Ozone builds.
 * It would be possible to use different Readme/.asf.yaml/github
templates for Hadoop Ozone and core Hadoop. (For example, the current
github template [2] has a link to the contribution guideline [3]. Ozone
has an extended version [4] of this guideline with additional
information.)
 * Testing would be safer, as it won't be possible to change core
Hadoop and Hadoop Ozone in the same patch.
 * It would be easier to cut branches for Hadoop releases (based on the
original consensus, Ozone should be removed from all the release
branches after creating release branches from trunk)



What do you think?

Thanks,
Marton

[1]: 
https://lists.apache.org/thread.html/c85e5263dcc0ca1d13cbbe3bcfb53236784a39111b8c353f60582eb4@%3Chdfs-dev.hadoop.apache.org%3E
[2]: 
https://github.com/apache/hadoop/blob/trunk/.github/pull_request_template.md

[3]: https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[4]: 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org