[jira] [Created] (HADOOP-16594) add sm4 crypto to hdfs

2019-09-23 Thread zZtai (Jira)
zZtai created HADOOP-16594:
--

 Summary: add sm4 crypto to hdfs
 Key: HADOOP-16594
 URL: https://issues.apache.org/jira/browse/HADOOP-16594
 Project: Hadoop Common
  Issue Type: New Feature
  Components: common
Affects Versions: 3.1.1
Reporter: zZtai
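The issue was filed without a description. As background only: SM4 is a 128-bit block cipher, and a minimal sketch of exercising it through the JCE — assuming the BouncyCastle provider and a CTR transformation analogous to the AES/CTR/NoPadding suite already used by HDFS encryption; neither detail comes from this issue — could look like:

{code}
// Background sketch only: an SM4 round trip via the JCE with the
// BouncyCastle provider (an assumption; not something this issue specifies).
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.security.Security;
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import org.bouncycastle.jce.provider.BouncyCastleProvider;

public class Sm4CtrRoundTrip {
  public static void main(String[] args) throws Exception {
    Security.addProvider(new BouncyCastleProvider());
    byte[] key = new byte[16];   // SM4 uses a 128-bit key
    byte[] iv = new byte[16];    // 128-bit counter block for CTR mode
    new SecureRandom().nextBytes(key);
    new SecureRandom().nextBytes(iv);

    Cipher enc = Cipher.getInstance("SM4/CTR/NoPadding", "BC");
    enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "SM4"),
        new IvParameterSpec(iv));
    byte[] ciphertext = enc.doFinal("hello hdfs".getBytes(StandardCharsets.UTF_8));

    Cipher dec = Cipher.getInstance("SM4/CTR/NoPadding", "BC");
    dec.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "SM4"),
        new IvParameterSpec(iv));
    System.out.println(new String(dec.doFinal(ciphertext), StandardCharsets.UTF_8));
  }
}
{code}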






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2019-09-23 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/454/

[Sep 23, 2019 3:06:27 PM] (ekrogen) HADOOP-16581. Addendum: Remove use of Java 8 functionality. Contributed
[Sep 23, 2019 8:13:24 PM] (jhung) YARN-9762. Add submission context label to audit logs. Contributed by

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org

Re: [NOTICE] Building trunk needs protoc 3.7.1

2019-09-23 Thread Duo Zhang
The new protobuf plugin related issues have all been pushed to trunk (though
I think we'd better port them to all active branches).

So what's the next step? Shade and relocate protobuf? HBase has already
done this before, so I do not think it will take too much time. If we all
agree on the solution, I think we can finish it in one week.

One remaining question: is it OK to upgrade protobuf in a minor release?
Of course, if we shade and relocate protobuf it will hurt users less, since
they can still depend on protobuf 2.5 explicitly if they want, but it is
still a bit uncomfortable.

Thanks.

On Tue, Sep 24, 2019 at 2:29 AM, Wangda Tan wrote:

> Hi Vinay,
>
> Thanks for the clarification.
>
> Do you have a timeline for when all the compatibility work you described
> will be completed? I'm asking because we need to release 3.3.0 as early
> as possible; there are already 1k+ patches in 3.3.0, so we should get it
> out soon.
>
> If the PB work will take more time, do you think we should create a
> branch for 3.3, revert the PB changes from branch-3.3, and keep working
> on PB for the next minor release (or major release, if we do see
> compatibility issues in the future)?
>
> Just my $0.02
>
> Thanks,
> Wangda
>
> On Mon, Sep 23, 2019 at 5:43 AM Steve Loughran wrote:
>
> > aah, that makes sense
> >
> > On Sun, Sep 22, 2019 at 6:11 PM Vinayakumar B wrote:
> >
> > > Thanks Steve.
> > >
> > > The idea is not to shade all artifacts. Instead, maintain one
> > > artifact (hadoop-thirdparty) which has all such dependencies
> > > (com.google.*, maybe), and add this artifact as a dependency in the
> > > Hadoop modules. Use the shaded classes directly in the code of the
> > > Hadoop modules instead of shading at the package phase.
> > >
> > > HBase, Ozone and Ratis already follow this approach. The artifact
> > > (hadoop-thirdparty) with the shaded dependencies can be maintained
> > > in a separate repo, as suggested by Stack on HADOOP-13363, or it
> > > could be maintained as a separate module in the Hadoop repo. If
> > > maintained in a separate repo, it only needs to be built when there
> > > are changes related to the shaded dependencies.
> > >
> > > -Vinay
> > >
> > > On Sun, 22 Sep 2019, 10:11 pm Steve Loughran wrote:
> > >
> > > > On Sun, Sep 22, 2019 at 3:22 PM Vinayakumar B <vinayakum...@apache.org> wrote:
> > > >
> > > >> Protobuf provides wire compatibility between releases, but does
> > > >> not guarantee source compatibility of the generated sources.
> > > >> There will be a compatibility problem if anyone uses the
> > > >> generated protobuf messages outside of the Hadoop modules, which
> > > >> ideally they shouldn't, as the generated sources are not public
> > > >> APIs.
> > > >>
> > > >> There should not be any compatibility problem between releases in
> > > >> terms of communication, provided both sides use the same syntax
> > > >> (proto2) for the proto messages. I have verified this by
> > > >> communication between a protobuf 2.5.0 client and a protobuf
> > > >> 3.7.1 server.
> > > >>
> > > >> To avoid the downstream transitive-dependency classpath problem
> > > >> for users who might be using protobuf 2.5.0 classes, the plan is
> > > >> to shade the 3.7.1 classes and their usages in all Hadoop
> > > >> modules, and keep the 2.5.0 jar in the Hadoop classpath.
> > > >>
> > > >> Hope I have answered your question.
> > > >>
> > > >> -Vinay
> > > >>
> > > > While I support the move and CP isolation, this is going to
> > > > (finally) force us to make shaded versions of all artifacts which
> > > > we publish with the intent of them being loaded on the classpath
> > > > of other applications.
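To make the relocation idea concrete: a minimal sketch of what "use shaded classes directly in the code of hadoop modules" could look like, assuming a hadoop-thirdparty artifact that republishes protobuf-java 3.7.1 under a relocated package (the org.apache.hadoop.thirdparty.protobuf name below is an assumption for illustration, not something settled in this thread):

{code}
// Illustration only: assumes a hadoop-thirdparty jar that relocates
// com.google.protobuf 3.7.1 to org.apache.hadoop.thirdparty.protobuf.
import java.io.IOException;
import org.apache.hadoop.thirdparty.protobuf.ByteString;
import org.apache.hadoop.thirdparty.protobuf.CodedOutputStream;

public class ShadedProtobufUsage {
  // Hadoop module code imports the relocated classes directly, so the
  // unshaded protobuf-java 2.5.0 jar can stay on the downstream-facing
  // classpath without clashing with the 3.7.1 classes used internally.
  public static byte[] frame(ByteString payload) throws IOException {
    byte[] buf = new byte[CodedOutputStream.computeBytesSizeNoTag(payload)];
    CodedOutputStream out = CodedOutputStream.newInstance(buf);
    out.writeBytesNoTag(payload);
    out.flush();
    return buf;
  }
}
{code}

A downstream application that still depends on com.google.protobuf 2.5.0 never sees these relocated classes, which is the classpath-isolation property discussed above.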


Re: [NOTICE] Building trunk needs protoc 3.7.1

2019-09-23 Thread Wangda Tan
Hi Vinay,

Thanks for the clarification.

Do you have a timeline for when all the compatibility work you described
will be completed? I'm asking because we need to release 3.3.0 as early as
possible; there are already 1k+ patches in 3.3.0, so we should get it out
soon.

If the PB work will take more time, do you think we should create a branch
for 3.3, revert the PB changes from branch-3.3, and keep working on PB for
the next minor release (or major release, if we do see compatibility issues
in the future)?

Just my $0.02

Thanks,
Wangda

On Mon, Sep 23, 2019 at 5:43 AM Steve Loughran wrote:

> aah, that makes sense
>
> On Sun, Sep 22, 2019 at 6:11 PM Vinayakumar B wrote:
>
> > Thanks Steve.
> >
> > The idea is not to shade all artifacts. Instead, maintain one artifact
> > (hadoop-thirdparty) which has all such dependencies (com.google.*,
> > maybe), and add this artifact as a dependency in the Hadoop modules.
> > Use the shaded classes directly in the code of the Hadoop modules
> > instead of shading at the package phase.
> >
> > HBase, Ozone and Ratis already follow this approach. The artifact
> > (hadoop-thirdparty) with the shaded dependencies can be maintained in
> > a separate repo, as suggested by Stack on HADOOP-13363, or it could be
> > maintained as a separate module in the Hadoop repo. If maintained in a
> > separate repo, it only needs to be built when there are changes
> > related to the shaded dependencies.
> >
> > -Vinay
> >
> > On Sun, 22 Sep 2019, 10:11 pm Steve Loughran wrote:
> >
> > > On Sun, Sep 22, 2019 at 3:22 PM Vinayakumar B wrote:
> > >
> > >> Protobuf provides wire compatibility between releases, but does not
> > >> guarantee source compatibility of the generated sources. There will
> > >> be a compatibility problem if anyone uses the generated protobuf
> > >> messages outside of the Hadoop modules, which ideally they
> > >> shouldn't, as the generated sources are not public APIs.
> > >>
> > >> There should not be any compatibility problem between releases in
> > >> terms of communication, provided both sides use the same syntax
> > >> (proto2) for the proto messages. I have verified this by
> > >> communication between a protobuf 2.5.0 client and a protobuf 3.7.1
> > >> server.
> > >>
> > >> To avoid the downstream transitive-dependency classpath problem for
> > >> users who might be using protobuf 2.5.0 classes, the plan is to
> > >> shade the 3.7.1 classes and their usages in all Hadoop modules, and
> > >> keep the 2.5.0 jar in the Hadoop classpath.
> > >>
> > >> Hope I have answered your question.
> > >>
> > >> -Vinay
> > >>
> > > While I support the move and CP isolation, this is going to (finally)
> > > force us to make shaded versions of all artifacts which we publish
> > > with the intent of them being loaded on the classpath of other
> > > applications.


[jira] [Reopened] (HADOOP-16577) Build fails as can't retrieve websocket-servlet

2019-09-23 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-16577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erkin Alp Güney reopened HADOOP-16577:
--

It appeared again.

> Build fails as can't retrieve websocket-servlet
> ---
>
> Key: HADOOP-16577
> URL: https://issues.apache.org/jira/browse/HADOOP-16577
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Erkin Alp Güney
>Priority: Blocker
>  Labels: build, dependencies
>
> I encountered this error when building Hadoop:
> Downloading: https://repository.apache.org/content/repositories/snapshots/org/eclipse/jetty/websocket/websocket-server/9.3.27.v20190418/websocket-server-9.3.27.v20190418.jar
> Sep 15, 2019 7:54:39 AM org.apache.maven.wagon.providers.http.httpclient.impl.execchain.RetryExec execute
> INFO: I/O exception (org.apache.maven.wagon.providers.http.httpclient.NoHttpResponseException) caught when processing request to {s}->https://repository.apache.org:443: The target server failed to respond
> Sep 15, 2019 7:54:39 AM org.apache.maven.wagon.providers.http.httpclient.impl.execchain.RetryExec execute



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [NOTICE] Building trunk needs protoc 3.7.1

2019-09-23 Thread Steve Loughran
aah, that makes sense

On Sun, Sep 22, 2019 at 6:11 PM Vinayakumar B wrote:

> Thanks Steve.
>
> The idea is not to shade all artifacts. Instead, maintain one artifact
> (hadoop-thirdparty) which has all such dependencies (com.google.*,
> maybe), and add this artifact as a dependency in the Hadoop modules. Use
> the shaded classes directly in the code of the Hadoop modules instead of
> shading at the package phase.
>
> HBase, Ozone and Ratis already follow this approach. The artifact
> (hadoop-thirdparty) with the shaded dependencies can be maintained in a
> separate repo, as suggested by Stack on HADOOP-13363, or it could be
> maintained as a separate module in the Hadoop repo. If maintained in a
> separate repo, it only needs to be built when there are changes related
> to the shaded dependencies.
>
> -Vinay
>
> On Sun, 22 Sep 2019, 10:11 pm Steve Loughran wrote:
>
> > On Sun, Sep 22, 2019 at 3:22 PM Vinayakumar B wrote:
> >
> >> Protobuf provides wire compatibility between releases, but does not
> >> guarantee source compatibility of the generated sources. There will
> >> be a compatibility problem if anyone uses the generated protobuf
> >> messages outside of the Hadoop modules, which ideally they shouldn't,
> >> as the generated sources are not public APIs.
> >>
> >> There should not be any compatibility problem between releases in
> >> terms of communication, provided both sides use the same syntax
> >> (proto2) for the proto messages. I have verified this by
> >> communication between a protobuf 2.5.0 client and a protobuf 3.7.1
> >> server.
> >>
> >> To avoid the downstream transitive-dependency classpath problem for
> >> users who might be using protobuf 2.5.0 classes, the plan is to shade
> >> the 3.7.1 classes and their usages in all Hadoop modules, and keep
> >> the 2.5.0 jar in the Hadoop classpath.
> >>
> >> Hope I have answered your question.
> >>
> >> -Vinay
> >>
> > While I support the move and CP isolation, this is going to (finally)
> > force us to make shaded versions of all artifacts which we publish
> > with the intent of them being loaded on the classpath of other
> > applications.


[jira] [Resolved] (HADOOP-16138) hadoop fs mkdir / of nonexistent abfs container raises NPE

2019-09-23 Thread Gabor Bota (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota resolved HADOOP-16138.
-
Target Version/s: 3.3.0
  Resolution: Fixed

> hadoop fs mkdir / of nonexistent abfs container raises NPE
> --
>
> Key: HADOOP-16138
> URL: https://issues.apache.org/jira/browse/HADOOP-16138
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Minor
>
> If you try to do a mkdir on the root of a nonexistent container, you get an NPE:
> {code}
> hadoop fs -mkdir  abfs://contain...@abfswales1.dfs.core.windows.net/  
> {code}
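A hypothetical sketch (not the actual HADOOP-16138 patch) of the kind of guard that turns the NPE into a meaningful error when mkdir targets the root of a missing container; the ContainerClient interface is invented purely for illustration:

{code}
import java.io.FileNotFoundException;
import java.io.IOException;

// Hypothetical sketch only -- not the actual fix. It shows an explicit
// existence check so that a missing container surfaces as a clear error
// instead of a NullPointerException.
class RootMkdirGuard {
  interface ContainerClient {                 // invented for illustration
    boolean containerExists() throws IOException;
  }

  private final ContainerClient client;

  RootMkdirGuard(ContainerClient client) {
    this.client = client;
  }

  void mkdirOnRoot(String rootUri) throws IOException {
    if (!client.containerExists()) {
      throw new FileNotFoundException("Container does not exist: " + rootUri);
    }
    // The root of an existing container already exists; nothing to create.
  }
}
{code}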



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



Re: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk source tree [discussion -> lazy vote]

2019-09-23 Thread Elek, Marton
> Do you see a Submarine like split-also-into-a-TLP for Ozone? If not now,
> sometime further down the line?


Good question, and I don't know what the best answer is right now. It's
definitely an option, but the Submarine move hasn't been finished, so it's
not yet possible to learn from that experience (which could be a useful
input for the decision).


I think it's a bigger/more important question and I would prefer to 
start a new thread about it.


>  If so, why not do both at the same time?

That's an easier question: I think the repo separation is an easier step
with immediate benefits, so I would prefer to do it as soon as possible.


Moving to a separate TLP may take months (discussion, vote, proposal,
board approval, etc.), while this code organization step can be done
easily after the 0.4.1 Ozone release (which is very close, I hope).


As it should be done anyway (with or without a separate TLP), I propose to
do it after the next Ozone release (in the next 1-2 weeks).




As the overall feedback was positive (in fact, many of the answers were
simple +1 votes), I don't think the thread should be repeated under a
[VOTE] subject. Therefore I am calling for lazy consensus. If you have
any objections (against doing the repo separation now, or doing it at
all), please express them in the next 3 days...


Thanks a lot,
Marton

On 9/22/19 4:02 PM, Vinod Kumar Vavilapalli wrote:

It looks to me that the advantages of this additional step are only
incremental, given that you've already decoupled releases and dependencies.

Do you see a Submarine like split-also-into-a-TLP for Ozone? If not now, 
sometime further down the line? If so, why not do both at the same time? I felt 
the same way with Submarine, but couldn't follow up in time.

Thanks
+Vinod


On Sep 18, 2019, at 4:04 AM, Wangda Tan  wrote:

+1 (binding).

From my experience with the Submarine project, I think moving to a separate
repo helps.

- Wangda

On Tue, Sep 17, 2019 at 11:41 AM Subru Krishnan  wrote:


+1 (binding).

IIUC, there will no longer be an Ozone module in trunk, as that was my only
concern from the original discussion thread. IMHO, this should be the
default approach for new modules.

On Tue, Sep 17, 2019 at 9:58 AM Salvatore LaMendola (BLOOMBERG/ 731 LEX) <
slamendo...@bloomberg.net> wrote:


+1

From: e...@apache.org At: 09/17/19 05:48:32
To: hdfs-...@hadoop.apache.org, mapreduce-...@hadoop.apache.org, common-dev@hadoop.apache.org, yarn-...@hadoop.apache.org
Subject: [DISCUSS] Separate Hadoop Core trunk and Hadoop Ozone trunk source tree


TL;DR: I propose to move the Ozone-related code out of Hadoop trunk and
store it in a separate *Hadoop* git repository, apache/hadoop-ozone.git.


When Ozone was adopted as a new Hadoop subproject, it was proposed[1] to
be part of the source tree but with a separate release cadence, mainly
because it had hadoop-trunk/SNAPSHOT as a compile-time dependency.

During the last Ozone releases this dependency was removed to provide
more stable releases. Instead of using the latest trunk/SNAPSHOT build
of Hadoop, Ozone uses the latest stable Hadoop (3.2.0 as of now).

As we no longer have a strict dependency between Hadoop trunk SNAPSHOT and
Ozone trunk, I propose to separate the two code bases from each other by
creating a new Hadoop git repository (apache/hadoop-ozone.git).

By moving Ozone to a separate git repository:

  * It would be easier to contribute to and understand the build (as of now
we always need `-f pom.ozone.xml` as a Maven parameter).
  * It would be possible to adjust the build process without breaking
Hadoop/Ozone builds.
  * It would be possible to use different Readme/.asf.yaml/github
templates for Hadoop Ozone and core Hadoop. (For example, the current
github template [2] has a link to the contribution guideline [3]. Ozone
has an extended version [4] of this guideline with additional
information.)
  * Testing would be safer, as it won't be possible to change core
Hadoop and Hadoop Ozone in the same patch.
  * It would be easier to cut branches for Hadoop releases (based on the
original consensus, Ozone should be removed from all the release
branches after creating release branches from trunk).


What do you think?

Thanks,
Marton

[1]: https://lists.apache.org/thread.html/c85e5263dcc0ca1d13cbbe3bcfb53236784a39111b8c353f60582eb4@%3Chdfs-dev.hadoop.apache.org%3E
[2]: https://github.com/apache/hadoop/blob/trunk/.github/pull_request_template.md
[3]: https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[4]: https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute+to+Ozone


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org








-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: 

[jira] [Created] (HADOOP-16593) Polish the protobuf plugin for hadoop-yarn-csi

2019-09-23 Thread Duo Zhang (Jira)
Duo Zhang created HADOOP-16593:
--

 Summary: Polish the protobuf plugin for hadoop-yarn-csi
 Key: HADOOP-16593
 URL: https://issues.apache.org/jira/browse/HADOOP-16593
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Duo Zhang


As discussed here:

https://github.com/apache/hadoop/pull/1496#discussion_r326931072

We should align the execution id in the parent pom.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org