[jira] [Created] (HDFS-12245) Update INodeId javadoc

2017-08-01 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-12245:
--

 Summary: Update INodeId javadoc
 Key: HDFS-12245
 URL: https://issues.apache.org/jira/browse/HDFS-12245
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Wei-Chiu Chuang


The INodeId javadoc states that ids 1 to 1000 are reserved and the root inode id 
starts from 1001. That is no longer true after HDFS-4434.

Also, this looks a little weird in INodeId:
{code}
  public static final long LAST_RESERVED_ID = 2 << 14 - 1;
  public static final long ROOT_INODE_ID = LAST_RESERVED_ID + 1;
{code}
It seems the intent was for LAST_RESERVED_ID to be (2 << 14) - 1 = 32767. But due 
to Java operator precedence ('-' binds more tightly than '<<'), it actually 
evaluates as 2 << (14 - 1) = 16384. Maybe it doesn't matter, not sure.
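
For illustration, a tiny sketch (not part of the JIRA) showing the two evaluations:
{code}
// Minimal demonstration of the precedence difference (illustrative only).
public class ShiftPrecedence {
  public static void main(String[] args) {
    System.out.println(2 << 14 - 1);   // 16384: '-' binds tighter, i.e. 2 << 13
    System.out.println((2 << 14) - 1); // 32767: presumably the intended value
  }
}
{code}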






Re: About HDFS consistency model

2017-08-01 Thread Hongxu Ma
Thanks again, Ravi!

These links are helpful and a good starting point for me to trace this topic; maybe 
I can find more useful info there. Thanks!

On 01/08/2017 04:09, Ravi Prakash wrote:
Hi Hongxu!

It might be best to go through JIRA comments. Some of them have design docs 
too. e.g. https://issues.apache.org/jira/browse/HDFS-265 , 
https://issues.apache.org/jira/browse/HDFS-744 etc.

I know there may be some windy conversations, but the good thing about 
open-source is that all these deliberations are for everyone to see.

HTH
Ravi.

On Mon, Jul 31, 2017 at 3:13 AM, Hongxu Ma 
> wrote:
Hi Dev
In short, is there a full introduction to the HDFS consistency model?

I have already read some miscellaneous material, e.g.:
- some simple scenarios
- read+read, ok
- write+write, forbidden (guaranteed by the lease)
- design docs of append, truncate
- hflush/hsync
- the top Google results for "hdfs consistency model"

But I didn't find a detailed doc/link that clarifies this topic, especially how 
concurrent read+write is handled, e.g.: can a later reader read the contents that a 
writer (which hasn't closed the file yet) has appended? Are hflush/hsync the only 
ways to ensure consistency (see the sketch below)? etc.

I know the GFS paper has a specific section that clarifies this topic, so I think 
HDFS may have a similar one.
Many thanks for any helpful links.
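
For concreteness, a minimal sketch (assuming the standard FileSystem / 
FSDataOutputStream APIs) of how a writer makes appended data visible to concurrent 
readers via hflush/hsync:
{code}
// Illustrative sketch: visibility of appended data to concurrent readers.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HflushExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path("/tmp/visibility-demo.txt");
    try (FSDataOutputStream out = fs.create(p)) {
      out.writeBytes("first batch\n");
      // hflush: flushes client buffers so new readers can see the data,
      // but does not guarantee the bytes are on datanode disks.
      out.hflush();
      // hsync: like hflush, and additionally asks datanodes to persist to disk.
      out.hsync();
      // Anything written after this point is not guaranteed to be visible to
      // readers until the next hflush/hsync or close().
      out.writeBytes("second batch\n");
    }
  }
}
{code}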


--
Regards,
Hongxu.



--
Regards,
Hongxu.


[jira] [Resolved] (HDFS-5465) Update the package names for hsftp / hftp in the documentation

2017-08-01 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai resolved HDFS-5465.
--
Resolution: Not A Problem

This is no longer an issue as hftp / hsftp have been deprecated.

> Update the package names for hsftp / hftp in the documentation
> --
>
> Key: HDFS-5465
> URL: https://issues.apache.org/jira/browse/HDFS-5465
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Haohui Mai
>Assignee: Haohui Mai
>Priority: Minor
>
> HDFS-5436 moved HftpFileSystem and HsftpFileSystem to a different package. The 
> documentation should be updated as well.






Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)

2017-08-01 Thread Ye Zhou
Hi, Konstantin.
Thanks for leading the release.

+1 (non-binding)

-Built from the source on Mac with jdk_1.8.0_40
-Built on internal Jenkins with jdk_1.8.0_40.
-Deployed on a cluster with 121 nodes (including RM, NM, NN, DN)
-Basic shell commands
-Distcp 3.7TB data to HDFS
-Ran GridMix test which submitted 15K MR jobs in 5 hours (trace generated
from a real production job trace)

On Mon, Jul 31, 2017 at 6:57 PM, Konstantin Shvachko 
wrote:

> Uploaded new binaries hadoop-2.7.4-RC0.tar.gz, which adds lib/native/.
> Same place: http://home.apache.org/~shv/hadoop-2.7.4-RC0/
>
> Thanks,
> --Konstantin
>
> On Mon, Jul 31, 2017 at 3:56 PM, Chris Douglas 
> wrote:
>
> > On Mon, Jul 31, 2017 at 3:02 PM, Konstantin Shvachko
> >  wrote:
> > > For the packaging, here is the exact phrasing from the cited
> > release-policy
> > > document relevant to binaries:
> > > "As a convenience to users that might not have the appropriate tools to
> > > build a compiled version of the source, binary/bytecode packages MAY be
> > > distributed alongside official Apache releases. In all such cases, the
> > > binary/bytecode package MUST have the same version number as the source
> > > release and MUST only add binary/bytecode files that are the result of
> > > compiling that version of the source code release and its
> dependencies."
> > > I don't think my binary package violates any of these.
> >
> > +1 The PMC VOTE applies to source code, only. If someone wants to
> > rebuild the binary tarball with native libs and replace this one,
> > that's fine.
> >
> > My reading of the above is that source code must be distributed with
> > binaries, not that we omit the source code from binary releases... -C
> >
> > > But I'll upload an additional tar.gz with native bits and no src, as
> you
> > > guys requested.
> > > Will keep it as RC0 as there is no source code change and it comes from
> > the
> > > same build.
> > > Hope this is satisfactory.
> > >
> > > Thanks,
> > > --Konstantin
> > >
> > > On Mon, Jul 31, 2017 at 1:53 PM, Andrew Wang  >
> > > wrote:
> > >
> > >> I agree with Brahma on the two issues flagged (having src in the
> binary
> > >> tarball, missing native libs). These are regressions from prior
> > releases.
> > >>
> > >> As an aside, "we release binaries as a convenience" doesn't relax the
> > >> quality bar. The binaries are linked on our website and distributed
> > through
> > >> official Apache channels. They have to adhere to Apache release
> > >> requirements. And, most users consume our work via Maven dependencies,
> > >> which are binary artifacts.
> > >>
> > >> http://www.apache.org/legal/release-policy.html goes into this in
> more
> > >> detail. A release must minimally include source packages, and can also
> > >> include binary artifacts.
> > >>
> > >> Best,
> > >> Andrew
> > >>
> > >> On Mon, Jul 31, 2017 at 12:30 PM, Konstantin Shvachko <
> > >> shv.had...@gmail.com> wrote:
> > >>
> > >>> To avoid any confusion in this regard. I built RC0 manually in
> > compliance
> > >>> with Apache release policy
> > >>> http://www.apache.org/legal/release-policy.html
> > >>> I edited the HowToReleasePreDSBCR page to make sure people don't use
> > >>> Jenkins option for building.
> > >>>
> > >>> A side note. This particular build is broken anyways, so no worries
> > there.
> > >>> I think though it would be useful to have it working for testing and
> > as a
> > >>> packaging standard.
> > >>>
> > >>> Thanks,
> > >>> --Konstantin
> > >>>
> > >>> On Mon, Jul 31, 2017 at 11:40 AM, Allen Wittenauer <
> > >>> a...@effectivemachines.com
> > >>> > wrote:
> > >>>
> > >>> >
> > >>> > > On Jul 31, 2017, at 11:20 AM, Konstantin Shvachko <
> > >>> shv.had...@gmail.com>
> > >>> > wrote:
> > >>> > >
> > >>> > > https://wiki.apache.org/hadoop/HowToReleasePreDSBCR
> > >>> >
> > >>> > FYI:
> > >>> >
> > >>> > If you are using ASF Jenkins to create an ASF
> release
> > >>> > artifact, it's pretty much an automatic vote failure as any such
> > >>> release is
> > >>> > in violation of ASF policy.
> > >>> >
> > >>> >
> > >>>
> > >>
> > >>
> >
>



-- 

*Zhou, Ye  **周晔*


[jira] [Created] (HDFS-12244) Ozone: the static cache provided by ContainerCache does not work in Unit tests

2017-08-01 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-12244:
--

 Summary: Ozone: the static cache provided by ContainerCache does 
not work in Unit tests 
 Key: HDFS-12244
 URL: https://issues.apache.org/jira/browse/HDFS-12244
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ozone
Reporter: Tsz Wo Nicholas Sze


Since a cluster may have more than one datanode, a static ContainerCache is shared 
among the datanodes.  When one datanode shuts down, the cache is shut down as well, 
so the other datanodes can no longer use it.  This results in 
"leveldb.DBException: Closed":

{code}
org.iq80.leveldb.DBException: Closed
at org.fusesource.leveldbjni.internal.JniDB.get(JniDB.java:75)
at org.apache.hadoop.utils.LevelDBStore.get(LevelDBStore.java:109)
at 
org.apache.hadoop.ozone.container.common.impl.KeyManagerImpl.getKey(KeyManagerImpl.java:116)
at 
org.apache.hadoop.ozone.container.common.impl.Dispatcher.handleGetSmallFile(Dispatcher.java:677)
at 
org.apache.hadoop.ozone.container.common.impl.Dispatcher.smallFileHandler(Dispatcher.java:293)
at 
org.apache.hadoop.ozone.container.common.impl.Dispatcher.dispatch(Dispatcher.java:121)
at 
org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatch(ContainerStateMachine.java:94)
...
{code}
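
For context, a purely hypothetical sketch (illustrative names, not the real 
ContainerCache API) of the underlying hazard: a static, JVM-wide cache means the 
first datanode to shut down closes handles the remaining datanodes still depend on.
{code}
// Hypothetical sketch of the shared-static-cache hazard (names are illustrative).
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SharedCacheHazard {
  // One static map shared by every "datanode" running in the same JVM.
  private static final Map<String, AutoCloseable> CACHE = new ConcurrentHashMap<>();

  static void register(String container, AutoCloseable db) {
    CACHE.put(container, db);
  }

  // Called when ANY datanode shuts down: it closes every cached handle, so the
  // remaining datanodes hit "DBException: Closed" on their next access.
  static void shutdownAll() throws Exception {
    for (AutoCloseable db : CACHE.values()) {
      db.close();
    }
    CACHE.clear();
  }

  public static void main(String[] args) throws Exception {
    register("container-1", () -> System.out.println("db1 closed"));
    register("container-2", () -> System.out.println("db2 closed"));
    shutdownAll(); // datanode 1 stops; datanode 2's handle is now closed too
  }
}
{code}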







[jira] [Created] (HDFS-12243) Trash emptier should use Time.monotonicNow()

2017-08-01 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-12243:
--

 Summary: Trash emptier should use Time.monotonicNow()
 Key: HDFS-12243
 URL: https://issues.apache.org/jira/browse/HDFS-12243
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Minor
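
For context, a minimal sketch (assuming the org.apache.hadoop.util.Time utility) of 
why a monotonic clock is preferable for measuring the emptier's intervals: wall-clock 
time can jump (e.g. NTP adjustments or manual changes), while monotonicNow() cannot.
{code}
// Illustrative only: measuring an interval with a monotonic clock.
import org.apache.hadoop.util.Time;

public class EmptierIntervalSketch {
  public static void main(String[] args) throws InterruptedException {
    long start = Time.monotonicNow();   // not affected by system clock steps
    Thread.sleep(100);                  // stand-in for one emptier cycle
    long elapsedMs = Time.monotonicNow() - start;
    // With Time.now() / System.currentTimeMillis(), a clock step could make
    // elapsedMs negative or far too large, breaking the emptier's scheduling.
    System.out.println("elapsed = " + elapsedMs + " ms");
  }
}
{code}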









[jira] [Created] (HDFS-12242) Limit the protocols on client and service RPC ports

2017-08-01 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-12242:


 Summary: Limit the protocols on client and service RPC ports
 Key: HDFS-12242
 URL: https://issues.apache.org/jira/browse/HDFS-12242
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0-beta1
Reporter: Arpit Agarwal
Assignee: Arpit Agarwal


This Jira is to address the following comments from [~xyao] on HDFS-10391.

bq. Line 433-463 (original NamenodeRPCServer.java): should we remove the 
service-related RPCs from the client RPC server as we do not allow fallback 
bq. Line 343-345: should we ensure only the service RPCs are added/set here? 

Full [comment 
here|https://issues.apache.org/jira/browse/HDFS-10391?focusedCommentId=16065636=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16065636].






[jira] [Created] (HDFS-12241) HttpFS to support overloaded FileSystem#rename API

2017-08-01 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-12241:
--

 Summary: HttpFS to support overloaded FileSystem#rename API
 Key: HDFS-12241
 URL: https://issues.apache.org/jira/browse/HDFS-12241
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: httpfs
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


HttpFS is essentially the parity implementation of WebHDFS, but it does not implement 
{{FileSystem#rename(final Path src, final Path dst, final Rename... options)}}, 
which means it does not support trash.
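
For reference, a small sketch (not from the JIRA) of the overloaded rename semantics 
in question, using the public FileContext API; Options.Rename.OVERWRITE is used here, 
and Options.Rename.TO_TRASH (available in recent branches) is what enables trash 
support:
{code}
// Illustrative sketch: rename with options via FileContext (assumptions noted above).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Options;
import org.apache.hadoop.fs.Path;

public class RenameWithOptions {
  public static void main(String[] args) throws Exception {
    FileContext fc = FileContext.getFileContext(new Configuration());
    // Overwrite the destination if it already exists; a plain rename(src, dst)
    // would fail when dst exists.
    fc.rename(new Path("/user/foo/a"), new Path("/user/foo/b"),
        Options.Rename.OVERWRITE);
  }
}
{code}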






[jira] [Created] (HDFS-12240) Document WebHDFS rename API parameter renameoptions

2017-08-01 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-12240:
--

 Summary: Document WebHDFS rename API parameter renameoptions
 Key: HDFS-12240
 URL: https://issues.apache.org/jira/browse/HDFS-12240
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Wei-Chiu Chuang


The {{FileSystem#rename}} API has an overloaded version that carries an extra 
parameter, exposed in WebHDFS as "renameoptions". The extra parameter can be used to 
support trash or overwriting.

The WebHDFS REST API does not document this parameter, so I am filing this jira to 
get it documented.






[jira] [Created] (HDFS-12239) Ozone: OzoneClient : Remove createContainer handling from client

2017-08-01 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-12239:
---

 Summary: Ozone: OzoneClient : Remove createContainer handling from 
client
 Key: HDFS-12239
 URL: https://issues.apache.org/jira/browse/HDFS-12239
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
 Fix For: HDFS-7240


In HDFS-12178, we have committed some special handling of creating containers. 
This is not needed in the long run. This JIRA tracks removing that.






[jira] [Created] (HDFS-12238) Ozone: Add valid trace ID check in sendCommandAsync

2017-08-01 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-12238:
---

 Summary: Ozone: Add valid trace ID check in sendCommandAsync
 Key: HDFS-12238
 URL: https://issues.apache.org/jira/browse/HDFS-12238
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer


In the function {{XceiverClientHandler#sendCommandAsync}} we should add a check 
{code}
    if (StringUtils.isEmpty(request.getTraceID())) {
      throw new IllegalArgumentException("Invalid trace ID");
    }
{code}

to ensure that Ozone clients always send a valid trace ID. However, when we do that, 
a set of current tests that do not add a valid trace ID will fail, so we need to fix 
those tests too.

{code}
  TestContainerMetrics.testContainerMetrics
  TestOzoneContainer.testBothGetandPutSmallFile
  TestOzoneContainer.testCloseContainer
  TestOzoneContainer.testOzoneContainerViaDataNode
  TestOzoneContainer.testXcieverClientAsync
  TestOzoneContainer.testCreateOzoneContainer
  TestOzoneContainer.testDeleteContainer
  TestContainerServer.testClientServer
  TestContainerServer.testClientServerWithContainerDispatcher
  TestKeys.testPutAndGetKeyWithDnRestart
{code}

This is based on a comment from [~vagarychen] in HDFS-11580.






Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)

2017-08-01 Thread Edwina Lu
+1 (non-binding). 

Thanks,

Edwina

On 7/31/17, 6:57 PM, "Konstantin Shvachko"  wrote:

Uploaded new binaries hadoop-2.7.4-RC0.tar.gz, which adds lib/native/.
Same place: http://home.apache.org/~shv/hadoop-2.7.4-RC0/

Thanks,
--Konstantin

On Mon, Jul 31, 2017 at 3:56 PM, Chris Douglas  wrote:

> On Mon, Jul 31, 2017 at 3:02 PM, Konstantin Shvachko
>  wrote:
> > For the packaging, here is the exact phrasing from the cited
> release-policy
> > document relevant to binaries:
> > "As a convenience to users that might not have the appropriate tools to
> > build a compiled version of the source, binary/bytecode packages MAY be
> > distributed alongside official Apache releases. In all such cases, the
> > binary/bytecode package MUST have the same version number as the source
> > release and MUST only add binary/bytecode files that are the result of
> > compiling that version of the source code release and its dependencies."
> > I don't think my binary package violates any of these.
>
> +1 The PMC VOTE applies to source code, only. If someone wants to
> rebuild the binary tarball with native libs and replace this one,
> that's fine.
>
> My reading of the above is that source code must be distributed with
> binaries, not that we omit the source code from binary releases... -C
>
> > But I'll upload an additional tar.gz with native bits and no src, as you
> > guys requested.
> > Will keep it as RC0 as there is no source code change and it comes from
> the
> > same build.
> > Hope this is satisfactory.
> >
> > Thanks,
> > --Konstantin
> >
> > On Mon, Jul 31, 2017 at 1:53 PM, Andrew Wang 
> > wrote:
> >
> >> I agree with Brahma on the two issues flagged (having src in the binary
> >> tarball, missing native libs). These are regressions from prior
> releases.
> >>
> >> As an aside, "we release binaries as a convenience" doesn't relax the
> >> quality bar. The binaries are linked on our website and distributed
> through
> >> official Apache channels. They have to adhere to Apache release
> >> requirements. And, most users consume our work via Maven dependencies,
> >> which are binary artifacts.
> >>
> >> http://www.apache.org/legal/release-policy.html goes into this in more
> >> detail. A release must minimally include source packages, and can also
> >> include binary artifacts.
> >>
> >> Best,
> >> Andrew
> >>
> >> On Mon, Jul 31, 2017 at 12:30 PM, Konstantin Shvachko <
> >> shv.had...@gmail.com> wrote:
> >>
> >>> To avoid any confusion in this regard. I built RC0 manually in
> compliance
> >>> with Apache release policy
> >>> http://www.apache.org/legal/release-policy.html
> >>> I edited the HowToReleasePreDSBCR page to make sure people don't use
> >>> Jenkins option for building.
> >>>
> >>> A side note. This particular build is broken anyways, so no worries
> there.
> >>> I think though it would be useful to have it working for testing and
> as a
> >>> packaging standard.
> >>>
> >>> Thanks,
> >>> --Konstantin
> >>>
> >>> On Mon, Jul 31, 2017 at 11:40 AM, Allen Wittenauer <
> >>> a...@effectivemachines.com
> >>> > wrote:
> >>>
> >>> >
> >>> > > On Jul 31, 2017, at 11:20 AM, Konstantin Shvachko <
> >>> shv.had...@gmail.com>
> >>> > wrote:
> >>> > >
> >>> > > https://wiki.apache.org/hadoop/HowToReleasePreDSBCR
> >>> >
> >>> > FYI:
> >>> >
> >>> > If you are using ASF Jenkins to create an ASF 
release
> >>> > artifact, it's pretty much an automatic vote failure as any such
> >>> release is
> >>> > in violation of ASF policy.
> >>> >
> >>> >
> >>>
> >>
> >>
>




[jira] [Resolved] (HDFS-11459) testSetrepDecreasing UT fails due to timeout error

2017-08-01 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-11459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-11459.
--
Resolution: Cannot Reproduce

[~yeshavora], haven't seen this in a while and we cannot repro it locally. Also 
don't see it fail in Jenkins runs, so I am resolving it.

Please reactivate if you see this test fail again.

> testSetrepDecreasing UT fails due to timeout error
> --
>
> Key: HDFS-11459
> URL: https://issues.apache.org/jira/browse/HDFS-11459
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Yesha Vora
> Attachments: testSetrepDecreasing.log
>
>
> {code}
> Error Message
> test timed out after 12 milliseconds
> Stacktrace
> java.lang.Exception: test timed out after 12 milliseconds
>   at java.lang.Thread.sleep(Native Method)
>   at 
> org.apache.hadoop.fs.shell.SetReplication.waitForReplication(SetReplication.java:127)
>   at 
> org.apache.hadoop.fs.shell.SetReplication.processArguments(SetReplication.java:77)
>   at 
> org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
>   at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
>   at org.apache.hadoop.fs.FsShell.run(FsShell.java:297)
>   at 
> org.apache.hadoop.hdfs.TestSetrepIncreasing.setrep(TestSetrepIncreasing.java:58)
>   at 
> org.apache.hadoop.hdfs.TestSetrepDecreasing.testSetrepDecreasing(TestSetrepDecreasing.java:27){code}






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2017-08-01 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/481/

[Jul 31, 2017 8:00:10 AM] (aajisaka) HADOOP-14677. mvn clean compile fails. 
Contributed by Andras Bokor.
[Jul 31, 2017 6:33:55 PM] (arp) HDFS-12082. BlockInvalidateLimit value is 
incorrectly set after namenode
[Jul 31, 2017 9:09:16 PM] (junping_du) Revert "MAPREDUCE-5875. Make Counter 
limits consistent across JobClient,
[Jul 31, 2017 10:07:22 PM] (wang) HADOOP-14420. generateReports property is not 
applicable for
[Jul 31, 2017 10:09:34 PM] (wang) HADOOP-14644. Increase max heap size of Maven 
javadoc plugin.
[Aug 1, 2017 12:02:44 AM] (arp) HDFS-12154. Incorrect javadoc description in
[Aug 1, 2017 1:53:32 AM] (aajisaka) YARN-6873. Moving logging APIs over to 
slf4j in
[Aug 1, 2017 3:03:43 AM] (aw) HADOOP-14343. Wrong pid file name in error 
message when starting secure
[Aug 1, 2017 3:12:40 AM] (lei) HADOOP-14397. Pull up the builder pattern to 
FileSystem and add
[Aug 1, 2017 3:15:03 AM] (aajisaka) Revert "YARN-6873. Moving logging APIs over 
to slf4j in
[Aug 1, 2017 5:56:42 AM] (aajisaka) MAPREDUCE-6921. 
TestUmbilicalProtocolWithJobToken#testJobTokenRpc fails.
[Aug 1, 2017 6:15:43 AM] (aajisaka) HADOOP-14245. Use Mockito.when instead of 
Mockito.stub. Contributed by




-1 overall


The following subsystems voted -1:
findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs-client 
   Possible exposure of partially initialized object in 
org.apache.hadoop.hdfs.DFSClient.initThreadsNumForStripedReads(int) At 
DFSClient.java:object in 
org.apache.hadoop.hdfs.DFSClient.initThreadsNumForStripedReads(int) At 
DFSClient.java:[line 2906] 
   org.apache.hadoop.hdfs.server.protocol.SlowDiskReports.equals(Object) 
makes inefficient use of keySet iterator instead of entrySet iterator At 
SlowDiskReports.java:keySet iterator instead of entrySet iterator At 
SlowDiskReports.java:[line 105] 

FindBugs :

   module:hadoop-hdfs-project/hadoop-hdfs 
   Possible null pointer dereference in 
org.apache.hadoop.hdfs.qjournal.server.JournalNode.getJournalsStatus() due to 
return value of called method Dereferenced at 
JournalNode.java:org.apache.hadoop.hdfs.qjournal.server.JournalNode.getJournalsStatus()
 due to return value of called method Dereferenced at JournalNode.java:[line 
302] 
   
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setClusterId(String)
 unconditionally sets the field clusterId At HdfsServerConstants.java:clusterId 
At HdfsServerConstants.java:[line 193] 
   
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForce(int)
 unconditionally sets the field force At HdfsServerConstants.java:force At 
HdfsServerConstants.java:[line 217] 
   
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setForceFormat(boolean)
 unconditionally sets the field isForceFormat At 
HdfsServerConstants.java:isForceFormat At HdfsServerConstants.java:[line 229] 
   
org.apache.hadoop.hdfs.server.common.HdfsServerConstants$StartupOption.setInteractiveFormat(boolean)
 unconditionally sets the field isInteractiveFormat At 
HdfsServerConstants.java:isInteractiveFormat At HdfsServerConstants.java:[line 
237] 
   Possible null pointer dereference in 
org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocksHelper(File, File, 
int, HardLink, boolean, File, List) due to return value of called method 
Dereferenced at 
DataStorage.java:org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocksHelper(File,
 File, int, HardLink, boolean, File, List) due to return value of called method 
Dereferenced at DataStorage.java:[line 1339] 
   Possible null pointer dereference in 
org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldLegacyOIVImages(String,
 long) due to return value of called method Dereferenced at 
NNStorageRetentionManager.java:org.apache.hadoop.hdfs.server.namenode.NNStorageRetentionManager.purgeOldLegacyOIVImages(String,
 long) due to return value of called method Dereferenced at 
NNStorageRetentionManager.java:[line 258] 
   Possible null pointer dereference in 
org.apache.hadoop.hdfs.server.namenode.NNUpgradeUtil$1.visitFile(Path, 
BasicFileAttributes) due to return value of called method Dereferenced at 
NNUpgradeUtil.java:org.apache.hadoop.hdfs.server.namenode.NNUpgradeUtil$1.visitFile(Path,
 BasicFileAttributes) due to return value of called method Dereferenced at 
NNUpgradeUtil.java:[line 133] 
   Useless condition:argv.length >= 1 at this point At DFSAdmin.java:[line 
2100] 
   Useless condition:numBlocks == -1 at this point At 
ImageLoaderCurrent.java:[line 727] 


[jira] [Created] (HDFS-12237) libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf library is built from source

2017-08-01 Thread Anatoli Shein (JIRA)
Anatoli Shein created HDFS-12237:


 Summary: libhdfs++: PROTOC_IS_COMPATIBLE check fails if protobuf 
library is built from source
 Key: HDFS-12237
 URL: https://issues.apache.org/jira/browse/HDFS-12237
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Anatoli Shein


Looks like the PROTOC_IS_COMPATIBLE check fails when the Protobuf library is built 
from source. This happens because the check is performed during the cmake phase, 
while the protobuf library needed for this test is built from source only during the 
make phase, so the check fails with "ld: cannot find -lprotobuf" because the library 
has not been built yet. We should probably restrict this test to run only when the 
Protobuf library is already present and not being built from source.






When permission is disabled, why setOwner() && setPermission() still check the permission?

2017-08-01 Thread Brahma Reddy Battula
Hi All

Why is the "dfs.permissions.enabled" flag not considered for the setOwner() and 
setPermission() checks?

And why is superuser privilege required for setOwner()? Is that check required even 
when permissions are disabled?

Any ideas on this?



Thanks
Brahma Reddy Battula



[jira] [Reopened] (HDFS-8298) HA: NameNode should not shut down completely without quorum, doesn't recover from temporary network outages

2017-08-01 Thread Hari Sekhon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Sekhon reopened HDFS-8298:
---

> HA: NameNode should not shut down completely without quorum, doesn't recover 
> from temporary network outages
> ---
>
> Key: HDFS-8298
> URL: https://issues.apache.org/jira/browse/HDFS-8298
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ha, namenode, qjm
>Affects Versions: 2.6.0
>Reporter: Hari Sekhon
>
> In an HDFS HA setup, if there is a temporary problem contacting the journal 
> nodes (e.g. a network interruption), the NameNode shuts down entirely, when it 
> should instead go into a standby mode so that it can stay online and retry to 
> achieve quorum later.
> If both NameNodes shut themselves off like this then even after the temporary 
> network outage is resolved, the entire cluster remains offline indefinitely 
> until operator intervention, whereas it could have self-repaired after 
> re-contacting the journalnodes and re-achieving quorum.
> {code}2015-04-15 15:59:26,900 FATAL namenode.FSEditLog 
> (JournalSet.java:mapJournalsAndReportErrors(398)) - Error: flush failed for 
> required journal (JournalAndStre
> am(mgr=QJM to [:8485, :8485, :8485], stream=QuorumOutputStream 
> starting at txid 54270281))
> java.io.IOException: Interrupted waiting 2ms for a quorum of nodes to 
> respond.
> at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:134)
> at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumOutputStream.flushAndSync(QuorumOutputStream.java:107)
> at 
> org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:113)
> at 
> org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:107)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream$8.apply(JournalSet.java:533)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.access$100(JournalSet.java:57)
> at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream.flush(JournalSet.java:529)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:639)
> at 
> org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor.run(LeaseManager.java:388)
> at java.lang.Thread.run(Thread.java:745)
> 2015-04-15 15:59:26,901 WARN  client.QuorumJournalManager 
> (QuorumOutputStream.java:abort(72)) - Aborting QuorumOutputStream starting at 
> txid 54270281
> 2015-04-15 15:59:26,904 INFO  util.ExitUtil (ExitUtil.java:terminate(124)) - 
> Exiting with status 1
> 2015-04-15 15:59:27,001 INFO  namenode.NameNode (StringUtils.java:run(659)) - 
> SHUTDOWN_MSG:
> /
> SHUTDOWN_MSG: Shutting down NameNode at /
> /{code}
> Hari Sekhon
> http://www.linkedin.com/in/harisekhon






RE: [VOTE] Release Apache Hadoop 2.7.4 (RC0)

2017-08-01 Thread Brahma Reddy Battula
Hi Konstantin,

Thanks a lot again for your efforts.

+1  (non-binding)

-Built from the source on Suse-Linux with jdk_ 1.8.0_40
-Installed the HA cluster
-Verified basic shell commands
-Ran sample jobs
-Did the regression on IBR Feature, Balancer/mover,fsck

Downloaded the latest tarball; it contains the natives and the NodeManager can start 
(NM will not start without the natives). I also installed a pseudo-distributed 
cluster and did basic verification.

IMHO, we should include the natives in the tarball for user convenience (users who 
don't have build tools can use it for quick regression testing).

As we separately provide hadoop-2.7.4-RC0-src.tar.gz, do we still need to include 
/src in the binary tarball?

As Andrew mentioned, these two are regressions from prior releases.


--Brahma Reddy Battula

-Original Message-
From: Konstantin Shvachko [mailto:shv.had...@gmail.com] 
Sent: 01 August 2017 09:57
To: Chris Douglas
Cc: Andrew Wang; Allen Wittenauer; common-...@hadoop.apache.org; 
hdfs-dev@hadoop.apache.org; mapreduce-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.7.4 (RC0)

Uploaded new binaries hadoop-2.7.4-RC0.tar.gz, which adds lib/native/.
Same place: http://home.apache.org/~shv/hadoop-2.7.4-RC0/

Thanks,
--Konstantin

On Mon, Jul 31, 2017 at 3:56 PM, Chris Douglas  wrote:

> On Mon, Jul 31, 2017 at 3:02 PM, Konstantin Shvachko 
>  wrote:
> > For the packaging, here is the exact phrasing from the cited
> release-policy
> > document relevant to binaries:
> > "As a convenience to users that might not have the appropriate tools 
> > to build a compiled version of the source, binary/bytecode packages 
> > MAY be distributed alongside official Apache releases. In all such 
> > cases, the binary/bytecode package MUST have the same version number 
> > as the source release and MUST only add binary/bytecode files that 
> > are the result of compiling that version of the source code release and its 
> > dependencies."
> > I don't think my binary package violates any of these.
>
> +1 The PMC VOTE applies to source code, only. If someone wants to
> rebuild the binary tarball with native libs and replace this one, 
> that's fine.
>
> My reading of the above is that source code must be distributed with 
> binaries, not that we omit the source code from binary releases... -C
>
> > But I'll upload an additional tar.gz with native bits and no src, as 
> > you guys requested.
> > Will keep it as RC0 as there is no source code change and it comes 
> > from
> the
> > same build.
> > Hope this is satisfactory.
> >
> > Thanks,
> > --Konstantin
> >
> > On Mon, Jul 31, 2017 at 1:53 PM, Andrew Wang 
> > 
> > wrote:
> >
> >> I agree with Brahma on the two issues flagged (having src in the 
> >> binary tarball, missing native libs). These are regressions from 
> >> prior
> releases.
> >>
> >> As an aside, "we release binaries as a convenience" doesn't relax 
> >> the quality bar. The binaries are linked on our website and 
> >> distributed
> through
> >> official Apache channels. They have to adhere to Apache release 
> >> requirements. And, most users consume our work via Maven 
> >> dependencies, which are binary artifacts.
> >>
> >> http://www.apache.org/legal/release-policy.html goes into this in 
> >> more detail. A release must minimally include source packages, and 
> >> can also include binary artifacts.
> >>
> >> Best,
> >> Andrew
> >>
> >> On Mon, Jul 31, 2017 at 12:30 PM, Konstantin Shvachko < 
> >> shv.had...@gmail.com> wrote:
> >>
> >>> To avoid any confusion in this regard. I built RC0 manually in
> compliance
> >>> with Apache release policy
> >>> http://www.apache.org/legal/release-policy.html
> >>> I edited the HowToReleasePreDSBCR page to make sure people don't 
> >>> use Jenkins option for building.
> >>>
> >>> A side note. This particular build is broken anyways, so no 
> >>> worries
> there.
> >>> I think though it would be useful to have it working for testing 
> >>> and
> as a
> >>> packaging standard.
> >>>
> >>> Thanks,
> >>> --Konstantin
> >>>
> >>> On Mon, Jul 31, 2017 at 11:40 AM, Allen Wittenauer < 
> >>> a...@effectivemachines.com
> >>> > wrote:
> >>>
> >>> >
> >>> > > On Jul 31, 2017, at 11:20 AM, Konstantin Shvachko <
> >>> shv.had...@gmail.com>
> >>> > wrote:
> >>> > >
> >>> > > https://wiki.apache.org/hadoop/HowToReleasePreDSBCR
> >>> >
> >>> > FYI:
> >>> >
> >>> > If you are using ASF Jenkins to create an ASF 
> >>> > release artifact, it's pretty much an automatic vote failure as 
> >>> > any such
> >>> release is
> >>> > in violation of ASF policy.
> >>> >
> >>> >
> >>>
> >>
> >>
>


[jira] [Resolved] (HDFS-12236) FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL

2017-08-01 Thread Lantao Jin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-12236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lantao Jin resolved HDFS-12236.
---
Resolution: Duplicate

Opened a {{Hadoop}} issue, 
[HADOOP-14708|https://issues.apache.org/jira/browse/HADOOP-14708], instead of this 
{{HDFS}} one.

> FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL
> ---
>
> Key: HDFS-12236
> URL: https://issues.apache.org/jira/browse/HDFS-12236
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.3, 2.8.1, 3.0.0-alpha3
>Reporter: Lantao Jin
>
> FSCK started by xx (auth:KERBEROS_SSL) failed with exception msg "fsck 
> encountered internal errors!"
> FSCK uses FsckServlet to submit an RPC to the NameNode; it uses {{KERBEROS_SSL}} as 
> its {{AuthenticationMethod}} in {{JspHelper.java}}
> {code}
>   /** Same as getUGI(context, request, conf, KERBEROS_SSL, true). */
>   public static UserGroupInformation getUGI(ServletContext context,
>   HttpServletRequest request, Configuration conf) throws IOException {
> return getUGI(context, request, conf, AuthenticationMethod.KERBEROS_SSL, 
> true);
>   }
> {code}
> But when setting up the SASL connection with the server, KERBEROS_SSL fails to 
> create a SaslClient instance. See {{SaslRpcClient.java}}
> {code}
> private SaslClient createSaslClient(SaslAuth authType)
>   throws SaslException, IOException {
>   
>   case KERBEROS: {
> if (ugi.getRealAuthenticationMethod().getAuthMethod() !=
> AuthMethod.KERBEROS) {
>   return null; // client isn't using kerberos
> }
> {code}






[jira] [Created] (HDFS-12236) FsckServlet can not create SaslRpcClient with auth KERBEROS_SSL

2017-08-01 Thread Lantao Jin (JIRA)
Lantao Jin created HDFS-12236:
-

 Summary: FsckServlet can not create SaslRpcClient with auth 
KERBEROS_SSL
 Key: HDFS-12236
 URL: https://issues.apache.org/jira/browse/HDFS-12236
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: security
Affects Versions: 3.0.0-alpha3, 2.8.1, 2.7.3
Reporter: Lantao Jin


FSCK started by xx (auth:KERBEROS_SSL) failed with exception msg "fsck 
encountered internal errors!"

FSCK uses FsckServlet to submit an RPC to the NameNode; it uses {{KERBEROS_SSL}} as its 
{{AuthenticationMethod}} in {{JspHelper.java}}
{code}
  /** Same as getUGI(context, request, conf, KERBEROS_SSL, true). */
  public static UserGroupInformation getUGI(ServletContext context,
  HttpServletRequest request, Configuration conf) throws IOException {
return getUGI(context, request, conf, AuthenticationMethod.KERBEROS_SSL, 
true);
  }
{code}

But when setting up the SASL connection with the server, KERBEROS_SSL fails to create a 
SaslClient instance. See {{SaslRpcClient.java}}
{code}
private SaslClient createSaslClient(SaslAuth authType)
  throws SaslException, IOException {
  
  case KERBEROS: {
if (ugi.getRealAuthenticationMethod().getAuthMethod() !=
AuthMethod.KERBEROS) {
  return null; // client isn't using kerberos
}
{code}






[jira] [Created] (HDFS-12235) Ozone: DeleteKey-3: KSM SCM block deletion message and ACK interactions

2017-08-01 Thread Weiwei Yang (JIRA)
Weiwei Yang created HDFS-12235:
--

 Summary: Ozone: DeleteKey-3: KSM SCM block deletion message and 
ACK interactions
 Key: HDFS-12235
 URL: https://issues.apache.org/jira/browse/HDFS-12235
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Weiwei Yang
Assignee: Weiwei Yang


This issue covers the KSM and SCM interaction for the delete-key operation. Both KSM 
and SCM store key state info in a backlog; KSM needs to scan this log and send a 
block-deletion command to SCM. Once SCM is fully aware of the message, KSM removes 
the key completely from the namespace.
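
A rough, purely illustrative sketch of that handshake (all names hypothetical; the 
real KSM/SCM interfaces live on the HDFS-7240 branch):
{code}
// Hypothetical sketch of the KSM <-> SCM deletion handshake; names are illustrative.
import java.util.Arrays;
import java.util.List;

public class KsmScmDeletionSketch {

  interface ScmBlockClient {
    // Returns true once SCM has durably recorded the block-deletion command.
    boolean deleteBlocksForKey(String keyName);
  }

  static void processBacklog(List<String> pendingKeys, ScmBlockClient scm) {
    for (String key : pendingKeys) {
      // Only remove the key from the KSM namespace after SCM has acknowledged
      // the block deletion; otherwise keep it in the backlog and retry later.
      if (scm.deleteBlocksForKey(key)) {
        System.out.println("removed from namespace: " + key);
      } else {
        System.out.println("kept in backlog for retry: " + key);
      }
    }
  }

  public static void main(String[] args) {
    processBacklog(Arrays.asList("key-1", "key-2"), key -> !key.equals("key-2"));
  }
}
{code}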


