Re: [VOTE] Release Apache Hadoop 3.1.0 (RC0)

2018-03-28 Thread Ajay Kumar
+1 (non-binding)

- verified binary checksum
- built from source and set up a 4-node cluster
- ran basic HDFS commands
- ran wordcount, pi & TestDFSIO (read/write)

Ajay

On 3/28/18, 5:45 PM, "Jonathan Hung"  wrote:

Hi Wangda, thanks for handling this release.

+1 (non-binding)

- verified binary checksum
- launched single node RM
- verified refreshQueues functionality
  - verified capacity scheduler conf mutation disabled in this case
- verified capacity scheduler conf mutation with leveldb storage
  - verified refreshQueues mutation is disabled in this case


Jonathan Hung

On Thu, Mar 22, 2018 at 9:10 AM, Wangda Tan  wrote:

> Thanks @Bharat for the quick check, the previously staged repository has
> some issues. I re-deployed jars to nexus.
>
> Here's the new repo (1087)
>
> https://repository.apache.org/content/repositories/orgapachehadoop-1087/
>
> Other artifacts remain the same; there are no additional code changes.
>
> On Wed, Mar 21, 2018 at 11:54 PM, Bharat Viswanadham <
> bviswanad...@hortonworks.com> wrote:
>
> > Hi Wangda,
> > The Maven artifact repository does not have all of the Hadoop jars (it is
> > missing many, such as hadoop-hdfs, hadoop-client, etc.).
> > https://repository.apache.org/content/repositories/orgapachehadoop-1086/
> >
> >
> > Thanks,
> > Bharat
> >
> >
> > On 3/21/18, 11:44 PM, "Wangda Tan"  wrote:
> >
> > Hi folks,
> >
> > Thanks to the many who helped with this release since Dec 2017 [1]. We've
> > created RC0 for Apache Hadoop 3.1.0. The artifacts are available here:
> >
> > http://people.apache.org/~wangda/hadoop-3.1.0-RC0/
> >
> > The RC tag in git is release-3.1.0-RC0.
> >
> > The maven artifacts are available via repository.apache.org at
> > https://repository.apache.org/content/repositories/orgapachehadoop-1086/
> >
> > This vote will run 7 days (5 weekdays), ending on Mar 28 at 11:59 pm
> > Pacific.
> >
> > 3.1.0 contains 727 [2] fixed JIRA issues since 3.0.0. Notable additions
> > include first-class GPU/FPGA support on YARN, native services, support for
> > rich placement constraints in YARN, S3-related enhancements, allowing HDFS
> > block replicas to be provided by an external storage system, etc.
> >
> > We’d like to use this as the starting release for 3.1.x [1]; depending on
> > how it goes, we will stabilize it and potentially use a 3.1.1 in several
> > weeks as the stable release.
> >
> > We have done testing with a pseudo-distributed cluster and a distributed
> > shell job. My +1 to start.
> >
> > Best,
> > Wangda/Vinod
> >
> > [1]
> > https://lists.apache.org/thread.html/b3fb3b6da8b6357a68513a6dfd104bc9e19e559aedc5ebedb4ca08c8@%3Cyarn-dev.hadoop.apache.org%3E
> > [2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.0)
> > AND fixVersion not in (3.0.0, 3.0.0-beta1) AND status = Resolved
> > ORDER BY fixVersion ASC
> >
> >
> >
>




[jira] [Resolved] (HDFS-13367) Ozone: ObjectStoreRestPlugin initialization depends on HdslDatanodeService

2018-03-28 Thread DENG FEI (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-13367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

DENG FEI resolved HDFS-13367.
-
Resolution: Invalid

> Ozone: ObjectStoreRestPlugin initialization depends on HdslDatanodeService
> -
>
> Key: HDFS-13367
> URL: https://issues.apache.org/jira/browse/HDFS-13367
> Project: Hadoop HDFS
>  Issue Type: Task
>  Components: ozone
>Affects Versions: HDFS-7240
> Environment: ObjectStoreRestPlugin obtains DatanodeDetails from
> HdslDatanodeService, so it should be initialized after HdslDatanodeService;
> if this order is not followed, a warning should be issued.
>Reporter: DENG FEI
>Priority: Major
>







[jira] [Created] (HDFS-13367) Ozone: ObjectStoreRestPlugin initialization depends on HdslDatanodeService

2018-03-28 Thread DENG FEI (JIRA)
DENG FEI created HDFS-13367:
---

 Summary: Ozone: ObjectStoreRestPlugin initialization depends on HdslDatanodeService
 Key: HDFS-13367
 URL: https://issues.apache.org/jira/browse/HDFS-13367
 Project: Hadoop HDFS
  Issue Type: Task
  Components: ozone
Affects Versions: HDFS-7240
 Environment: ObjectStoreRestPlugin obtains DatanodeDetails from
HdslDatanodeService, so it should be initialized after HdslDatanodeService; if
this order is not followed, a warning should be issued.
Reporter: DENG FEI









Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-03-28 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/420/

[Mar 27, 2018 5:31:03 PM] (inigoiri) HDFS-13352. RBF: Add xsl stylesheet for hdfs-rbf-default.xml.
[Mar 27, 2018 6:21:19 PM] (james.clampffer) HDFS-13338. Update BUILDING.txt for building native libraries.
[Mar 27, 2018 8:33:41 PM] (weichiu) HADOOP-15312. Undocumented KeyProvider configuration keys. Contributed
[Mar 28, 2018 12:39:46 AM] (subru) YARN-8010. Add config in FederationRMFailoverProxy to not bypass facade
[Mar 28, 2018 2:33:18 AM] (sunilg) YARN-8075. DShell does not fail when we ask more GPUs than available
[Mar 28, 2018 3:00:08 AM] (yqlin) HDFS-13347. RBF: Cache datanode reports. Contributed by Inigo Goiri.
[Mar 28, 2018 9:35:38 AM] (wwei) YARN-7734. Fix UT failure




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.conf.TestCommonConfigurationFields 
   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFileUtil 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.fs.TestLocalFileSystem 
   hadoop.fs.TestRawLocalFileSystemContract 
   hadoop.fs.TestSymlinkLocalFSFileContext 
   hadoop.fs.TestTrash 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.metrics2.sink.TestRollingFileSystemSinkWithLocal 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.TestNativeCodeLoader 
   hadoop.util.TestWinUtils 
   hadoop.fs.TestResolveHdfsSymlink 
   hadoop.hdfs.client.impl.TestBlockReaderLocalLegacy 
   hadoop.hdfs.crypto.TestHdfsCryptoStreams 
   hadoop.hdfs.qjournal.client.TestQuorumJournalManager 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.security.TestDelegationTokenForProxyUser 
   hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy 
   
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement 
   
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestWriteToReplica 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockRecovery 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeMetrics 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.datanode.TestHSync 
   hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.mover.TestMover 
   hadoop.hdfs.server.mover.TestStorageMover 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestHASafeMode 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   
hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots 
   hadoop.hdfs.server.namenode.snapshot.TestSnapRootDescendantDiff 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport 
   hadoop.hdfs.server.namenode.TestAddBlock 
   hadoop.hdfs.server.namenode.TestAuditLoggerWithCommands 
   hadoop.hdfs.server.namenode.TestCacheDirectives 
   hadoop.hdfs.server.namenode.TestCheckpoint 
   

Apache Hadoop qbt Report: branch2+JDK7 on Linux/x86

2018-03-28 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-branch2-java7-linux-x86/178/

[Mar 28, 2018 6:44:44 PM] (zezhang) YARN-7623. Fix the CapacityScheduler Queue configuration documentation.
[Mar 28, 2018 6:51:59 PM] (subru) Revert "YARN-8010. Add config in FederationRMFailoverProxy to not bypass
[Mar 28, 2018 6:51:59 PM] (subru) YARN-8010. Add config in FederationRMFailoverProxy to not bypass facade
[Mar 28, 2018 6:59:36 PM] (inigoiri) HDFS-13347. RBF: Cache datanode reports. Contributed by Inigo Goiri.
[Mar 28, 2018 7:04:20 PM] (cdouglas) HADOOP-15320. Remove customized getFileBlockLocations for hadoop-azure
[Mar 28, 2018 8:08:04 PM] (arp) HDFS-13314. NameNode should optionally exit if it detects FsImage




-1 overall


The following subsystems voted -1:
docker


Powered by Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org


Re: [VOTE] Release Apache Hadoop 3.1.0 (RC0)

2018-03-28 Thread Jonathan Hung
Hi Wangda, thanks for handling this release.

+1 (non-binding)

- verified binary checksum
- launched single node RM
- verified refreshQueues functionality
  - verified capacity scheduler conf mutation disabled in this case
- verified capacity scheduler conf mutation with leveldb storage
  - verified refreshQueues mutation is disabled in this case


Jonathan Hung

On Thu, Mar 22, 2018 at 9:10 AM, Wangda Tan  wrote:

> Thanks @Bharat for the quick check, the previously staged repository has
> some issues. I re-deployed jars to nexus.
>
> Here's the new repo (1087)
>
> https://repository.apache.org/content/repositories/orgapachehadoop-1087/
>
> Other artifacts remain the same; there are no additional code changes.
>
> On Wed, Mar 21, 2018 at 11:54 PM, Bharat Viswanadham <
> bviswanad...@hortonworks.com> wrote:
>
> > Hi Wangda,
> > The Maven artifact repository does not have all of the Hadoop jars (it is
> > missing many, such as hadoop-hdfs, hadoop-client, etc.).
> > https://repository.apache.org/content/repositories/orgapachehadoop-1086/
> >
> >
> > Thanks,
> > Bharat
> >
> >
> > On 3/21/18, 11:44 PM, "Wangda Tan"  wrote:
> >
> > Hi folks,
> >
> > Thanks to the many who helped with this release since Dec 2017 [1]. We've
> > created RC0 for Apache Hadoop 3.1.0. The artifacts are available here:
> >
> > http://people.apache.org/~wangda/hadoop-3.1.0-RC0/
> >
> > The RC tag in git is release-3.1.0-RC0.
> >
> > The maven artifacts are available via repository.apache.org at
> > https://repository.apache.org/content/repositories/orgapachehadoop-1086/
> >
> > This vote will run 7 days (5 weekdays), ending on Mar 28 at 11:59 pm
> > Pacific.
> >
> > 3.1.0 contains 727 [2] fixed JIRA issues since 3.0.0. Notable additions
> > include first-class GPU/FPGA support on YARN, native services, support for
> > rich placement constraints in YARN, S3-related enhancements, allowing HDFS
> > block replicas to be provided by an external storage system, etc.
> >
> > We’d like to use this as the starting release for 3.1.x [1]; depending on
> > how it goes, we will stabilize it and potentially use a 3.1.1 in several
> > weeks as the stable release.
> >
> > We have done testing with a pseudo-distributed cluster and a distributed
> > shell job. My +1 to start.
> >
> > Best,
> > Wangda/Vinod
> >
> > [1]
> > https://lists.apache.org/thread.html/b3fb3b6da8b6357a68513a6dfd104bc9e19e559aedc5ebedb4ca08c8@%3Cyarn-dev.hadoop.apache.org%3E
> > [2] project in (YARN, HADOOP, MAPREDUCE, HDFS) AND fixVersion in (3.1.0)
> > AND fixVersion not in (3.0.0, 3.0.0-beta1) AND status = Resolved
> > ORDER BY fixVersion ASC
> >
> >
> >
>


[jira] [Created] (HDFS-13365) RBF: Adding trace support

2018-03-28 Thread JIRA
Íñigo Goiri created HDFS-13365:
--

 Summary: RBF: Adding trace support
 Key: HDFS-13365
 URL: https://issues.apache.org/jira/browse/HDFS-13365
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Íñigo Goiri
Assignee: Íñigo Goiri


We should support HTrace and add spans.






[jira] [Created] (HDFS-13364) RBF: Support NamenodeProtocol in the Router

2018-03-28 Thread JIRA
Íñigo Goiri created HDFS-13364:
--

 Summary: RBF: Support NamenodeProtocol in the Router
 Key: HDFS-13364
 URL: https://issues.apache.org/jira/browse/HDFS-13364
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Íñigo Goiri
Assignee: Íñigo Goiri


The Router should support the NamenodeProtocol to get blocks, versions, etc.






[jira] [Created] (HDFS-13363) Record file path when FSDirAclOp throws AclException

2018-03-28 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-13363:
--

 Summary: Record file path when FSDirAclOp throws AclException
 Key: HDFS-13363
 URL: https://issues.apache.org/jira/browse/HDFS-13363
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang


When an AclTransformation method throws an AclException, it does not record the 
file path that caused the exception. These AclTransformation methods are invoked 
from FSDirAclOp methods, which do know the file path. As a result, when an 
exception is thrown we never learn which file has the invalid ACLs.

The FSDirAclOp methods can catch the AclException and add the file path to the 
error message.
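
As a rough illustration only: the sketch below shows the catch-and-rethrow 
pattern being described. The types here are simplified stand-ins (AclException 
is re-declared locally and the method names are hypothetical, not the actual 
FSDirAclOp/AclTransformation signatures).

{code}
import java.io.IOException;
import java.util.List;

/** Simplified stand-in for org.apache.hadoop.hdfs.protocol.AclException. */
class AclException extends IOException {
  AclException(String message) { super(message); }
  AclException(String message, Throwable cause) { super(message, cause); }
}

/** An ACL transformation step that may reject an invalid ACL spec. */
interface AclTransformation {
  List<String> apply(List<String> existingAcl) throws AclException;
}

class FSDirAclOpSketch {
  /**
   * Runs a transformation for the file at {@code src} and, on failure,
   * re-throws the AclException with the path added to the message so the
   * offending file can be identified from logs.
   */
  static List<String> applyWithPath(String src, List<String> existingAcl,
      AclTransformation transformation) throws AclException {
    try {
      return transformation.apply(existingAcl);
    } catch (AclException e) {
      throw new AclException("ACL error on " + src + ": " + e.getMessage(), e);
    }
  }
}
{code}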






Re: [VOTE] Merging branch HDFS-8707 (native HDFS client) to trunk

2018-03-28 Thread Eric Badger
Thanks for opening up the JIRA! It's pretty common for me to rebuild clean
because I'm doing a lot of development on both trunk and branch-2.8 and am
constantly switching which one I'm compiling.

Eric

On Wed, Mar 28, 2018 at 10:49 AM, Jim Clampffer 
wrote:

> It's certainly possible to add a flag to disable the libhdfs++ build, I
> opened HDFS-13362 for that.  I can put up an initial patch that includes a
> flag for cmake so that you could do -Dnative_cmake_args="skipLibhdfspp"
> or something similar.  I don't have a good understanding of maven and
> antrun so if someone wants to help plumb a maven flag to the cmake flag I'd
> appreciate it.
>
> Is it common to do completely clean builds for interactive use?
> Incremental rebuilds of libhdfs++ shouldn't be adding more than a few
> seconds to the process.
>
> On Tue, Mar 27, 2018 at 3:09 PM, Anu Engineer 
> wrote:
>
>> Would it be possible to add a maven flag like –skipShade, that helps in
>> reducing the compile time for people who do not need to build libhdfs++?
>>
>>
>>
>> Thanks
>> Anu
>>
>>
>>
>>
>>
>> *From: *Jim Clampffer 
>> *Date: *Tuesday, March 27, 2018 at 11:09 AM
>> *To: *Eric Badger 
>> *Cc: *Deepak Majeti , Jitendra Pandey <
>> jiten...@hortonworks.com>, Anu Engineer ,
>> Mukul Kumar Singh , Owen O'Malley <
>> owen.omal...@gmail.com>, Chris Douglas , Hdfs-dev <
>> hdfs-dev@hadoop.apache.org>
>> *Subject: *Re: [VOTE] Merging branch HDFS-8707 (native HDFS client) to
>> trunk
>>
>>
>>
>> Hi Eric,
>>
>> There isn't a way to completely skip compiling libhdfs++ as part of the
>> native build.  You could pass 
>> -Dnative_cmake_args="-DHDFSPP_LIBRARY_ONLY=TRUE"
>> to maven to avoid building all of the libhdfs++ tests, examples, and tools
>> though.  That cut the native client build time from 4:10 to 2:20 for me.
>>
>> -Jim
>>
>>
>>
>> On Tue, Mar 27, 2018 at 12:25 PM, Eric Badger  wrote:
>>
>> Is there a way to skip the libhdfs++ compilation during a native build? I
>> just went to build native trunk to test out some container-executor changes
>> and it spent 7:49 minutes out of 14:31 minutes in Apache Hadoop HDFS Native
>> Client. For me it basically doubled the compilation time.
>>
>>
>>
>> Eric
>>
>>
>>
>> On Fri, Mar 16, 2018 at 4:01 PM, Deepak Majeti 
>> wrote:
>>
>> Thanks for all your hard work on getting this feature (with > 200
>> sub-tasks) in James!
>>
>> On Fri, Mar 16, 2018 at 12:05 PM, Jim Clampffer <
>> james.clampf...@gmail.com>
>> wrote:
>>
>>
>> > With 6 +1s, 0 0s, and 0 -1s the vote passes.  I'll be merging this into
>> > trunk shortly.
>> >
>> > Thanks everyone who participated in the discussion and vote!  And many
>> > thanks to everyone who contributed code and feedback throughout the
>> > development process! Particularly Bob, Anatoli, Xiaowei and Deepak who
>> > provided lots of large pieces of code as well as folks like Owen, Chris
>> D,
>> > Allen, and Stephen W who provided various support and guidance with the
>> > Apache process and project design.
>> >
>> > On Wed, Mar 14, 2018 at 1:32 PM, Jitendra Pandey <
>> jiten...@hortonworks.com
>> > >
>> > wrote:
>> >
>> > > +1 (binding)
>> > >
>> > > On 3/14/18, 9:57 AM, "Anu Engineer" 
>> wrote:
>> > >
>> > > +1 (binding). Thanks for all the hard work and getting this client
>> > > ready.
>> > > It is nice to have an official and supported native client for
>> HDFS.
>> > >
>> > > Thanks
>> > > Anu
>> > >
>> > > On 3/13/18, 8:16 PM, "Mukul Kumar Singh" 
>> > > wrote:
>> > >
>> > > +1 (binding)
>> > >
>> > > Thanks,
>> > > Mukul
>> > >
>> > > On 14/03/18, 2:06 AM, "Owen O'Malley" > >
>> > > wrote:
>> > >
>> > > +1 (binding)
>> > >
>> > > .. Owen
>> > >
>> > > On Sun, Mar 11, 2018 at 6:20 PM, Chris Douglas <
>> > > cdoug...@apache.org> wrote:
>> > >
>> > > > +1 (binding) -C
>> > > >
>> > > > On Thu, Mar 8, 2018 at 9:31 AM, Jim Clampffer <
>> > > james.clampf...@gmail.com>
>> > > > wrote:
>> > > > > Hi Everyone,
>> > > > >
>> > > > > The feedback was generally positive on the discussion
>> > > thread [1] so I'd
>> > > > > like to start a formal vote for merging HDFS-8707
>> > > (libhdfs++) into trunk.
>> > > > > The vote will be open for 7 days and end 6PM EST on
>> > > 3/15/18.
>> > > > >
>> > > > > This branch includes a C++ implementation of an HDFS
>> > > client for use in
>> > > > > applications that don't run an in-process JVM.  Right
>> now
>> > > the branch only
>> > > > > supports reads and metadata calls.
>> > >  

Re: [VOTE] Merging branch HDFS-8707 (native HDFS client) to trunk

2018-03-28 Thread Jim Clampffer
It's certainly possible to add a flag to disable the libhdfs++ build, I
opened HDFS-13362 for that.  I can put up an initial patch that includes a
flag for cmake so that you could do -Dnative_cmake_args="skipLibhdfspp" or
something similar.  I don't have a good understanding of maven and antrun
so if someone wants to help plumb a maven flag to the cmake flag I'd
appreciate it.

Is it common to do completely clean builds for interactive use?
Incremental rebuilds of libhdfs++ shouldn't be adding more than a few
seconds to the process.

On Tue, Mar 27, 2018 at 3:09 PM, Anu Engineer 
wrote:

> Would it be possible to add a maven flag like –skipShade, that helps in
> reducing the compile time for people who do not need to build libhdfs++?
>
>
>
> Thanks
> Anu
>
>
>
>
>
> *From: *Jim Clampffer 
> *Date: *Tuesday, March 27, 2018 at 11:09 AM
> *To: *Eric Badger 
> *Cc: *Deepak Majeti , Jitendra Pandey <
> jiten...@hortonworks.com>, Anu Engineer ,
> Mukul Kumar Singh , Owen O'Malley <
> owen.omal...@gmail.com>, Chris Douglas , Hdfs-dev <
> hdfs-dev@hadoop.apache.org>
> *Subject: *Re: [VOTE] Merging branch HDFS-8707 (native HDFS client) to
> trunk
>
>
>
> Hi Eric,
>
> There isn't a way to completely skip compiling libhdfs++ as part of the
> native build.  You could pass -Dnative_cmake_args="-DHDFSPP_LIBRARY_ONLY=TRUE"
> to maven to avoid building all of the libhdfs++ tests, examples, and tools
> though.  That cut the native client build time from 4:10 to 2:20 for me.
>
> -Jim
>
>
>
> On Tue, Mar 27, 2018 at 12:25 PM, Eric Badger  wrote:
>
> Is there a way to skip the libhdfs++ compilation during a native build? I
> just went to build native trunk to test out some container-executor changes
> and it spent 7:49 minutes out of 14:31 minutes in Apache Hadoop HDFS Native
> Client. For me it basically doubled the compilation time.
>
>
>
> Eric
>
>
>
> On Fri, Mar 16, 2018 at 4:01 PM, Deepak Majeti 
> wrote:
>
> Thanks for all your hard work on getting this feature (with > 200
> sub-tasks) in James!
>
> On Fri, Mar 16, 2018 at 12:05 PM, Jim Clampffer  >
> wrote:
>
>
> > With 6 +1s, 0 0s, and 0 -1s the vote passes.  I'll be merging this into
> > trunk shortly.
> >
> > Thanks everyone who participated in the discussion and vote!  And many
> > thanks to everyone who contributed code and feedback throughout the
> > development process! Particularly Bob, Anatoli, Xiaowei and Deepak who
> > provided lots of large pieces of code as well as folks like Owen, Chris
> D,
> > Allen, and Stephen W who provided various support and guidance with the
> > Apache process and project design.
> >
> > On Wed, Mar 14, 2018 at 1:32 PM, Jitendra Pandey <
> jiten...@hortonworks.com
> > >
> > wrote:
> >
> > > +1 (binding)
> > >
> > > On 3/14/18, 9:57 AM, "Anu Engineer"  wrote:
> > >
> > > +1 (binding). Thanks for all the hard work and getting this client
> > > ready.
> > > It is nice to have an official and supported native client for
> HDFS.
> > >
> > > Thanks
> > > Anu
> > >
> > > On 3/13/18, 8:16 PM, "Mukul Kumar Singh" 
> > > wrote:
> > >
> > > +1 (binding)
> > >
> > > Thanks,
> > > Mukul
> > >
> > > On 14/03/18, 2:06 AM, "Owen O'Malley" 
> > > wrote:
> > >
> > > +1 (binding)
> > >
> > > .. Owen
> > >
> > > On Sun, Mar 11, 2018 at 6:20 PM, Chris Douglas <
> > > cdoug...@apache.org> wrote:
> > >
> > > > +1 (binding) -C
> > > >
> > > > On Thu, Mar 8, 2018 at 9:31 AM, Jim Clampffer <
> > > james.clampf...@gmail.com>
> > > > wrote:
> > > > > Hi Everyone,
> > > > >
> > > > > The feedback was generally positive on the discussion
> > > thread [1] so I'd
> > > > > like to start a formal vote for merging HDFS-8707
> > > (libhdfs++) into trunk.
> > > > > The vote will be open for 7 days and end 6PM EST on
> > > 3/15/18.
> > > > >
> > > > > This branch includes a C++ implementation of an HDFS
> > > client for use in
> > > > > applications that don't run an in-process JVM.  Right
> now
> > > the branch only
> > > > > supports reads and metadata calls.
> > > > >
> > > > > Features (paraphrasing the list from the discussion
> > > thread):
> > > > > -Avoiding the JVM means applications that use libhdfs++
> > > can explicitly
> > > > > control resources (memory, FDs, threads).  The driving
> > > goal for this
> > > > > project was to let C/C++ applications access HDFS while
> > > maintaining a
> > > > > single heap.
> > > > > 

[jira] [Created] (HDFS-13362) add a flag to skip the libhdfs++ build

2018-03-28 Thread James Clampffer (JIRA)
James Clampffer created HDFS-13362:
--

 Summary: add a flag to skip the libhdfs++ build
 Key: HDFS-13362
 URL: https://issues.apache.org/jira/browse/HDFS-13362
 Project: Hadoop HDFS
  Issue Type: Task
Reporter: James Clampffer


libhdfs++ has significantly increased build times for the native client on 
trunk.  This covers adding a flag that would let people build libhdfs without 
all of libhdfs++ if they don't need it; it should be built by default to 
maintain compatibility with as many environments as possible.

Some thoughts:
-The increase in compile time only impacts clean builds.  Incremental rebuilds 
aren't significantly more expensive than they used to be if the code hasn't 
changed.
-Compile times for libhdfs++ can most likely be reduced but that's a longer 
term project.  boost::asio and tr1::optional are header-only libraries that are 
heavily templated so every compilation unit that includes them has to do a lot 
of parsing.

Is it common for interactive users to do completely clean builds frequently?
Are there opinions on what would be an acceptable compilation time?






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-03-28 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/734/

[Mar 27, 2018 5:31:03 PM] (inigoiri) HDFS-13352. RBF: Add xsl stylesheet for hdfs-rbf-default.xml.
[Mar 27, 2018 6:21:19 PM] (james.clampffer) HDFS-13338. Update BUILDING.txt for building native libraries.
[Mar 27, 2018 8:33:41 PM] (weichiu) HADOOP-15312. Undocumented KeyProvider configuration keys. Contributed
[Mar 28, 2018 12:39:46 AM] (subru) YARN-8010. Add config in FederationRMFailoverProxy to not bypass facade




-1 overall


The following subsystems voted -1:
findbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
   org.apache.hadoop.yarn.api.records.Resource.getResources() may expose 
internal representation by returning Resource.resources At Resource.java:by 
returning Resource.resources At Resource.java:[line 234] 

Failed junit tests :

   hadoop.conf.TestCommonConfigurationFields 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage 
   hadoop.yarn.server.resourcemanager.recovery.TestZKRMStateStore 
   hadoop.yarn.server.TestDiskFailures 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/734/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/734/artifact/out/diff-compile-javac-root.txt
  [288K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/734/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/734/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/734/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/734/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/734/artifact/out/whitespace-eol.txt
  [9.2M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/734/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/734/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/734/artifact/out/branch-findbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/734/artifact/out/diff-javadoc-javadoc-root.txt
  [760K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/734/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [160K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/734/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [248K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/734/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [48K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/734/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [80K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/734/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/734/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-jobclient.txt
  [84K]

Powered by Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org


[jira] [Created] (HDFS-13361) Ozone: Remove commands from command queue when the datanode is declared dead

2018-03-28 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDFS-13361:
--

 Summary: Ozone: Remove commands from command queue when the datanode is declared dead
 Key: HDFS-13361
 URL: https://issues.apache.org/jira/browse/HDFS-13361
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: HDFS-7240


SCM can queue commands for datanodes to pick up. However, a dead datanode may 
never pick up those commands, so the command queue needs to be cleaned for a 
datanode once it is declared dead.
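
As a hedged sketch only (the class and method names below are hypothetical and 
do not reflect SCM's actual node-state or command-queue classes), the cleanup 
would look roughly like this:

{code}
import java.util.Map;
import java.util.Queue;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

/** Hypothetical per-datanode command queue held by SCM. */
class CommandQueueSketch {
  private final Map<UUID, Queue<String>> pendingCommands = new ConcurrentHashMap<>();

  /** Queues a command for the given datanode to pick up on its next heartbeat. */
  void addCommand(UUID datanodeId, String command) {
    pendingCommands
        .computeIfAbsent(datanodeId, k -> new ConcurrentLinkedQueue<>())
        .add(command);
  }

  /**
   * Called from the dead-node handler: drop everything queued for the
   * datanode so stale commands are not retained for a node that will
   * never heartbeat again.
   */
  void onDatanodeDeclaredDead(UUID datanodeId) {
    pendingCommands.remove(datanodeId);
  }
}
{code}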






[jira] [Created] (HDFS-13360) The configuration of the DataNodeServicePlugin implementation should be obtained from the datanode instance

2018-03-28 Thread DENG FEI (JIRA)
DENG FEI created HDFS-13360:
---

 Summary: The configuration of the DataNodeServicePlugin implementation should be obtained from the datanode instance
 Key: HDFS-13360
 URL: https://issues.apache.org/jira/browse/HDFS-13360
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: ozone
Affects Versions: HDFS-7240
Reporter: DENG FEI


MiniOzoneClassicCluster configures Ozone to be enabled, but ObjectStoreRestPlugin &
HdslDatanodeService load their configuration from resources.






[jira] [Created] (HDFS-13359) DataXceiver hangs due to the lock in FsDatasetImpl#getBlockInputStream

2018-03-28 Thread Yiqun Lin (JIRA)
Yiqun Lin created HDFS-13359:


 Summary: DataXceiver hangs due to the lock in FsDatasetImpl#getBlockInputStream
 Key: HDFS-13359
 URL: https://issues.apache.org/jira/browse/HDFS-13359
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0
Reporter: Yiqun Lin
Assignee: Yiqun Lin


DataXceiver hangs due to the lock taken by {{FsDatasetImpl#getBlockInputStream}}.
{code}
  @Override // FsDatasetSpi
  public InputStream getBlockInputStream(ExtendedBlock b,
  long seekOffset) throws IOException {

ReplicaInfo info;
synchronized(this) {
  info = volumeMap.get(b.getBlockPoolId(), b.getLocalBlock());
}
...
  }
{code}
The {{synchronized(this)}} lock used here is expensive; there is already an 
{{AutoCloseableLock}} defined for {{ReplicaMap}}, and we can use it instead.
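
A rough sketch of that direction is below, with the caveat that AutoCloseableLock 
here is a simplified re-implementation and the dataset/map types are stand-ins, 
not the actual FsDatasetImpl code:

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantLock;

/** Simplified stand-in for Hadoop's AutoCloseableLock wrapper. */
class AutoCloseableLock implements AutoCloseable {
  private final ReentrantLock lock = new ReentrantLock();

  public AutoCloseableLock acquire() {
    lock.lock();
    return this;
  }

  @Override
  public void close() {
    lock.unlock();
  }
}

/** Illustrative dataset that guards its replica map with a shared lock. */
class FsDatasetSketch {
  private final AutoCloseableLock datasetLock = new AutoCloseableLock();
  private final Map<String, String> volumeMap = new HashMap<>();

  // Instead of synchronized(this), take the same AutoCloseableLock that
  // already guards the replica map, so the lookup no longer holds the
  // object monitor that other DataXceiver threads would block on.
  public String getReplicaInfo(String blockPoolId, long blockId) {
    try (AutoCloseableLock l = datasetLock.acquire()) {
      return volumeMap.get(blockPoolId + "/" + blockId);
    }
  }
}
{code}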






[jira] [Created] (HDFS-13358) RBF: Support for Delegation Token

2018-03-28 Thread Sherwood Zheng (JIRA)
Sherwood Zheng created HDFS-13358:
-

 Summary: RBF: Support for Delegation Token
 Key: HDFS-13358
 URL: https://issues.apache.org/jira/browse/HDFS-13358
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Sherwood Zheng
Assignee: Sherwood Zheng





