[GitHub] [hadoop-ozone] cxorm opened a new pull request #91: HDDS-2361. Ozone Manager init & start command prints out unnecessary line in the beginning.

2019-10-25 Thread GitBox
cxorm opened a new pull request #91: HDDS-2361. Ozone Manager init & start 
command prints out unnecessary line in the beginning.
URL: https://github.com/apache/hadoop-ozone/pull/91
 
 
   If OZONE_MANAGER_CLASSPATH is empty, we do not print the line.
   
   ## What changes were proposed in this pull request?
   
   Modify the if-condition in ```hadoop-ozone-manager.sh```
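A minimal sketch of the guard described above (illustrative only — the function and message text are assumptions, not the actual contents of the Ozone scripts):

```shell
# Illustrative guard: print the extra-classpath line only when
# OZONE_MANAGER_CLASSPATH is set to a non-empty value.
print_classpath() {
  if [ -n "${OZONE_MANAGER_CLASSPATH}" ]; then
    echo "Using classpath: ${OZONE_MANAGER_CLASSPATH}"
  fi
}

OZONE_MANAGER_CLASSPATH=""
print_classpath                      # prints nothing
OZONE_MANAGER_CLASSPATH="/opt/jars/extra.jar"
print_classpath                      # prints the classpath line
```

With such a guard an unset or empty variable produces no output at all, which is the behavior the PR is after.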

   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-2361
   
   
   ## How was this patch tested?
   
   Ran the command in ```hadoop-ozone/```:
   ```mvn clean package -Pdist -Dtar -DskipTests```
   
   And ran the commands after deployment
   ```ozone --daemon start om```
   ```ozone --daemon stop om```


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2056) Datanode unable to start command handler thread with security enabled

2019-10-25 Thread Mukul Kumar Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh resolved HDDS-2056.
-
Resolution: Duplicate

> Datanode unable to start command handler thread with security enabled
> -
>
> Key: HDDS-2056
> URL: https://issues.apache.org/jira/browse/HDDS-2056
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Shashikant Banerjee
>Assignee: Xiaoyu Yao
>Priority: Major
> Fix For: 0.5.0
>
>
>  
> {code:java}
> 2019-08-29 02:50:23,536 ERROR 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine: 
> Critical Error : Command processor thread encountered an error. Thread: 
> Thread[Command processor thread,5,main]
> java.lang.IllegalArgumentException: Null user
>         at 
> org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1269)
>         at 
> org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1256)
>         at 
> org.apache.hadoop.hdds.security.token.BlockTokenVerifier.verify(BlockTokenVerifier.java:116)
>         at 
> org.apache.hadoop.ozone.container.common.transport.server.XceiverServer.submitRequest(XceiverServer.java:68)
>         at 
> org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.submitRequest(XceiverServerRatis.java:482)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CloseContainerCommandHandler.handle(CloseContainerCommandHandler.java:109)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.commandhandler.CommandDispatcher.handle(CommandDispatcher.java:93)
>         at 
> org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.lambda$initCommandHandlerThread$1(DatanodeStateMachine.java:432)
>         at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [VOTE] Release Apache Hadoop 2.10.0 (RC1)

2019-10-25 Thread Jonathan Hung
Some more thoughts: for the javadoc issue, I think we can just support
building on java 7.

For the release notes issue, I can work with the authors of the major
features to come up with release notes and update them before pushing it to
site. The release notes in the published artifacts won't be up to date, but
I think that's fine.

I'll go ahead with this plan if no objections.

Jonathan Hung


On Fri, Oct 25, 2019 at 12:19 PM Jonathan Hung  wrote:

> Thanks for looking Erik.
>
> For the release notes, yeah I think it's because there's no release notes
> for the corresponding JIRAs. I've added details for these features to the
> index.md.vm file which should show up on the homepage for 2.10.0 (e.g.
> https://hadoop.apache.org/docs/r2.9.0/index.html). We could add release
> notes for these JIRAs, but that would require recreating the tar.gzs since
> the release notes are bundled in there.
>
> For the javadoc issue, I was able to repro this issue, seems it's because
> the org.apache.hadoop.yarn.client.ClientRMProxy import was removed in
> FederationProxyProviderUtil in YARN-7900 in branch-2 (but not in other
> branches). But it's referenced in javadocs in this file so it throws this
> error. Re-adding this import and building with java 8 allows it to succeed.
>
> I checked javadoc html for FederationProxyProviderUtil in the produced
> artifacts and it appears to be correct.
>
> I think we could easily overwrite the current RC1 artifacts with ones
> containing proper release notes. Not sure what to do about the javadoc
> issue though, that would require overwriting the release-2.10.0-RC1 tag
> which I don't want to do. What do others think?
>
> Jonathan Hung
>
>
> On Fri, Oct 25, 2019 at 9:21 AM Erik Krogen  wrote:
>
>> Thanks for putting this together, Jonathan!
>>
>> I noticed that the RELEASENOTES.md makes no mention of any of the major
>> features you mentioned in your email about the RC. Is this expected? I
>> guess it is caused by the lack of a release note on the JIRAs for those
>> features.
>>
>> I also noticed that building a distribution package (mvn -DskipTests
>> package -Pdist) fails on Java 8 (1.8.0_172) with a bunch of Javadoc errors.
>> It works fine on Java 7. Is this expected?
>>
>> Other verifications I performed:
>>
>>- Verified all signatures in RC1
>>- Verified all checksums in RC1
>>- Visually inspected contents of src tarball
>>- Built from source on Mac OSX 10.14.6 and RHEL7 (Java 8)
>>- mvn -DskipTests package
>>- Visually inspected contents of binary tarball
>>
>> Thanks,
>> Erik
>>
>> --
>> *From:* Konstantin Shvachko 
>> *Sent:* Wednesday, October 23, 2019 6:10 PM
>> *To:* Jonathan Hung 
>> *Cc:* Hdfs-dev ; mapreduce-dev <
>> mapreduce-...@hadoop.apache.org>; yarn-dev ;
>> Hadoop Common 
>> *Subject:* Re: [VOTE] Release Apache Hadoop 2.10.0 (RC1)
>>
>> +1 on RC1
>>
>> - Verified signatures
>> - Verified maven artifacts on Nexus for sources
>> - Checked rat reports
>> - Checked documentation
>> - Checked packaging contents
>> - Built from sources on RHEL 7 box
>> - Ran unit tests for new HDFS features with Java 8
>>
>> Thanks,
>> --Konstantin
>>
>> On Tue, Oct 22, 2019 at 2:55 PM Jonathan Hung 
>> wrote:
>>
>> > Hi folks,
>> >
>> > This is the second release candidate for the first release of Apache
>> Hadoop
>> > 2.10 line. It contains 362 fixes/improvements since 2.9 [1]. It includes
>> > features such as:
>> >
>> > - User-defined resource types
>> > - Native GPU support as a schedulable resource type
>> > - Consistent reads from standby node
>> > - Namenode port based selective encryption
>> > - Improvements related to rolling upgrade support from 2.x to 3.x
>> > - Cost based fair call queue
>> >
>> > The RC1 artifacts are at:
>> http://home.apache.org/~jhung/hadoop-2.10.0-RC1/
>> >
>> > RC tag is release-2.10.0-RC1.
>> >
>> > The maven artifacts are hosted here:
>> >
>> https://repository.apache.org/content/repositories/orgapachehadoop-1243/
>> >
>> > My public key is available here:
>> >
>> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>> >
>> > The vote will run for 5 weekdays, until Tuesday, October 29 at 3:00 pm

[GitHub] [hadoop-ozone] swagle opened a new pull request #90: HDDS-2366. Remove ozone.enabled as a flag and config item.

2019-10-25 Thread GitBox
swagle opened a new pull request #90: HDDS-2366. Remove ozone.enabled as a flag 
and config item.
URL: https://github.com/apache/hadoop-ozone/pull/90
 
 
   ## What changes were proposed in this pull request?
   
   Removed all checks for ozone.enabled and configuration items.
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-2366
   
   ## How was this patch tested?
   Verified mvn install and checkstyle goals succeed.





[GitHub] [hadoop-ozone] bharatviswa504 opened a new pull request #89: HDDS-2322. DoubleBuffer flush termination and OM shutdown after that. Make entry returned from cache a new copy.

2019-10-25 Thread GitBox
bharatviswa504 opened a new pull request #89: HDDS-2322. DoubleBuffer flush 
termination and OM shutdown after that. Make entry returned from cache a new 
copy.
URL: https://github.com/apache/hadoop-ozone/pull/89
 
 
   ## What changes were proposed in this pull request?
   
   Whenever a value is returned from the cache, return a copy of the object, so 
that when the DoubleBuffer flushes we do not see random 
ConcurrentModificationException errors.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2322
   
   ## How was this patch tested?
   
   Ran freon tests with 100K keys multiple times; I no longer see the error 
given in the Jira description. Without this patch, the error appears. (If you 
do not see the error, run multiple times in non-HA mode, or enable 
ozone.om.ratis.enable, and the error will appear on the first run.)
   
   **Command used for testing:**
   ozone freon rk --numOfBuckets=1 --numOfVolumes=1 --numOfKeys=10 
--keySize=0
   





[jira] [Created] (HDDS-2366) Remove ozone.enabled flag

2019-10-25 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2366:


 Summary: Remove ozone.enabled flag
 Key: HDDS-2366
 URL: https://issues.apache.org/jira/browse/HDDS-2366
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Bharat Viswanadham


Currently, when Ozone is started, the start-ozone.sh/stop-ozone.sh scripts check 
whether this property is enabled before starting Ozone services. This property 
and this check can now be removed.

 

This was needed when Ozone was part of Hadoop and we did not want to start Ozone 
services by default.
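The guard being removed is, in spirit, something like the following (an assumed shape for illustration — the real start-ozone.sh/stop-ozone.sh may read the value differently, e.g. via an ozone getconf call):

```shell
# Assumed shape of the check HDDS-2366 removes: refuse to start Ozone
# services unless ozone.enabled is "true" in the configuration.
require_ozone_enabled() {
  # $1: the value of ozone.enabled read from ozone-site.xml
  if [ "$1" != "true" ]; then
    echo "Ozone is not enabled. Set ozone.enabled=true in ozone-site.xml" >&2
    return 1
  fi
}
```

Dropping this check means the scripts start the Ozone daemons unconditionally, which makes sense now that Ozone lives in its own repository.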






[GitHub] [hadoop-ozone] hanishakoneru commented on issue #81: HDDS-2347 XCeiverClientGrpc's parallel use leads to NPE

2019-10-25 Thread GitBox
hanishakoneru commented on issue #81: HDDS-2347 XCeiverClientGrpc's parallel 
use leads to NPE
URL: https://github.com/apache/hadoop-ozone/pull/81#issuecomment-546508203
 
 
   Thank you for working on this @fapifta. Integration test failures do not look 
related to this PR.
   LGTM. +1.





[GitHub] [hadoop-ozone] hanishakoneru commented on issue #51: HDDS-2311. Fix logic of RetryPolicy in OzoneClientSideTranslatorPB.

2019-10-25 Thread GitBox
hanishakoneru commented on issue #51: HDDS-2311. Fix logic of RetryPolicy in 
OzoneClientSideTranslatorPB.
URL: https://github.com/apache/hadoop-ozone/pull/51#issuecomment-546501577
 
 
   /retest





[jira] [Resolved] (HDFS-14308) DFSStripedInputStream curStripeBuf is not freed by unbuffer()

2019-10-25 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HDFS-14308.

Fix Version/s: 3.2.2
   3.1.4
   3.3.0
   Resolution: Fixed

Thanks [~zhaoyim] !

> DFSStripedInputStream curStripeBuf is not freed by unbuffer()
> -
>
> Key: HDFS-14308
> URL: https://issues.apache.org/jira/browse/HDFS-14308
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.0.0
>Reporter: Joe McDonnell
>Assignee: Zhao Yi Ming
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: ec_heap_dump.png
>
>
> Some users of HDFS cache opened HDFS file handles to avoid repeated 
> roundtrips to the NameNode. For example, Impala caches up to 20,000 HDFS file 
> handles by default. Recent tests on erasure coded files show that the open 
> file handles can consume a large amount of memory when not in use.
> For example, here is output from Impala's JMX endpoint when 608 file handles 
> are cached
> {noformat}
> {
> "name": "java.nio:type=BufferPool,name=direct",
> "modelerType": "sun.management.ManagementFactoryHelper$1",
> "Name": "direct",
> "TotalCapacity": 1921048960,
> "MemoryUsed": 1921048961,
> "Count": 633,
> "ObjectName": "java.nio:type=BufferPool,name=direct"
> },{noformat}
> This shows direct buffer memory usage of 3MB per DFSStripedInputStream. 
> Attached is output from Eclipse MAT showing that the direct buffers come from 
> DFSStripedInputStream objects. Both Impala and HBase call unbuffer() when a 
> file handle is being cached and potentially unused for significant chunks of 
> time, yet this shows that the memory remains in use.
> To support caching file handles on erasure coded files, DFSStripedInputStream 
> should avoid holding buffers after the unbuffer() call. See HDFS-7694. 
> "unbuffer()" is intended to move an input stream to a lower memory state to 
> support these caching use cases. In particular, the curStripeBuf seems to be 
> allocated from the BUFFER_POOL on a resetCurStripeBuffer(true) call. It is 
> not freed until close().






Re: [VOTE] Release Apache Hadoop 2.10.0 (RC1)

2019-10-25 Thread Jonathan Hung
Thanks for looking Erik.

For the release notes, yeah I think it's because there's no release notes
for the corresponding JIRAs. I've added details for these features to the
index.md.vm file which should show up on the homepage for 2.10.0 (e.g.
https://hadoop.apache.org/docs/r2.9.0/index.html). We could add release
notes for these JIRAs, but that would require recreating the tar.gzs since
the release notes are bundled in there.

For the javadoc issue, I was able to repro this issue, seems it's because
the org.apache.hadoop.yarn.client.ClientRMProxy import was removed in
FederationProxyProviderUtil in YARN-7900 in branch-2 (but not in other
branches). But it's referenced in javadocs in this file so it throws this
error. Re-adding this import and building with java 8 allows it to succeed.

I checked javadoc html for FederationProxyProviderUtil in the produced
artifacts and it appears to be correct.

I think we could easily overwrite the current RC1 artifacts with ones
containing proper release notes. Not sure what to do about the javadoc
issue though, that would require overwriting the release-2.10.0-RC1 tag
which I don't want to do. What do others think?

Jonathan Hung


On Fri, Oct 25, 2019 at 9:21 AM Erik Krogen  wrote:

> Thanks for putting this together, Jonathan!
>
> I noticed that the RELEASENOTES.md makes no mention of any of the major
> features you mentioned in your email about the RC. Is this expected? I
> guess it is caused by the lack of a release note on the JIRAs for those
> features.
>
> I also noticed that building a distribution package (mvn -DskipTests
> package -Pdist) fails on Java 8 (1.8.0_172) with a bunch of Javadoc errors.
> It works fine on Java 7. Is this expected?
>
> Other verifications I performed:
>
>- Verified all signatures in RC1
>- Verified all checksums in RC1
>- Visually inspected contents of src tarball
>- Built from source on Mac OSX 10.14.6 and RHEL7 (Java 8)
>- mvn -DskipTests package
>- Visually inspected contents of binary tarball
>
> Thanks,
> Erik
>
> --
> *From:* Konstantin Shvachko 
> *Sent:* Wednesday, October 23, 2019 6:10 PM
> *To:* Jonathan Hung 
> *Cc:* Hdfs-dev ; mapreduce-dev <
> mapreduce-...@hadoop.apache.org>; yarn-dev ;
> Hadoop Common 
> *Subject:* Re: [VOTE] Release Apache Hadoop 2.10.0 (RC1)
>
> +1 on RC1
>
> - Verified signatures
> - Verified maven artifacts on Nexus for sources
> - Checked rat reports
> - Checked documentation
> - Checked packaging contents
> - Built from sources on RHEL 7 box
> - Ran unit tests for new HDFS features with Java 8
>
> Thanks,
> --Konstantin
>
> On Tue, Oct 22, 2019 at 2:55 PM Jonathan Hung 
> wrote:
>
> > Hi folks,
> >
> > This is the second release candidate for the first release of Apache
> Hadoop
> > 2.10 line. It contains 362 fixes/improvements since 2.9 [1]. It includes
> > features such as:
> >
> > - User-defined resource types
> > - Native GPU support as a schedulable resource type
> > - Consistent reads from standby node
> > - Namenode port based selective encryption
> > - Improvements related to rolling upgrade support from 2.x to 3.x
> > - Cost based fair call queue
> >
> > The RC1 artifacts are at:
> http://home.apache.org/~jhung/hadoop-2.10.0-RC1/
> >
> > RC tag is release-2.10.0-RC1.
> >
> > The maven artifacts are hosted here:
> >
> https://repository.apache.org/content/repositories/orgapachehadoop-1243/
> >
> > My public key is available here:
> >
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
> >
> > The vote will run for 5 weekdays, until Tuesday, October 29 at 3:00 pm
> PDT.
> >
> > Thanks,
> > Jonathan Hung
> >
> > [1]
> >
> >
> https://issues.apache.org/jira/issues/?jql=project in (HDFS, YARN, HADOOP, MAPREDUCE) AND resolution = Fixed AND fixVersion = 2.10.0 AND fixVersion not in (2.9.2, 2.9.1, 2.9.0)

[GitHub] [hadoop-ozone] adoroszlai opened a new pull request #88: HDDS-2272. Avoid buffer copying in GrpcReplicationClient

2019-10-25 Thread GitBox
adoroszlai opened a new pull request #88: HDDS-2272. Avoid buffer copying in 
GrpcReplicationClient
URL: https://github.com/apache/hadoop-ozone/pull/88
 
 
   ## What changes were proposed in this pull request?
   
   Eliminate `BufferedOutputStream`, write `ByteString` directly to 
`FileStream` to avoid a buffer copy.
   Also, use `ByteString.writeTo(OutputStream)`, although it still seems to 
copy the byte array internally.
   
   https://issues.apache.org/jira/browse/HDDS-2272
   
   ## How was this patch tested?
   
   Tested closed container replication manually with a 300MB container.  
Verified that container is correctly replicated to other datanode.





[GitHub] [hadoop-ozone] adoroszlai opened a new pull request #87: HDDS-2273. Avoid buffer copying in GrpcReplicationService

2019-10-25 Thread GitBox
adoroszlai opened a new pull request #87: HDDS-2273. Avoid buffer copying in 
GrpcReplicationService
URL: https://github.com/apache/hadoop-ozone/pull/87
 
 
   ## What changes were proposed in this pull request?
   
   Use `ByteString.Output` stream instead of `ByteArrayOutputStream`.  Its 
initial size is configured to 1MB (same as previous buffer size), and is 
flushed when that size is reached.  This helps to avoid allocating multiple 
buffers as well as buffer copy when converting to `ByteString`.
   
   https://issues.apache.org/jira/browse/HDDS-2273
   
   ## How was this patch tested?
   
   Tested closed container replication manually with a 300MB container.  
Verified that container is correctly replicated to other datanode.  Also 
verified that flush happens when buffer is full.
   
   ```
   datanode_1  | - Streaming container data (1) to other datanode
   datanode_1  | - Sending 1048576 bytes (of type LiteralByteString) for 
container 1
   datanode_1  | - Sending 530637 bytes (of type LiteralByteString) for 
container 1
   datanode_1  | - 1579213 bytes written to the rpc stream from container 1
   ...
   datanode_5  | - Container is downloaded to 
/tmp/container-copy/container-1.tar.gz
   datanode_5  | - Container 1 is downloaded, starting to import.
   datanode_5  | - Container 1 is replicated successfully
   datanode_5  | - Container 1 is replicated.
   ```





[GitHub] [hadoop-ozone] adoroszlai commented on issue #72: HDDS-2341. Validate tar entry path during extraction

2019-10-25 Thread GitBox
adoroszlai commented on issue #72: HDDS-2341. Validate tar entry path during 
extraction
URL: https://github.com/apache/hadoop-ozone/pull/72#issuecomment-546444851
 
 
   Thanks everyone again for reviews and @arp7 for merging it.





[GitHub] [hadoop-ozone] hanishakoneru commented on issue #40: HDDS-2285. GetBlock and ReadChunk command from the client should be s…

2019-10-25 Thread GitBox
hanishakoneru commented on issue #40: HDDS-2285. GetBlock and ReadChunk command 
from the client should be s…
URL: https://github.com/apache/hadoop-ozone/pull/40#issuecomment-546441415
 
 
   /retest





[GitHub] [hadoop-ozone] hanishakoneru commented on a change in pull request #40: HDDS-2285. GetBlock and ReadChunk command from the client should be s…

2019-10-25 Thread GitBox
hanishakoneru commented on a change in pull request #40: HDDS-2285. GetBlock 
and ReadChunk command from the client should be s…
URL: https://github.com/apache/hadoop-ozone/pull/40#discussion_r339160033
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
 ##
 @@ -107,6 +111,7 @@ public XceiverClientGrpc(Pipeline pipeline, Configuration 
config,
 OzoneConfigKeys.OZONE_NETWORK_TOPOLOGY_AWARE_READ_KEY,
 OzoneConfigKeys.OZONE_NETWORK_TOPOLOGY_AWARE_READ_DEFAULT);
 this.caCert = caCert;
+this.getBlockDNcache = new HashMap<>();
 
 Review comment:
   Thanks @bharatviswa504.
   Fixed it.





[GitHub] [hadoop-ozone] arp7 merged pull request #72: HDDS-2341. Validate tar entry path during extraction

2019-10-25 Thread GitBox
arp7 merged pull request #72: HDDS-2341. Validate tar entry path during 
extraction
URL: https://github.com/apache/hadoop-ozone/pull/72
 
 
   





[jira] [Reopened] (HDDS-2206) Separate handling for OMException and IOException in the Ozone Manager

2019-10-25 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HDDS-2206:
-

Reverted this based on offline conversation with [~aengineer].

Anu has requested we add a config key to control this behavior.

> Separate handling for OMException and IOException in the Ozone Manager
> --
>
> Key: HDDS-2206
> URL: https://issues.apache.org/jira/browse/HDDS-2206
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> As part of improving error propagation from the OM for ease of 
> troubleshooting and diagnosis, the proposal is to handle IOExceptions 
> separately from the business exceptions which are thrown as OMExceptions.
> Handling for OMExceptions will not be changed in this jira.
> Handling for IOExceptions will include logging the stacktrace on the server, 
> and propagation to the client under the control of a config parameter.
> Similar handling is also proposed for SCMException.






Re: [VOTE] Release Apache Hadoop 2.10.0 (RC1)

2019-10-25 Thread Erik Krogen
Thanks for putting this together, Jonathan!

I noticed that the RELEASENOTES.md makes no mention of any of the major 
features you mentioned in your email about the RC. Is this expected? I guess it 
is caused by the lack of a release note on the JIRAs for those features.

I also noticed that building a distribution package (mvn -DskipTests package 
-Pdist) fails on Java 8 (1.8.0_172) with a bunch of Javadoc errors. It works 
fine on Java 7. Is this expected?

Other verifications I performed:

  *   Verified all signatures in RC1
  *   Verified all checksums in RC1
  *   Visually inspected contents of src tarball
  *   Built from source on Mac OSX 10.14.6 and RHEL7 (Java 8)
 *   mvn -DskipTests package
  *   Visually inspected contents of binary tarball
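The checksum item in the list above can be sketched roughly as follows (file names are placeholders, not the actual RC artifact names):

```shell
# Recompute an artifact's SHA-512 digest and compare it against the
# first field of the published .sha512 file.
verify_checksum() {
  actual="$(sha512sum "$1" | awk '{print $1}')"
  expected="$(awk 'NR==1{print $1}' "$2")"
  [ "$actual" = "$expected" ]
}
```

Signature verification is analogous, using gpg --verify after importing the project KEYS file.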

Thanks,
Erik


From: Konstantin Shvachko 
Sent: Wednesday, October 23, 2019 6:10 PM
To: Jonathan Hung 
Cc: Hdfs-dev ; mapreduce-dev 
; yarn-dev ; 
Hadoop Common 
Subject: Re: [VOTE] Release Apache Hadoop 2.10.0 (RC1)

+1 on RC1

- Verified signatures
- Verified maven artifacts on Nexus for sources
- Checked rat reports
- Checked documentation
- Checked packaging contents
- Built from sources on RHEL 7 box
- Ran unit tests for new HDFS features with Java 8

Thanks,
--Konstantin

On Tue, Oct 22, 2019 at 2:55 PM Jonathan Hung  wrote:

> Hi folks,
>
> This is the second release candidate for the first release of Apache Hadoop
> 2.10 line. It contains 362 fixes/improvements since 2.9 [1]. It includes
> features such as:
>
> - User-defined resource types
> - Native GPU support as a schedulable resource type
> - Consistent reads from standby node
> - Namenode port based selective encryption
> - Improvements related to rolling upgrade support from 2.x to 3.x
> - Cost based fair call queue
>
> The RC1 artifacts are at: 
> http://home.apache.org/~jhung/hadoop-2.10.0-RC1/
>
> RC tag is release-2.10.0-RC1.
>
> The maven artifacts are hosted here:
> https://repository.apache.org/content/repositories/orgapachehadoop-1243/
>
> My public key is available here:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> The vote will run for 5 weekdays, until Tuesday, October 29 at 3:00 pm PDT.
>
> Thanks,
> Jonathan Hung
>
> [1]
>
> https://issues.apache.org/jira/issues/?jql=project%20in%20(HDFS%2C%20YARN%2C%20HADOOP%2C%20MAPREDUCE)%20AND%20resolution%20%3D%20Fixed%20AND%20fixVersion%20%3D%202.10.0%20AND%20fixVersion%20not%20in%20(2.9.2%2C%202.9.1%2C%202.9.0)
>


Re: [VOTE] Release Apache Hadoop 2.10.0 (RC1)

2019-10-25 Thread Jim Brennan
+1 (non-binding) on RC1
I built from source on Mac and RHEL7, ran hdfs, nodemanager, and
resourcemanager unit tests, and set up a one-node cluster and ran some test
jobs (pi and sleep).
- Jim


On Tue, Oct 22, 2019 at 4:55 PM Jonathan Hung  wrote:

> Hi folks,
>
> This is the second release candidate for the first release of Apache Hadoop
> 2.10 line. It contains 362 fixes/improvements since 2.9 [1]. It includes
> features such as:
>
> - User-defined resource types
> - Native GPU support as a schedulable resource type
> - Consistent reads from standby node
> - Namenode port based selective encryption
> - Improvements related to rolling upgrade support from 2.x to 3.x
> - Cost based fair call queue
>
> The RC1 artifacts are at: http://home.apache.org/~jhung/hadoop-2.10.0-RC1/
>
> RC tag is release-2.10.0-RC1.
>
> The maven artifacts are hosted here:
> https://repository.apache.org/content/repositories/orgapachehadoop-1243/
>
> My public key is available here:
> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>
> The vote will run for 5 weekdays, until Tuesday, October 29 at 3:00 pm PDT.
>
> Thanks,
> Jonathan Hung
>
> [1]
>
> https://issues.apache.org/jira/issues/?jql=project%20in%20(HDFS%2C%20YARN%2C%20HADOOP%2C%20MAPREDUCE)%20AND%20resolution%20%3D%20Fixed%20AND%20fixVersion%20%3D%202.10.0%20AND%20fixVersion%20not%20in%20(2.9.2%2C%202.9.1%2C%202.9.0)
>


[GitHub] [hadoop-ozone] sodonnel opened a new pull request #86: HDDS-2329 Destroy pipelines on any decommission or maintenance nodes

2019-10-25 Thread GitBox
sodonnel opened a new pull request #86: HDDS-2329 Destroy pipelines on any 
decommission or maintenance nodes
URL: https://github.com/apache/hadoop-ozone/pull/86
 
 
   ## What changes were proposed in this pull request?
   
   When a node is marked for decommission or maintenance, the first step in 
taking the node out of service is to destroy any pipelines the node is involved 
in and confirm they have been destroyed before getting the container list for 
the node.
   
   This commit adds a new class, DatanodeAdminMonitor, which is responsible 
for tracking nodes as they go through the decommission workflow.
   
   When a node is marked for decommission, it is added to a queue in this 
monitor. The monitor runs periodically (every 30 seconds by default) and 
processes any queued nodes. After processing, they are tracked inside the 
monitor as the decommission workflow progresses (closing pipelines, getting the 
container list, replicating the containers, etc.).
   
   With this commit, a node can be added to the monitor for decommission or 
maintenance and it will have its pipelines closed.
   
   The node will make no further progress after the pipelines have been closed; 
follow-up commits will address the next states.
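   The queue-then-track pattern described above can be sketched as follows. 
This is a minimal illustration only; the class, enum, and method names are 
hypothetical and not the actual HDDS-2329 implementation (which also schedules 
the run on a timer rather than invoking it directly):
   
   ```java
   import java.util.Map;
   import java.util.Queue;
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.ConcurrentLinkedQueue;
   
   // Hypothetical sketch of a periodic admin monitor: newly flagged nodes sit
   // in a queue; each run picks them up and then advances tracked nodes
   // through the workflow states one step per cycle.
   public class AdminMonitorSketch {
       enum State { CLOSING_PIPELINES, GETTING_CONTAINERS }
   
       private final Queue<String> pendingNodes = new ConcurrentLinkedQueue<>();
       private final Map<String, State> trackedNodes = new ConcurrentHashMap<>();
   
       // Called when an admin marks a node for decommission or maintenance.
       public void startMonitoring(String datanode) {
           pendingNodes.add(datanode);
       }
   
       // In the real system this would fire on a schedule (every 30s by
       // default); here it is invoked directly to keep the example runnable.
       public void run() {
           // Advance nodes queued on a previous run whose pipelines are
           // (assumed) confirmed closed.
           trackedNodes.replaceAll((n, s) ->
               s == State.CLOSING_PIPELINES ? State.GETTING_CONTAINERS : s);
           // Pick up newly queued nodes; the first step is closing pipelines.
           String node;
           while ((node = pendingNodes.poll()) != null) {
               trackedNodes.put(node, State.CLOSING_PIPELINES);
           }
       }
   
       public State stateOf(String node) { return trackedNodes.get(node); }
   
       public static void main(String[] args) {
           AdminMonitorSketch monitor = new AdminMonitorSketch();
           monitor.startMonitoring("dn-1");
           monitor.run();
           System.out.println("run1: dn-1 -> " + monitor.stateOf("dn-1"));
           monitor.run();
           System.out.println("run2: dn-1 -> " + monitor.stateOf("dn-1"));
       }
   }
   ```
   
   The two-run sequence shows why a node only reaches the container-listing 
step after a full cycle in which its pipeline closure can be confirmed.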
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2329
   
   ## How was this patch tested?
   
   Some manual tests and new unit tests have been added.
   





[GitHub] [hadoop-ozone] cxorm opened a new pull request #85: HDDS-2219. Move all the ozone dist scripts/configs to one location

2019-10-25 Thread GitBox
cxorm opened a new pull request #85: HDDS-2219. Move all the ozone dist 
scripts/configs to one location
URL: https://github.com/apache/hadoop-ozone/pull/85
 
 
   ## What changes were proposed in this pull request?
   Relocate the scattered scripts and configuration files into the same 
directory, and modify ```dist-layout-stitching``` to lay out these files in 
the right directory in the distribution.
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-2219
   
   ## How was this patch tested?
   Just ran the command in ```hadoop-ozone/```:
   ```mvn clean package -Pdist -Dtar -DskipTests```





[GitHub] [hadoop-ozone] bshashikant commented on issue #81: HDDS-2347 XCeiverClientGrpc's parallel use leads to NPE

2019-10-25 Thread GitBox
bshashikant commented on issue #81: HDDS-2347 XCeiverClientGrpc's parallel use 
leads to NPE
URL: https://github.com/apache/hadoop-ozone/pull/81#issuecomment-546295056
 
 
   The changes look good to me.  I am +1 on this.





[GitHub] [hadoop-ozone] bshashikant commented on a change in pull request #40: HDDS-2285. GetBlock and ReadChunk command from the client should be s…

2019-10-25 Thread GitBox
bshashikant commented on a change in pull request #40: HDDS-2285. GetBlock and 
ReadChunk command from the client should be s…
URL: https://github.com/apache/hadoop-ozone/pull/40#discussion_r338982546
 
 

 ##
 File path: 
hadoop-hdds/client/src/main/java/org/apache/hadoop/hdds/scm/XceiverClientGrpc.java
 ##
 @@ -107,6 +111,7 @@ public XceiverClientGrpc(Pipeline pipeline, Configuration 
config,
 OzoneConfigKeys.OZONE_NETWORK_TOPOLOGY_AWARE_READ_KEY,
 OzoneConfigKeys.OZONE_NETWORK_TOPOLOGY_AWARE_READ_DEFAULT);
 this.caCert = caCert;
+this.getBlockDNcache = new HashMap<>();
 
 Review comment:
   yes, this should be concurrent.
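   The concurrent-map fix the reviewer is agreeing to can be sketched as 
below. The field name follows the diff, but the key/value types and helper 
method are illustrative assumptions, not the actual XceiverClientGrpc 
signatures:
   
   ```java
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.ConcurrentMap;
   
   // Sketch: a plain HashMap cache replaced with a ConcurrentHashMap so that
   // parallel users of the same client cannot corrupt the map or hit an NPE.
   // Key/value types here are hypothetical, not the real Ozone types.
   public class GetBlockCacheSketch {
       private final ConcurrentMap<Long, String> getBlockDNcache =
           new ConcurrentHashMap<>();
   
       // computeIfAbsent is atomic: two threads resolving the same block id
       // concurrently end up sharing one cached entry, with no external lock.
       public String datanodeForBlock(long blockId) {
           return getBlockDNcache.computeIfAbsent(blockId, id -> "dn-for-" + id);
       }
   
       public static void main(String[] args) throws Exception {
           GetBlockCacheSketch cache = new GetBlockCacheSketch();
           Runnable lookup = () ->
               System.out.println(cache.datanodeForBlock(42L));
           Thread t1 = new Thread(lookup);
           Thread t2 = new Thread(lookup);
           t1.start(); t2.start();
           t1.join(); t2.join();  // both threads print the same cached value
       }
   }
   ```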





[jira] [Created] (HDFS-14935) Unified constant in DFSNetworkTopology#isNodeInScope

2019-10-25 Thread Lisheng Sun (Jira)
Lisheng Sun created HDFS-14935:
--

 Summary: Unified constant in DFSNetworkTopology#isNodeInScope
 Key: HDFS-14935
 URL: https://issues.apache.org/jira/browse/HDFS-14935
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Lisheng Sun






--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (HDFS-14934) [SBN Read] Standby NN throws many InterruptedExceptions when dfs.ha.tail-edits.period is 0

2019-10-25 Thread Takanobu Asanuma (Jira)
Takanobu Asanuma created HDFS-14934:
---

 Summary: [SBN Read] Standby NN throws many InterruptedExceptions 
when dfs.ha.tail-edits.period is 0
 Key: HDFS-14934
 URL: https://issues.apache.org/jira/browse/HDFS-14934
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Takanobu Asanuma


When dfs.ha.tail-edits.period is 0ms (or very short-time), there are many warn 
logs in standby NN.

{noformat}
2019-10-25 16:25:46,945 [Logger channel (from parallel executor) to /:] WARN  concurrent.ExecutorHelper 
(ExecutorHelper.java:logThrowableFromAfterExecute(55)) - Thread (Thread[Logger 
channel (from parallel executor) to /:,5,main]) 
interrupted: 
java.lang.InterruptedException
at 
com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:509)
at 
com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
at 
org.apache.hadoop.util.concurrent.ExecutorHelper.logThrowableFromAfterExecute(ExecutorHelper.java:48)
at 
org.apache.hadoop.util.concurrent.HadoopThreadPoolExecutor.afterExecute(HadoopThreadPoolExecutor.java:90)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1157)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
{noformat}
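For reference, the period in question is configured in hdfs-site.xml. A 
possible mitigation until this is fixed, assuming the cluster can tolerate 
slightly slower tailing, is a short but non-zero value (the value shown is 
illustrative):

```xml
<!-- hdfs-site.xml: how often the Standby NN polls for new edits.
     0 (or a very short period) triggers the interrupt warnings above. -->
<property>
  <name>dfs.ha.tail-edits.period</name>
  <value>100ms</value>
</property>
```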


