[jira] [Created] (HDDS-326) Enable and disable ReplicationActivityStatus based on node status

2018-08-06 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-326:
-

 Summary: Enable and disable ReplicationActivityStatus based on 
node status
 Key: HDDS-326
 URL: https://issues.apache.org/jira/browse/HDDS-326
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Elek, Marton
 Fix For: 0.2.1


In HDDS-245 we introduced a ReplicationActivityStatus which stores the current 
state of replication: it can be enabled or disabled. Replication should be 
enabled after leaving chill mode.
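
A minimal sketch of what such a toggle could look like (the class and method 
names below are illustrative only, not the actual HDDS-245 API):

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: a flag that replication logic consults, flipped when
// the cluster leaves chill mode. Names are illustrative, not the real API.
public class ReplicationActivityStatusSketch {
  private final AtomicBoolean replicationEnabled = new AtomicBoolean(false);

  /** Called by the chill-mode exit handler. */
  public void enableReplication() {
    replicationEnabled.set(true);
  }

  /** Called if the cluster re-enters chill mode or is shutting down. */
  public void disableReplication() {
    replicationEnabled.set(false);
  }

  /** Replication manager checks this before scheduling new replication work. */
  public boolean isReplicationEnabled() {
    return replicationEnabled.get();
  }
}
{code}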



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-327) CloseContainer command handler in HDDS Dispatcher should not throw exception if the container is already closed

2018-08-06 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-327:


 Summary: CloseContainer command handler in HDDS Dispatcher should 
not throw exception if the container is already closed
 Key: HDDS-327
 URL: https://issues.apache.org/jira/browse/HDDS-327
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.2.1


Currently, the closeContainer handler in the HDDS Dispatcher throws an exception 
if the container is not in the open state. If the container is already closed, it 
should not throw any exception but just return.
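
A rough sketch of the intended behavior (the enum and handler below are 
stand-ins, not the real HDDS Dispatcher types):

{code:java}
// Illustrative sketch of the desired close-container handling.
// ContainerState is a stand-in for the real container lifecycle type.
enum ContainerState { OPEN, CLOSING, CLOSED }

class CloseContainerSketch {
  void handleCloseContainer(ContainerState state) {
    if (state == ContainerState.CLOSED) {
      // Already closed: treat the request as a no-op success, do not throw.
      return;
    }
    if (state != ContainerState.OPEN) {
      throw new IllegalStateException("Container cannot be closed in state " + state);
    }
    // ... proceed with the normal close path for an open container ...
  }
}
{code}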



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-328) ContainerIO:

2018-08-06 Thread Elek, Marton (JIRA)
Elek, Marton created HDDS-328:
-

 Summary: ContainerIO:
 Key: HDDS-328
 URL: https://issues.apache.org/jira/browse/HDDS-328
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Datanode
Reporter: Elek, Marton
Assignee: Elek, Marton
 Fix For: 0.2.1


In HDDS-75 we pack the container data into an archive file, copy it to other 
datanodes and create the container from the archive.

As I wrote in a comment on HDDS-75, I propose to split the patch up to make it 
easier to review.

In this patch we need to extend the existing Container interface by adding 
export/import methods that save/load the container data to/from a single 
binary output/input stream. 
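
A hedged sketch of the proposed extension (the interface shape and method names 
are assumptions for illustration, not the final API):

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Hypothetical shape of the proposed extension; the real Container interface
// lives in the Ozone datanode code and may look different.
interface ExportableContainer {
  /** Write the full container data (metadata + chunks) to a single stream. */
  void exportContainerData(OutputStream output) throws IOException;

  /** Recreate the container from a stream produced by exportContainerData. */
  void importContainerData(InputStream input) throws IOException;
}
{code}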

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13794) [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` method.

2018-08-06 Thread Ewan Higgs (JIRA)
Ewan Higgs created HDFS-13794:
-

 Summary: [PROVIDED Phase 2] Teach BlockAliasMap.Writer `remove` 
method.
 Key: HDFS-13794
 URL: https://issues.apache.org/jira/browse/HDFS-13794
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ewan Higgs
Assignee: Ewan Higgs


When updating the BlockAliasMap we may need to deal with deleted blocks. 
Otherwise the BlockAliasMap will grow indefinitely(!).

Therefore, the BlockAliasMap.Writer needs a method for removing blocks.
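
A possible shape for the addition, sketched with simplified types (the real 
BlockAliasMap.Writer API and the exact remove signature may differ):

{code:java}
import java.io.IOException;

// Illustrative sketch only: BlockAliasMap.Writer is simplified here, and the
// real method may take a Block object rather than a raw block id.
abstract class AliasMapWriterSketch<T> {
  /** Store (or update) the alias for a block. */
  public abstract void store(T alias) throws IOException;

  /** Proposed addition: drop the alias for a block that has been deleted. */
  public abstract void remove(long blockId) throws IOException;

  public abstract void close() throws IOException;
}
{code}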



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Reopened] (HDDS-179) CloseContainer command should be executed only if all the prior "Write" type container requests get executed

2018-08-06 Thread Shashikant Banerjee (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee reopened HDDS-179:
--

Reopening this issue, as closeContainer and writeChunks need synchronization 
during the ContainerStateMachine#applyTransaction phase.
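
One possible shape of that synchronization, sketched below (purely illustrative, 
not the actual ContainerStateMachine code): record the pending write futures per 
container and let the close path wait on them.

{code:java}
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Illustrative only: pending write futures are tracked per container so a
// close is applied only after every earlier write has completed.
class CloseSyncSketch {
  private final Map<Long, List<CompletableFuture<Void>>> pendingWrites =
      new ConcurrentHashMap<>();

  /** Called when a write-type transaction (putKey, WriteChunk, ...) is applied. */
  void recordWrite(long containerId, CompletableFuture<Void> writeFuture) {
    pendingWrites
        .computeIfAbsent(containerId, id -> new CopyOnWriteArrayList<>())
        .add(writeFuture);
  }

  /** Called when a CloseContainer transaction is applied. */
  CompletableFuture<Void> applyClose(long containerId) {
    List<CompletableFuture<Void>> writes =
        pendingWrites.getOrDefault(containerId, new CopyOnWriteArrayList<>());
    return CompletableFuture
        .allOf(writes.toArray(new CompletableFuture[0]))
        .thenRun(() -> {
          // ... perform the actual container close here ...
        });
  }
}
{code}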

> CloseContainer command should be executed only if all the  prior "Write" type 
> container requests get executed
> -
>
> Key: HDDS-179
> URL: https://issues.apache.org/jira/browse/HDDS-179
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client, Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
> Fix For: 0.2.1
>
> Attachments: HDDS-179.01.patch, HDDS-179.02.patch, HDDS-179.03.patch, 
> HDDS-179.04.patch, HDDS-179.05.patch
>
>
> When a CloseContainer command request comes to a Datanode (via the SCM heartbeat 
> response) through the Ratis protocol, all the previously enqueued "Write" type 
> requests like putKey, WriteChunk, DeleteKey, CompactChunk etc. should be 
> executed first, before the CloseContainer request gets executed. This 
> synchronization needs to be handled in the ContainerStateMachine. This Jira 
> aims to address this.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[DISCUSS] Alpha Release of Ozone

2018-08-06 Thread Elek, Marton

Hi All,

I would like to discuss creating an Alpha release for Ozone. The core 
functionality of Ozone is complete, but there are two missing features, 
Security and HA; work on these features is progressing in the HDDS-4 and 
HDDS-151 branches. Right now, Ozone can handle millions of keys and has a 
Hadoop-compatible file system, which allows applications like Hive, Spark, 
and YARN to use Ozone.


Having an Alpha release of Ozone will help in getting some early 
feedback (this release will be marked as an Alpha -- and not production 
ready).


Going through a complete release cycle will help us flesh out the Ozone 
release process, update user documentation and nail down deployment models.


Please share your thoughts on the Alpha release (over mail or in 
HDDS-214). As voted on by the community earlier, the Ozone release will be 
independent of Hadoop releases.


Thanks a lot,
Marton Elek




-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13795) Fix potential NPE in {{InMemoryLevelDBAliasMapServer}}

2018-08-06 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-13795:
-

 Summary: Fix potential NPE in {{InMemoryLevelDBAliasMapServer}}
 Key: HDFS-13795
 URL: https://issues.apache.org/jira/browse/HDFS-13795
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Virajith Jalaparti
Assignee: Virajith Jalaparti






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13796) Allow verbosity of InMemoryLevelDBAliasMapServer to be configurable

2018-08-06 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-13796:
-

 Summary: Allow verbosity of InMemoryLevelDBAliasMapServer to be 
configurable
 Key: HDFS-13796
 URL: https://issues.apache.org/jira/browse/HDFS-13796
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Virajith Jalaparti
Assignee: Virajith Jalaparti


Currently, the {{RPC.Server}} used by the {{InMemoryLevelDBAliasMapServer}} has 
{{setVerbose(true)}}. This leads to too many log messages when running with a 
large number of nodes/RPCs against the {{InMemoryLevelDBAliasMapServer}}. This JIRA 
proposes to make this configurable, and to set it to {{false}} by default.
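
A sketch of how the flag could be read from configuration before building the 
RPC server (the property name below is an assumption, not an existing HDFS key):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch only: the configuration key below is hypothetical.
class AliasMapServerVerbositySketch {
  static final String VERBOSE_KEY =
      "dfs.provided.aliasmap.inmemory.server.log.verbose"; // assumed name
  static final boolean VERBOSE_DEFAULT = false;

  static boolean isVerbose(Configuration conf) {
    // Read the flag; fall back to quiet logging by default.
    return conf.getBoolean(VERBOSE_KEY, VERBOSE_DEFAULT);
  }
}
{code}

The server would then pass the result of such a lookup to the RPC server's 
verbose flag instead of a hard-coded {{true}}.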



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13797) WebHDFS redirects to datanodes by hostname, ignoring bind addresses

2018-08-06 Thread Philip Zeyliger (JIRA)
Philip Zeyliger created HDFS-13797:
--

 Summary: WebHDFS redirects to datanodes by hostname, ignoring bind 
addresses
 Key: HDFS-13797
 URL: https://issues.apache.org/jira/browse/HDFS-13797
 Project: Hadoop HDFS
  Issue Type: Task
  Components: webhdfs
Reporter: Philip Zeyliger


In testing Impala, we run datanodes bound to {{127.0.0.1}}. This works great, 
except for WebHDFS, where the namenode redirects to a datanode by hostname, 
leading to a "connection refused" error (since the datanode isn't listening). 
Our workaround in Impala is to map the hostname to {{127.0.0.1}} in {{/etc/hosts}}, 
which isn't so hot. (The workaround is at 
https://github.com/apache/impala/blob/3334c167a69a6506d595898dff73fb16410266e6/bin/bootstrap_system.sh#L187
 .)

I think there's a bug here. Reading 
https://hadoop.apache.org/docs/r3.0.3/hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html,
 it looks like WebHDFS should perhaps respect 
{{dfs.client.use.datanode.hostname}}, but I think it doesn't and always uses 
the hostname. I think the relevant point where the namenode figures this out is 
https://github.com/apache/hadoop/blob/ca20e0d7e9767a7362dddfea8ec19548947d3fd7/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java#L406
 .

I recognize this is a non-production edge case: it makes no sense to listen 
only on localhost except for testing.
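
A sketch of the decision the redirect code could make if it honored such a 
setting (the type below is a stand-in for the real DatanodeInfo, and the boolean 
mirrors {{dfs.client.use.datanode.hostname}} purely for illustration):

{code:java}
// Illustrative decision only; DatanodeInfoLike is a stand-in, not the real
// org.apache.hadoop.hdfs.protocol.DatanodeInfo API.
class RedirectAddressSketch {

  static class DatanodeInfoLike {
    String hostName; // e.g. "myhost.example.com"
    String ipAddr;   // e.g. "127.0.0.1", the address the datanode actually bound to
  }

  static String chooseRedirectHost(DatanodeInfoLike dn, boolean useDatanodeHostname) {
    // Pick the hostname only when explicitly requested; otherwise use the
    // address the datanode is actually listening on.
    return useDatanodeHostname ? dn.hostName : dn.ipAddr;
  }
}
{code}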



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13798) Create InMemoryAliasMap location if missing.

2018-08-06 Thread Virajith Jalaparti (JIRA)
Virajith Jalaparti created HDFS-13798:
-

 Summary: Create InMemoryAliasMap location if missing.
 Key: HDFS-13798
 URL: https://issues.apache.org/jira/browse/HDFS-13798
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Virajith Jalaparti
Assignee: Virajith Jalaparti


If the {{InMemoryAliasMap}} location does not exist, it throws an error. This 
can stop the Namenode from starting. This JIRA creates the location if missing.
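
A minimal sketch of the "create if missing" behavior (class and method names are 
illustrative; the real change would live in the InMemoryAliasMap initialization 
path):

{code:java}
import java.io.File;
import java.io.IOException;

// Sketch only: create the configured location instead of failing at startup.
class AliasMapLocationSketch {
  static void ensureLocationExists(File location) throws IOException {
    if (!location.exists() && !location.mkdirs()) {
      throw new IOException("Could not create alias map location " + location);
    }
    if (!location.isDirectory()) {
      throw new IOException(location + " exists but is not a directory");
    }
  }
}
{code}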



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-13799) TestEditLogTailer#testTriggersLogRollsForAllStandbyNN test fail due to missing synchronization between rollEditsRpcExecutor and tailerThread shutdown

2018-08-06 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created HDFS-13799:
---

 Summary: TestEditLogTailer#testTriggersLogRollsForAllStandbyNN 
test fail due to missing synchronization between rollEditsRpcExecutor and 
tailerThread shutdown
 Key: HDFS-13799
 URL: https://issues.apache.org/jira/browse/HDFS-13799
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha
Affects Versions: 3.0.0
Reporter: Hrishikesh Gadre


The TestEditLogTailer#testTriggersLogRollsForAllStandbyNN unit test is failing in 
our internal environment with the following error:
{noformat}
java.lang.AssertionError: Test resulted in an unexpected exit
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testTriggersLogRollsForAllStandbyNN(TestEditLogTailer.java:245){noformat}
This test failure is due to the following error during the shutdown of the 
MiniDFSCluster:
{noformat}
2018-07-31 21:59:27,806 [main] INFO  hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdown(1965)) - Shutting down the Mini HDFS Cluster
2018-07-31 21:59:27,806 [main] FATAL hdfs.MiniDFSCluster 
(MiniDFSCluster.java:shutdown(1968)) - Test resulted in an unexpected exit
1: java.util.concurrent.RejectedExecutionException: Task 
java.util.concurrent.FutureTask@1ce1d2b6 rejected from 
java.util.concurrent.ThreadPoolExecutor@12263f5a[Terminated, pool size = 0, 
active threads = 0, queued tasks = 0, completed tasks = 0]
at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:265)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:441)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:380)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:397)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:482)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:393)
Caused by: java.util.concurrent.RejectedExecutionException: Task 
java.util.concurrent.FutureTask@1ce1d2b6 rejected from 
java.util.concurrent.ThreadPoolExecutor@12263f5a[Terminated, pool size = 0, 
active threads = 0, queued tasks = 0, completed tasks = 0]
at 
java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2047)
at 
java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
at 
java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
at 
java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:681)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:351)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:411)
... 4 more{noformat}
It looks like the EditLogTailer class is not handling shutdown correctly. 
Specifically, the EditLogTailer#stop() method shuts down the rollEditsRpcExecutor 
executor service before setting the tailerThread#shouldRun flag. This is a race 
condition, since the tailerThread can try to submit a new task to an executor 
service which has already been asked to shut down. If that happens, it receives an 
unexpected RejectedExecutionException, resulting in a test failure. The fix should 
be to properly synchronize the shutdown of tailerThread with rollEditsRpcExecutor.
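
A sketch of one possible shutdown ordering, with names mirroring the description 
above rather than the exact HDFS code:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

// Illustrative fix only: the executor is shut down only after the tailer
// thread can no longer submit tasks to it.
class TailerShutdownSketch {
  private volatile boolean shouldRun = true;
  private Thread tailerThread;
  private ExecutorService rollEditsRpcExecutor;

  void stop() throws InterruptedException {
    shouldRun = false;        // 1. ask the tailer loop to exit
    tailerThread.interrupt(); // 2. wake it up if it is sleeping or waiting
    tailerThread.join();      // 3. wait until it has really stopped
    rollEditsRpcExecutor.shutdown(); // 4. now no new tasks can be submitted
    rollEditsRpcExecutor.awaitTermination(5, TimeUnit.SECONDS);
  }
}
{code}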



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: [DISCUSS] Alpha Release of Ozone

2018-08-06 Thread Anu Engineer
+1,  It will allow many users to get a first look at Ozone/HDDS. 

Thanks
Anu


On 8/6/18, 10:34 AM, "Elek, Marton"  wrote:

Hi All,

I would like to discuss creating an Alpha release for Ozone. The core 
functionality of Ozone is complete, but there are two missing features, 
Security and HA; work on these features is progressing in the HDDS-4 and 
HDDS-151 branches. Right now, Ozone can handle millions of keys and has a 
Hadoop-compatible file system, which allows applications like Hive, Spark, 
and YARN to use Ozone.

Having an Alpha release of Ozone will help in getting some early 
feedback (this release will be marked as an Alpha -- and not production 
ready).

Going through a complete release cycle will help us flesh out the Ozone 
release process, update user documentation and nail down deployment models.

Please share your thoughts on the Alpha release (over mail or in 
HDDS-214). As voted on by the community earlier, the Ozone release will be 
independent of Hadoop releases.

Thanks a lot,
Marton Elek




-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org





RE: [DISCUSS] Alpha Release of Ozone

2018-08-06 Thread Lin,Yiqun(vip.com)
+1, looking forward to seeing the Ozone/HDDS, : ).

-Original Message-
From: Anu Engineer [mailto:aengin...@hortonworks.com]
Sent: August 7, 2018 6:51
To: Elek, Marton; common-...@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org; 
hdfs-dev@hadoop.apache.org
Subject: Re: [DISCUSS] Alpha Release of Ozone

+1,  It will allow many users to get a first look at Ozone/HDDS.

Thanks
Anu


On 8/6/18, 10:34 AM, "Elek, Marton"  wrote:

Hi All,

I would like to discuss creating an Alpha release for Ozone. The core
functionality of Ozone is complete, but there are two missing features,
Security and HA; work on these features is progressing in the HDDS-4 and
HDDS-151 branches. Right now, Ozone can handle millions of keys and has a
Hadoop-compatible file system, which allows applications like Hive, Spark,
and YARN to use Ozone.

Having an Alpha release of Ozone will help in getting some early
feedback (this release will be marked as an Alpha -- and not production
ready).

Going through a complete release cycle will help us flesh out the Ozone
release process, update user documentation and nail down deployment models.

Please share your thoughts on the Alpha release (over mail or in
HDDS-214). As voted on by the community earlier, the Ozone release will be
independent of Hadoop releases.

Thanks a lot,
Marton Elek




-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org





Re: [DISCUSS] Alpha Release of Ozone

2018-08-06 Thread 俊平堵
+1. Good to see the progress here and would like to try Ozone on real
cluster.

Thanks,

Junping

2018-08-07 1:34 GMT+08:00 Elek, Marton :

> Hi All,
>
> I would like to discuss creating an Alpha release for Ozone. The core
> functionality of Ozone is complete, but there are two missing features,
> Security and HA; work on these features is progressing in the HDDS-4 and
> HDDS-151 branches. Right now, Ozone can handle millions of keys and has a
> Hadoop-compatible file system, which allows applications like Hive, Spark,
> and YARN to use Ozone.
>
> Having an Alpha release of Ozone will help in getting some early feedback
> (this release will be marked as an Alpha -- and not production ready).
>
> Going through a complete release cycle will help us flesh out the Ozone
> release process, update user documentation and nail down deployment models.
>
> Please share your thoughts on the Alpha release (over mail or in
> HDDS-214). As voted on by the community earlier, the Ozone release will be
> independent of Hadoop releases.
>
> Thanks a lot,
> Marton Elek
>
>
>
>
> -
> To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org
>
>


RE: [VOTE] Merge Storage Policy Satisfier (SPS) [HDFS-10285] feature branch to trunk

2018-08-06 Thread surendra lilhore
+1, looking forward to seeing SPS in coming releases.


Regards,
Surendra

-Original Message-
From: Uma Maheswara Rao G [mailto:hadoop@gmail.com] 
Sent: 01 August 2018 14:38
To: hdfs-dev@hadoop.apache.org
Subject: [VOTE] Merge Storage Policy Satisfier (SPS) [HDFS-10285] feature 
branch to trunk

Hi All,



 From the positive responses in the JIRA discussion and no objections on the 
DISCUSS thread below [1], I am converting it to a voting thread.



 Over the last couple of weeks we have spent time testing the feature, and so far 
it is working fine. Surendra uploaded a test report to HDFS-10285:  [2]



 In this phase, we only provide the option to run SPS outside of the Namenode; as 
a next phase we will continue to discuss and work on enabling it as an internal 
SPS, as explained below. We have a clean QA report on the branch, and if any 
static analysis tool comments are triggered later while this thread is running, 
we will make sure to fix them before the merge. We have committed, and continue 
to improve, the code on trunk. Please refer to HDFS-10285 for the discussion 
details.



 This has been a long effort and we're grateful for the support we've received 
from the community. In particular, thanks to Andrew Wang, Anoop Sam John, Anu 
Engineer, Chris Douglas, Daryn Sharp, Du Jingcheng, Ewan Higgs, Jing Zhao, Kai 
Zheng, Rakesh R, Ramkrishna, Surendra Singh Lilhore, Thomas Demoor, Uma 
Maheswara Rao G, Vinayakumar, Virajith, Wei Zhou, and Yuanbo Liu. Without these 
members' efforts, this feature might not have reached this state.



To start with, here is my +1

The vote will end on 6th Aug.



Regards,

Uma

[1]  https://s.apache.org/bhyu
[2]  https://s.apache.org/AXvL


On Wed, Jun 27, 2018 at 3:21 PM, Uma Maheswara Rao G 
wrote:

> Hi All,
>
>   After long discussions (offline and on JIRA) on SPS, we came to a 
> conclusion on the JIRA (HDFS-10285) that we will go ahead with the External 
> SPS merge in the first phase. In this phase the process will not be running 
> inside the Namenode.
>   We will continue the discussion on Internal SPS. The current code base 
> supports both the internal and external option. We have review comments 
> for Internal SPS which need some additional work for analysis and 
> testing etc. We will move the Internal SPS work under HDFS-12226 
> (Follow-on work for SPS in NN). We are working on the cleanup task HDFS-13076 
> for the merge.
> For more clarity on the Internal and External SPS proposal thoughts, 
> please refer to JIRA HDFS-10285.
>
> If there are no objections to this, I will go ahead with voting soon.
>
> Regards,
> Uma
>
> On Fri, Nov 17, 2017 at 3:16 PM, Uma Maheswara Rao G 
>  > wrote:
>
>> Update: We worked on the review comments and additional JIRAs above 
>> mentioned.
>>
>> >1. After the feedbacks from Andrew, Eddy, Xiao in JIRA reviews, we
>> planned to take up the support for recursive API support. HDFS-12291< 
>> https://issues.apache.org/jira/browse/HDFS-12291>
>>
>> We provided the recursive API support now.
>>
>> >2. Xattr optimizations HDFS-12225<https://issues.apache.org/jira/browse/HDFS-12225>
>> Improved this portion as well
>>
>> >3. Few other review comments already fixed and committed HDFS-12214<
>> https://issues.apache.org/jira/browse/HDFS-12214>
>> Fixed the comments.
>>
>> We are continuing to test the feature and it is working well so far. We also 
>> uploaded a combined patch and got a good QA report.
>>
>> If there are no further objections, we would like to go for the merge 
>> vote tomorrow. Please note that by default this feature will be disabled.
>>
>> Regards,
>> Uma
>>
>> On Fri, Aug 18, 2017 at 11:27 PM, Gangumalla, Uma < 
>> uma.ganguma...@intel.com> wrote:
>>
>>> Hi Andrew,
>>>
>>> > Great to hear. It'd be nice to define which use cases are met by the 
>>> > current version of SPS, and which will be handled after the merge.
>>> After the discussions in JIRA, we planned to support the recursive API 
>>> as well. The primary use case we planned for was HBase. Please 
>>> check the next point for use case details.
>>>
>>> > A bit more detail in the design doc on how HBase would use this feature 
>>> > would also be helpful. Is there an HBase JIRA already?
>>> Please find the usecase details at this comment in JIRA:
>>> https://issues.apache.org/jira/browse/HDFS-10285?focusedComm
>>> entId=16120227&page=com.atlassian.jira.plugin.system.issueta
>>> bpanels:comment-tabpanel#comment-16120227
>>>
>>> >I also spent some more time with the design doc and posted a few
>>> questions on the JIRA.
>>> Thank you for the reviews.
>>>
>>> To summarize the discussions in JIRA:
>>> 1. After the feedbacks from Andrew, Eddy, Xiao in JIRA reviews, we 
>>> planned to take up the support for recursive API support. 
>>> HDFS-12291< https://issues.apache.org/jira/browse/HDFS-12291> 
>>> (Rakesh started the work on it) 2. Xattr optimizations 
>>> HDFS-12225 (Patch 
>>> available) 3. Few other review comments already fixed and committed 
>>> HDFS-12214< https://issues.apache.org/jira/browse/HDFS-12214>
>